As part of our 2019 thematic report, we are interviewing civil society activists, leaders and experts about their experience of facing backlash by anti-rights groups. CIVICUS speaks with Brandi Geurkink, European campaigner at the Mozilla Foundation, a non-profit corporation based on the conviction that the internet is a global public resource that must remain open and accessible to all. The Mozilla Foundation seeks to fuel a movement for a healthy internet by supporting a diverse group of fellows working on key internet issues, connecting open internet leaders at events such as MozFest, publishing critical research in the Internet Health Report and rallying citizens around advocacy issues that connect the wellbeing of the internet directly to everyday life.
The average internet user probably identifies Mozilla with Firefox and doesn’t know that there is also a Mozilla Foundation. Can you tell us what the Mozilla Foundation is and what it does?
I get asked this question a lot. When I told my family I was working for Mozilla, they said, ‘wait, you are not a software professional, what are you doing there?’ What makes Mozilla different from other software developers is that it is a non-profit tech company. Mozilla is the creator of Firefox, a web browser that is open source and has users’ privacy at its core. And all of Mozilla’s work is guided by the Mozilla Manifesto, which sets out principles for an open, accessible and safe internet, viewed as a global public resource.
Profits that come from the Firefox browser are invested into the Mozilla Foundation, which is the Mozilla Corporation’s sole shareholder, and our mission is to build an open and healthy web. Mozilla creates and enables open-source technologies and communities that support the Manifesto’s principles; creates and delivers consumer products that represent the Manifesto’s principles; uses the Mozilla assets – intellectual property such as copyrights and trademarks, infrastructure, funds and reputation – to keep the internet an open platform; promotes models for creating economic value for the public benefit; and promotes the Mozilla Manifesto principles in public discourse and within the internet industry.
Mozilla promotes an open and healthy web through a variety of activities. For instance, we have a fellowships programme to empower and connect leaders from the internet health movement. This programme supports people doing all sorts of things, from informing debates on how user rights and privacy should be respected online to creating technologies that will enable greater user agency. Mozilla also produces an annual report, the Internet Health Report, and mobilises people in defence of a healthy internet. A lot of this work takes the form of campaigning for corporate accountability; we seek to influence the way in which tech companies are thinking about privacy and user agency within their products and to mobilise consumers so that they demand better behaviour and more control over their online lives.
How do you define a healthy internet?
A healthy internet is a place where people can safely and freely communicate and participate. For this to happen, the internet must truly be a global public resource rather than something that’s owned by a few giant tech companies, who are then in control of who participates and how they do it. Some key components of a healthy web are openness, privacy and security. We place a lot of emphasis on digital inclusion, which determines who has access; web literacy, which determines who can succeed online; and decentralisation, which focuses on who controls the web – ideally, many rather than just a few.
The internet is currently dominated by eight American and Chinese companies: Alphabet (Google’s parent company), Alibaba, Amazon, Apple, Baidu, Facebook, Microsoft and Tencent. These companies and their subsidiaries dominate all layers of the digital world, from search engines, browsers and social media services to core infrastructure like undersea cables and cloud computing. They built their empires by selling our attention to advertisers, creating new online marketplaces and designing hardware and software that we now cannot do without. Their influence is growing in both our private lives and public spaces.
What’s wrong with giant tech companies, and why would it be advisable to curb their power?
A lot of the problems that we see online are not ‘tech’ problems per se – they’re sociopolitical problems that are amplified, and in some cases incentivised, to spread like wildfire and reach more people than ever before. When it comes to disinformation, for instance, a big part of the problem is the business models that guide the major social media platforms that we communicate on. The most successful tech companies have grown the way they have because they have monetised our personal data. They cash in on our attention in the form of ad revenue. When you think about how we use platforms designed for viral advertising as our primary method of social and political discourse – and increasingly our consumption of news – you can start to see why disinformation thrives on platforms like Facebook and Google.
Another example of the ‘attention economy’ is YouTube, Google’s video platform, which recommends videos to users automatically, often leading us down ‘rabbit holes’ of increasingly extreme content in order to keep us hooked and watching. When content recommendation algorithms are designed to maximise attention to drive profit, they end up fuelling radical beliefs and often spreading misinformation.
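To make that dynamic concrete, here is a toy sketch of an engagement-maximising recommender. Everything in it – the function names, the weights, the features – is hypothetical and made up for illustration; it is not YouTube’s actual algorithm. The point is simply that when the only objective is predicted watch time, content that scores high on emotional intensity floats to the top:

```python
# Toy illustration (not any real platform's algorithm): rank candidate
# videos purely by predicted watch time. Because emotionally charged or
# extreme content tends to score higher on predicted engagement,
# optimising this single objective drifts recommendations toward it.

def user_topic_affinity(user, video):
    # How interested this user is in the video's topic (0.0 if unknown).
    return user["topic_weights"].get(video["topic"], 0.0)

def predicted_watch_time(video, user):
    # Hypothetical engagement model; the weights are invented.
    score = 0.6 * video["past_avg_watch_minutes"]
    score += 0.3 * video["emotional_intensity"]   # outrage/fear boosts engagement
    score += 0.1 * user_topic_affinity(user, video)
    return score

def recommend(candidates, user, k=3):
    # Pure engagement ranking: no penalty for extremity or misinformation.
    return sorted(candidates,
                  key=lambda v: predicted_watch_time(v, user),
                  reverse=True)[:k]
```

With two otherwise identical videos, the one with higher emotional intensity is always recommended first – which is the ‘rabbit hole’ mechanism in miniature.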
What can be done about people using the internet to disseminate extremist ideas, hate speech and false information?
I’m glad that you asked this, because there is a real risk that censorship and regulation aimed at fixing this problem will themselves result in violations of fundamental rights and freedoms. Worryingly, we’re seeing ‘fake news laws’ that use this problem as an excuse to limit freedom of speech and crack down on dissent, particularly in countries where civic space is shrinking and press freedom is lacking. Mozilla fellow Renée DiResta puts this best when she says that freedom of reach is not the same as freedom of speech. Most of the big internet platforms have rules around what constitutes acceptable speech, which typically take the form of community guidelines. At the same time, platforms like Facebook, YouTube and Twitter give people the ability to amplify their ideas to a huge number of people. This is ‘freedom of reach’, and increasingly we’re seeing it used to spread ideas that are at odds with the values that underpin peaceful and democratic societies, like equality and human rights.
I think it’s important to acknowledge that the business models of major technology platforms create the perfect storm for the manipulation of users. Disinformation and hate speech are content designed to appeal to emotions such as fear, anger and even humour. Combine this with the ability to target specific profiles of people in order to manipulate their ideas, and platforms become the perfect place for these ideas to take hold. Once purveyors of disinformation have gained enough of a following, they can comfortably move offline and mobilise these newly formed communities, which is something we’re seeing more and more of. It’s this freedom-of-reach problem that platforms have yet to grapple with, maybe because it’s at odds with the very way they make money. The challenge is to come up with mechanisms that reduce, on the one hand, the amplification of anti-rights ideas and hate speech and, on the other, the danger of censorship and discrimination against legitimate discourse.
There has been a lot of controversy about how social media platforms are, or are not, dealing with misinformation. Do you think fact-checking is the way to go?
Responsible reporting and factual information are crucial for people to make informed choices, including about who should govern them; that is why fighting misinformation while taking care to protect free speech is key. Among the things that can be done about misinformation, it is worth mentioning advertiser verification, as well as improved monitoring tools to detect bots and check facts. If implemented correctly, these would have an impact on these issues, and not just at election time.
But the critical place where platforms are currently failing to live up to their commitments is around transparency. There must be greater transparency into how people use platforms like Facebook and Google to pay for ads that are intended to manipulate political discourse. At the same time, we must ensure that these companies are open about how content monitoring happens on platforms and that there are redress policies in place for people whose content has been wrongfully removed or deleted. Specific attention should be paid to the situation of fragile democracies, where disinformation can be more harmful because of the absence or limited presence of independent media.
There have been election campaigns plagued by disinformation tactics in many different places, from India to Brazil. In response to public pressure, Facebook expressed a commitment to provide better transparency around how their platform is used for political advertisement so that sophisticated disinformation campaigns can be detected and understood and ultimately prevented. But the transparency tools that the company has released are largely insufficient. This has been repeatedly verified by independent researchers. There is a big disconnect between what companies say in public regarding what they intend to do or have done to prevent disinformation and the actual tools they put out there to do the job. I think Facebook should focus on creating tools that can actually get the job done.
And besides what the companies running the social media platforms are or are not doing, there have been independent initiatives that seem to have worked. A tactic that disinformation campaigns use is the repurposing of content – for instance, using a photo that was taken in a different place and time, or sharing an old article out of context to spread the rumour that something new has just happened when in fact something else entirely was reported five years ago. In response to this, The Guardian came up with a brilliant solution: when someone shares one of its articles on Twitter or Facebook and the article is over 12 months old, a yellow notice automatically appears on the shared image stating that the article is over 12 months old. The notice also appears when you click on the article. This was a proactive move by The Guardian to empower people to think more critically about what they are seeing. We need many more initiatives like this.
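The mechanism behind such a notice is essentially a date comparison at share time. A minimal sketch, with all names hypothetical and not drawn from The Guardian’s actual implementation:

```python
from datetime import date, timedelta

# Hypothetical sketch of an "old article" notice: if the article being
# shared is more than 12 months old, return the text to stamp on the
# share preview; otherwise return None and share it normally.
def age_notice(published, today, threshold_days=365):
    age = today - published
    if age > timedelta(days=threshold_days):
        years = age.days // 365
        return f"This article is more than {years} year{'s' if years != 1 else ''} old"
    return None
```

The check is trivial, which is part of the appeal: a small, cheap design change that adds context exactly where repurposed content spreads.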
Are disinformation campaigns also plaguing European politics in the ways that we’ve seen in the USA and Brazil?
Most definitely, which is why in the lead-up to the 2019 European elections four leading internet companies – Facebook, Google, Twitter and Mozilla – signed the European Commission’s Code of Practice on Disinformation, pledging to take specific steps to prevent disinformation from manipulating citizens of the European Union. This was basically a voluntary code of conduct, and what we saw when monitoring its implementation ahead of the European elections was that the platforms did not deliver what they had promised to the European Commission in terms of detecting and acting against disinformation.
Fortunately, ahead of the European Parliamentary elections we didn’t see election interference and political propaganda on the scale that has happened in the Philippines, for example, which is an excellent case study if you want to learn about disinformation tactics that were used very successfully. But we still have a big problem with ‘culture war debates’ that create an atmosphere of confusion, opening rifts and undermining trust in democratic processes and traditional institutions. Social media platforms have still not delivered on transparency commitments that are desperately needed to better understand what is happening.
Civil society identified a case in Poland where pro-government Facebook accounts posed as elderly people or pensioners to spread government propaganda. Before the European elections and following an independent investigation, Facebook took down 77 pages and 230 fake accounts from France, Germany, Italy, Poland, Spain and the UK, which had been followed by an estimated 32 million people and generated 67 million interactions over the previous three months alone. These were mostly part of far-right disinformation networks. Among other things, they had spread a video that was seen by 10 million people, supposedly showing migrants in Italy destroying a police car, which was actually from an old movie, and a fake story about migrant taxi drivers raping white women in Poland. A UK-based disinformation network that was uncovered in March 2019 was dedicated to disseminating fake information on topics such as immigration, LGBTQI rights and religious beliefs.
Of course this is happening all the time, not only during elections. But elections are moments of particular visibility, when much more than usual is at stake, so there tends to be a spike in the use of misinformation tactics around them. The same tends to happen around other particularly stressful situations – a terror attack, for example, or more generally any current event that draws people’s attention.
Why do online dynamics favour the amplification of specific kinds of messages – i.e. messages of hate instead of a narrative of human rights?
Internet platforms are designed to amplify certain types of content that are created to appeal to deep emotions, because their aim is to keep you on the platform as long as possible and make you want to share that content with friends who will also be retained as long as possible on the platform. The higher the numbers of people online and the longer they stay, the higher the number of ads that will be delivered, and the higher the ad revenue will be. What will naturally happen once these platforms are up and running is that people will develop content with a political purpose, and the dynamics around this content will be exactly the same.
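The revenue logic described above is essentially a product of a few factors. A back-of-the-envelope sketch with entirely made-up numbers, just to show why time-on-platform is the variable everything optimises:

```python
# Toy model of ad-driven revenue (all figures hypothetical): income
# scales with both how many people are online and how long they stay,
# so maximising attention maximises revenue.
def daily_ad_revenue(active_users, avg_minutes_per_user,
                     ads_per_minute, revenue_per_ad):
    impressions = active_users * avg_minutes_per_user * ads_per_minute
    return impressions * revenue_per_ad
```

Doubling either the audience or the average session length doubles revenue, which is the structural incentive the interview describes.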
Some will say that users doing this are abusing internet platforms. I disagree: I think people doing this are using those platforms exactly as they were designed to be used, but for the purpose of spreading an extremist political discourse, and the fact that this is how platforms are supposed to work is a big part of the problem. It does make a difference whether someone is trying to make money from users’ posts or the platform is just a space for people to exchange ideas. We need to understand that if we are not paying for the product, then we are the product. If nobody were trying to make money out of our online interactions, they would more closely resemble interactions anywhere else, with people exchanging ideas naturally rather than trying to catch each other’s attention by eliciting the strongest possible reactions.
Does it make sense for us to keep trying to use the internet to have reasonable and civilised political conversations, or is it not going to happen?
I love the internet, and so I think it’s not an entirely hopeless situation. The fact that the attention economy, combined with the growing power of a handful of tech companies, drives the way we use the internet is really problematic, but at the same time a lot of work is being done to think through what alternative business models for the internet could look like, and increasingly regulators and internet users are realising that the current model is broken. A fundamental question worth asking is whether it is possible to balance the desire to maximise ad revenue – and therefore people’s time spent on social media – with social responsibility. I think that companies as big as Google or Facebook have a duty to invest in social responsibility even if it has a negative impact on their revenue or requires a level of transparency and accountability that frightens them. Responsibility implies, among other things, getting people’s consent to use their data to determine what they see online, and giving users insight into when and how choices are being made about what they see.
You may wonder, ‘why would they do that?’ Well, it’s interesting. The CEO of YouTube, Susan Wojcicki, recently published a blog post saying that the spread of harmful content on YouTube is above all a revenue risk for the company because it damages its reputation. I think there is an element of reputational damage, but the much bigger risk these companies face is policy-makers cracking down on these platforms and on their ability to continue operating as usual without greater accountability. For instance, the European code of practice on disinformation was self-regulatory; at least in this case, the platforms that committed to the Code didn’t deliver tools sufficient to provide greater political ad transparency, and they are still not being held accountable for this. Does this mean that policy-makers will be under greater pressure to regulate the online space by mandating transparency instead of requesting it? These are the sorts of conversations that should define new approaches to dealing with harmful content online, in order to make sure the internet remains a positive force in our lives.