Digital rights

  • #UN75: ‘Moving forward, the UN must remain accessible through virtual platforms’

    Laura O'Brien

    In commemoration of the 75th anniversary of the United Nations (UN), CIVICUS is holding discussions with activists, advocates and civil society practitioners about the roles the UN has played so far, the successes it has achieved and the challenges it faces going forward. CIVICUS speaks with Laura O'Brien, UN Advocacy Officer at Access Now, a civil society organisation whose mission is to defend and extend the digital rights of users at risk around the world. Access Now fights for human rights in the digital age by combining direct technical support, comprehensive policy work, global advocacy, grassroots grantmaking, legal interventions and convenings such as RightsCon.

    How well suited is the UN's founding charter to the internet age?

    For years, civil society has encouraged the UN to modernise its operations so as to remain relevant in the digital age. In 2020, the UN was confronted with a harsh reality. The international organisation was forced to do most of its work online, while trying to reach the global community in a meaningful way and advance international cooperation in the midst of a global health crisis, systemic racism, climate change and rising authoritarianism. Commemorating the UN's 75th anniversary by returning to its founding charter, a document centred on the inherent dignity of the human being, could not be more crucial.

    The UN Charter was drafted long before the internet existed. Yet its overarching vision remains consistent with the universal nature of the internet, which at its best enables the unlimited creation of knowledge societies grounded in fundamental human rights, while amplifying the need to mitigate risks, not only through sovereign means but also through international cooperation. Guided by the principles of the UN Charter, the Declaration issued to mark the UN's 75th anniversary commits to strengthening digital cooperation worldwide. Through this formal commitment, the UN has finally taken on board the transformative impact of digital technologies on our daily lives, paving the way, or better, as the UN Secretary-General put it, setting out a ‘roadmap’ to guide us through the promises and perils of the digital age.

    While world leaders have recognised the need to listen to ‘the peoples’, as the preamble of the UN Charter puts it, civil society continues to remind those same leaders to listen more actively. Given its mission to extend and defend the fundamental rights of all people, civil society remains a key force in advancing the accountability of all stakeholders and ensuring transparency in often opaque multilateral processes.

    What challenges have you encountered in your interactions with the UN system, and how have you addressed them?

    I started in my public-facing role as Access Now's advocate at the UN just a few months before the COVID-19 lockdown here in New York. In that sense, in my new role I had to take on the challenges civil society as a whole was facing at the time: how to ensure that civil society actors, in all their diversity, could participate meaningfully in UN debates as the organisation moved its operations into the virtual world? At the time, we feared that the exceptional measures used to fight the pandemic would be used to restrict civil society's access and its opportunities to participate in UN forums. So we mobilised. Several civil society organisations, including CIVICUS, worked together to set out principles and recommendations for the UN to ensure the inclusion of civil society in its discussions during the pandemic and beyond. This helped us work together to present a unified position on the importance of stakeholder participation and to remind the UN to put in place adequate protections to guarantee the accessibility of online platforms, as well as sufficient safeguards to protect the security of those participating virtually.

    What is currently not working and should change, and how is civil society working to bring about that change?

    2020 was a year of humbling critical self-reflection, both individually and collectively. Now more than ever, the world is realising that a state-centric model will not lead to a promising future. Problems faced by one part of the world have consequences for the whole world. The decisions we make now, not least regarding digital technologies, will have an impact on future generations. As the world recovers from the events of 2020, we need global leaders to learn from the experience and keep engaging in this critical reflection. Solving global problems requires interdisciplinary action that respects and protects rights holders from diverse and intersecting backgrounds. We simply cannot keep operating and handling these issues from the top down. Indeed, threats such as disinformation often originate at the top.

    All over the world, civil society is mobilising on the front lines of global campaigns that seek to raise awareness of the problems we face today and their impact on future generations, while advocating for accountability in national, regional and international forums. From condemning internet shutdowns through #KeepItOn to questioning the rollout of digital identity programmes around the world through #WhyID, we work to inform, monitor, measure and provide rights-respecting policy recommendations, based on our diverse interactions with the people most at risk.

    What do you see as the main weaknesses of the current global multilateral system, and what lessons can be drawn from the COVID-19 pandemic?

    The global multilateral system must stop operating and treating global problems in a disconnected way. This requires not only a better networked multilateralism (across the UN system, in both New York and Geneva, and including regional organisations and financial institutions, among others) but also a more interdisciplinary approach to global problems. For example, available research suggests that more than 90 per cent of the Sustainable Development Goals are linked to international human rights and labour standards. Why, then, do international actors keep raising these goals only in connection with development debates and not in connection with human rights?

    Many lessons can be drawn from the pandemic to promote more inclusive international cooperation. In 2020, the UN woke up to the benefits of internet connectivity: reaching more diverse voices around the world. People who, due to countless barriers, are generally unable to physically access the UN's Geneva and New York forums now had the possibility of contributing meaningfully to UN debates via the internet. At the same time, however, the shift online also led to the UN's formal recognition of the serious consequences for the four billion people who are not connected to the internet. These people may face various barriers resulting from the digital divide and insufficient digital literacy resources, or remain offline because of the imposition of targeted internet shutdowns.

    Moving forward, the UN should continue to provide access to its debates through accessible virtual platforms. Just as the UN is designed to facilitate interactions among states, civil society would benefit from having equally safe and open spaces at its disposal to connect. Unfortunately, too many communities continue to be marginalised and vulnerable. People often face reprisals when they raise their voices and share their stories across borders. We strive to create that kind of open civic forum with RightsCon, the world's leading summit on human rights in the digital age, and other similar events. In July 2020, RightsCon Online brought together 7,681 participants from 157 countries in a virtual summit. The organisers overcame cost and access barriers by launching a Connectivity Fund that provided direct financial support so participants could get online and take part. Such convenings should be seen as integral not only to internet governance but also to realising the UN's three pillars (development, human rights, and peace and security) in the digital age. When conducted inclusively and securely, online participation offers the possibility of increasing the number and diversity of participants on the platform and removes barriers related to travel and resource constraints.

    Overall, the international community must learn the lessons of 2020. We must work in solidarity to promote open, inclusive and meaningful international cooperation for a prosperous future for all.

    Get in touch with Access Now through its website or Facebook page, and follow @accessnow and @lo_brie on Twitter.

  • HATE SPEECH: ‘The fact that this is how online platforms are supposed to work is a big part of the problem’

    Brandi Geurkink

    As part of our 2019 thematic report, we are interviewing civil society activists, leaders and experts about their experience of facing backlash by anti-rights groups. CIVICUS speaks with Brandi Geurkink, European campaigner at the Mozilla Foundation, a non-profit corporation based on the conviction that the internet is a global public resource that must remain open and accessible to all. The Mozilla Foundation seeks to fuel a movement for a healthy internet by supporting a diverse group of fellows working on key internet issues, connecting open internet leaders at events such as MozFest, publishing critical research in the Internet Health Report and rallying citizens around advocacy issues that connect the wellbeing of the internet directly to everyday life.

    The average internet user probably associates Mozilla with Firefox and doesn’t know that there is also a Mozilla Foundation. Can you tell us what the Mozilla Foundation is and what it does?

    I get this question asked a lot. When I told my family I was working for Mozilla, they said, ‘wait, you are not a software professional, what are you doing there?’ What makes Mozilla different from other software developers is that it is a non-profit tech company. Mozilla is the creator of Firefox, which is a web browser, but an open source one. It also has users’ privacy at its core. And all of Mozilla’s work is guided by the Mozilla Manifesto, which provides a set of principles for an open, accessible and safe internet, viewed as a global public resource.

    Profits that come from the Firefox browser are invested into the Mozilla Foundation, which is the Mozilla Corporation’s sole shareholder, and our mission is to build an open and healthy web. Mozilla creates and enables open-source technologies and communities that support the Manifesto’s principles; creates and delivers consumer products that represent the Manifesto’s principles; uses the Mozilla assets – intellectual property such as copyrights and trademarks, infrastructure, funds and reputation – to keep the internet an open platform; promotes models for creating economic value for the public benefit; and promotes the Mozilla Manifesto principles in public discourse and within the internet industry.

    Mozilla promotes an open and healthy web through a variety of activities. For instance, we have a fellowships programme to empower and connect leaders from the internet health movement. This programme supports people doing all sorts of things, from informing debates on how user rights and privacy should be respected online to creating technologies that will enable greater user agency. Mozilla also produces an annual report, the Internet Health Report, and mobilises people in defence of a healthy internet. A lot of this work takes the form of campaigning for corporate accountability; we seek to influence the way in which tech companies are thinking about privacy and user agency within their products and to mobilise consumers so that they demand better behaviour and more control over their online lives.

    How do you define a healthy internet?

    A healthy internet is a place where people can safely and freely communicate and participate. For this to happen, the internet must truly be a global public resource rather than something that’s owned by a few giant tech companies, who are then in control of who participates and how they do it. Some key components of a healthy web are openness, privacy and security. We place a lot of emphasis on digital inclusion, which determines who has access; web literacy, which determines who can succeed online; and decentralisation, which focuses on who controls the web – ideally, many rather than just a few.

    The internet is currently dominated by eight American and Chinese companies: Alphabet (Google’s parent company), Alibaba, Amazon, Apple, Baidu, Facebook, Microsoft and Tencent. These companies and their subsidiaries dominate all layers of the digital world, from search engines, browsers and social media services to core infrastructure like undersea cables and cloud computing. They built their empires by selling our attention to advertisers, creating new online marketplaces and designing hardware and software that we now cannot do without. Their influence is growing in both our private lives and public spaces.

    What’s wrong with giant tech companies, and why would it be advisable to curb their power?

    A lot of the problems that we see online are not ‘tech’ problems per se – they’re sociopolitical problems that are amplified, and in some cases incentivised, to spread like wildfire and reach more people than ever before. When it comes to disinformation, for instance, a big part of the problem is the business models that guide the major social media platforms that we communicate on. The most successful tech companies have grown the way they have because they have monetised our personal data. They cash in on our attention in the form of ad revenue. When you think about how we use platforms designed for viral advertising as our primary method of social and political discourse – and increasingly our consumption of news – you can start to see why disinformation thrives on platforms like Facebook and Google.

    Another example of the ‘attention economy’ is YouTube, Google’s video platform, which recommends videos to users automatically, often leading us down ‘rabbit holes’ of increasingly extreme content in order to keep us hooked and watching. When content recommendation algorithms are designed to maximise attention to drive profit, they end up fuelling radical beliefs and often spreading misinformation.
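
    As a purely illustrative sketch of the mechanism described above (this is not YouTube's actual system; the video titles and the predicted_watch_time scores are invented for the example), a recommender that optimises only for expected watch time can be as simple as this:

        # Toy model of an attention-maximising recommender, for illustration only.
        # Not any real platform's algorithm; all data below is invented.
        from dataclasses import dataclass
        from typing import List

        @dataclass
        class Video:
            title: str
            predicted_watch_time: float  # minutes the model expects the viewer to stay

        def recommend(candidates: List[Video], k: int = 3) -> List[Video]:
            # Rank purely by expected watch time, with no regard for accuracy or harm.
            return sorted(candidates, key=lambda v: v.predicted_watch_time, reverse=True)[:k]

        if __name__ == "__main__":
            feed = [
                Video("Calm, factual explainer", 2.1),
                Video("Sensational claim, part 1", 7.8),
                Video("Even more extreme follow-up", 9.4),
            ]
            for video in recommend(feed, k=2):
                print(video.title)
            # When watch time is the only optimisation target, the most
            # provocative items float to the top of the recommendations.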

    What can be done about people using the internet to disseminate extremist ideas, hate speech and false information?

    I’m glad that you asked this because there is definitely a risk of censorship and regulation to fix this problem that actually results in violations of fundamental rights and freedoms. Worryingly, we’re seeing ‘fake news laws’ that use this problem as an excuse to limit freedom of speech and crack down on dissent, particularly in countries where civic space is shrinking and press freedom lacking. Mozilla fellow Renee di Resta puts this best when she says that freedom of reach is not the same as freedom of speech. Most of the big internet platforms have rules around what constitutes acceptable speech, which basically take the form of community guidelines. At the same time, platforms like Facebook, YouTube and Twitter give people the ability to amplify their ideas to a huge number of people. This is the ‘freedom of reach’, and increasingly we’re seeing that used to spread ideas that are at odds with the values that underpin peaceful and democratic societies, like equality and human rights.

    I think that it’s important to acknowledge that the business models of major technology platforms create the perfect storm for the manipulation of users. Disinformation and hate speech are content designed to appeal to emotions such as fear, anger and even humour. Combine this with the ability to target specific profiles of people in order to manipulate their ideas, and these platforms become the perfect place for such ideas to take hold. Once purveyors of disinformation have gained enough of a following, they can comfortably move offline and mobilise these newly formed communities, which is something we’re seeing more and more of. It’s this freedom of reach problem that platforms have yet to grapple with, maybe because it’s at odds with the very way that they make money. The challenge is to come up with mechanisms that reduce, on the one hand, the amplification of anti-rights ideas and hate speech and, on the other, the danger of censorship and discrimination against certain types of legitimate discourse.

    There has been a lot of controversy about how social media platforms are, or are not, dealing with misinformation. Do you think fact-checking is the way to go?

    Responsible reporting and factual information are crucial for people to make informed choices, including about who should govern them; that is why fighting misinformation while taking care to protect free speech is key. Among the things that can be done about misinformation, it is worth mentioning the verification of advertisers, as well as improved monitoring tools to detect bots and check facts. These are things that, if implemented correctly, would have an impact on these issues, and not just at election time.

    But the critical place where platforms are currently failing to live up to their commitments is around transparency. There must be greater transparency into how people use platforms like Facebook and Google to pay for ads that are intended to manipulate political discourse. At the same time, we must ensure that these companies are open about how content monitoring happens on platforms and that there are redress policies in place for people whose content has been wrongfully removed or deleted. Specific attention should be paid to the situation of fragile democracies, where disinformation can be more harmful because of the absence or limited presence of independent media.

    There have been election campaigns plagued by disinformation tactics in many different places, from India to Brazil. In response to public pressure, Facebook expressed a commitment to provide better transparency around how their platform is used for political advertisement so that sophisticated disinformation campaigns can be detected and understood and ultimately prevented. But the transparency tools that the company has released are largely insufficient. This has been repeatedly verified by independent researchers. There is a big disconnect between what companies say in public regarding what they intend to do or have done to prevent disinformation and the actual tools they put out there to do the job. I think Facebook should focus on creating tools that can actually get the job done.

    And besides what the companies running the social media platforms are or are not doing, there have been independent initiatives that seem to have worked. A tactic that disinformation campaigns use is the repurposing of content, for instance using a photo that was taken in a different place and time, or sharing an old article out of context to spread the rumour that something new has just happened when it is actually something else entirely that was reported five years ago. In response to this, The Guardian came up with a brilliant solution: when someone shares one of its articles that is over 12 months old on Twitter or Facebook, a yellow sign automatically appears on the shared image stating that the article is over 12 months old. The notice also appears when you click on the article. This initiative was a proactive move from The Guardian to empower people to think more critically about what they are seeing. We need many more initiatives like this.
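
    As a rough sketch of how such a check might work (the 12-month threshold comes from the example above, but the function name, dates and wording are hypothetical and are not The Guardian's actual implementation):

        # Illustrative "old article" notice, similar in spirit to the approach
        # described above; not The Guardian's actual code.
        from datetime import date
        from typing import Optional

        def age_notice(published: date, shared_on: date, threshold_days: int = 365) -> Optional[str]:
            # Return a warning label if the article is older than the threshold.
            age_days = (shared_on - published).days
            if age_days > threshold_days:
                years = age_days // 365
                return f"This article is more than {years} year{'s' if years > 1 else ''} old"
            return None

        if __name__ == "__main__":
            label = age_notice(published=date(2014, 6, 3), shared_on=date(2019, 8, 1))
            if label:
                print(label)  # e.g. overlaid on the shared image and shown above the article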

    Are disinformation campaigns also plaguing European politics in the ways that we’ve seen in the USA and Brazil?

    Most definitely, which is why in the lead up to the 2019 European elections four leading internet companies – Facebook, Google, Twitter and Mozilla – signed the European Commission’s Code of Practice on Disinformation pledging to take specific steps to prevent disinformation from manipulating citizens of the European Union. This was basically a voluntary code of conduct, and what we saw when monitoring its implementation ahead of the European elections was that the platforms did not deliver what they promised to the European Commission in terms of detecting and acting against disinformation.

    Fortunately, ahead of the European Parliamentary elections we didn’t see election interference and political propaganda on the scale that has happened in the Philippines, for example, which is an excellent case study if you want to learn about disinformation tactics that were used very successfully. But we still have a big problem with ‘culture war debates’ that create an atmosphere of confusion, opening rifts and undermining trust in democratic processes and traditional institutions. Social media platforms have still not delivered on transparency commitments that are desperately needed to better understand what is happening.

    Civil society identified a case in Poland where pro-government Facebook accounts posed as elderly people or pensioners to spread government propaganda. Before the European elections and following an independent investigation, Facebook took down 77 pages and 230 fake accounts from France, Germany, Italy, Poland, Spain and the UK, which had been followed by an estimated 32 million people and generated 67 million interactions over the previous three months alone. These were mostly part of far-right disinformation networks. Among other things, they had spread a video that was seen by 10 million people, supposedly showing migrants in Italy destroying a police car, which was actually from an old movie, and a fake story about migrant taxi drivers raping white women in Poland. A UK-based disinformation network that was uncovered in March 2019 was dedicated to disseminating fake information on topics such as immigration, LGBTQI rights and religious beliefs.

    Of course this is happening all the time, and not only during elections, although elections are moments of particular visibility when a lot more than usual is at stake, so there seems to be a spike in the use of misinformation tactics around elections. This also tends to happen around other, particularly stressful situations, for example a terror attack or more generally any current event that draws people’s attention.

    Why do online dynamics favour the amplification of specific kinds of messages – i.e. messages of hate instead of a narrative of human rights?

    Internet platforms are designed to amplify certain types of content that are created to appeal to deep emotions, because their aim is to keep you on the platform as long as possible and make you want to share that content with friends who will also be retained as long as possible on the platform. The higher the numbers of people online and the longer they stay, the higher the number of ads that will be delivered, and the higher the ad revenue will be. What will naturally happen once these platforms are up and running is that people will develop content with a political purpose, and the dynamics around this content will be exactly the same.
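
    The relationship being described can be captured in a back-of-the-envelope calculation; the figures and the per-ad rate below are arbitrary placeholders, used only to show how revenue scales with audience size and time spent:

        # Back-of-the-envelope model of ad-funded platform revenue.
        # All figures are arbitrary placeholders, for illustration only.

        def daily_ad_revenue(active_users: int, minutes_per_user: float,
                             ads_per_minute: float, revenue_per_ad: float) -> float:
            # Revenue grows with how many people are online and how long they stay.
            return active_users * minutes_per_user * ads_per_minute * revenue_per_ad

        if __name__ == "__main__":
            baseline = daily_ad_revenue(1_000_000, 30, 0.5, 0.002)
            stickier = daily_ad_revenue(1_000_000, 45, 0.5, 0.002)  # same users, longer sessions
            print(f"baseline: ${baseline:,.0f}/day  vs  stickier content: ${stickier:,.0f}/day")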

    Some will say that users doing this are abusing internet platforms. I disagree: I think people doing this are using those platforms exactly how they were designed to be used, but for the purpose of spreading an extremist political discourse, and the fact that this is how platforms are supposed to work is indeed a big part of the problem. It does make a difference whether someone is trying to make money from users’ posts or the platform is just a space for people to exchange ideas. We need to understand that if we are not paying for the product, then we are the product. If nobody were trying to make money out of our online interactions, there would be a higher chance of online interactions being more similar to interactions happening anywhere else, with people exchanging ideas more naturally rather than trying to catch each other’s attention by trying to elicit the strongest possible reactions.

    Does it make sense for us to keep trying to use the internet to have reasonable and civilised political conversations, or is it not going to happen?

    I love the internet, and so I think it’s not an entirely hopeless situation. The fact that the attention economy, combined with the growing power of a handful of tech companies, drives the way we use the internet is really problematic, but at the same time there is a lot of work being done to think through what alternative business models for the internet could look like, and increasingly regulators and internet users are realising that the current model is really broken. A fundamental question worth asking is whether it is possible to balance the desire to maximise ad revenue, and therefore people’s time spent on social media, with social responsibility. I think that companies as big as Google or Facebook have a duty to invest in social responsibility even if it has a negative impact on their revenue or requires a level of transparency and accountability that frightens them. Responsibility implies, among other things, getting people’s consent to use their data to determine what they see online, and giving users insight into when and how choices are being made about what they see.

    You may wonder, ‘why would they do that?’. Well, it’s interesting. The CEO of YouTube, Susan Wojcicki, recently published a blog post saying that the spread of harmful content on YouTube is more of a revenue risk for the company because it damages their reputation. I think that there is an element of reputational damage, but the much bigger risk that these companies face is policy-makers cracking down on these platforms and their ability to continue operating as usual without greater accountability. For instance, the European code of practice on disinformation was self-regulatory; we have seen at least in this case that the platforms that committed to the Code didn’t deliver tools that were sufficient to provide greater political ad transparency, and they are still not held accountable for this. Does this example mean that policy-makers will be under greater pressure to regulate the online space by mandating transparency instead of requesting it? These are the sort of conversations that should define new approaches to dealing with harmful content online in order to make sure it remains a positive force in our lives.

    Get in touch with the Mozilla Foundation through its website, and follow @mozilla and @bgeurkink on Twitter.

  • INNOVATION: ‘Conventional human rights structures and practices are no longer optimal or sufficient’

    Ed Rekosh

    CIVICUS speaks with Edwin Rekosh, co-founder and managing partner of Rights CoLab, about the effects of the emergence of digital infrastructure on civil society and the importance of innovation and digital rights. Rights CoLab is a multinational collaborative organisation that seeks to develop bold strategies to advance human rights across the fields of civil society, technology, business and finance.

    What does Rights CoLab do?

    Rights CoLab generates experimental, collaborative strategies to address current human rights challenges from a systemic perspective. In particular, we study and facilitate new ways of organising civic engagement and of leveraging markets to bring about transformational change.

    We see an opportunity to sustain civic engagement by drawing on trends outside the traditional philanthropic space. For example, we are interested in organisational models drawn from social enterprise, which can generate commercial revenue to support operations. We are also interested in the use of technology to reduce costs and achieve civil society goals without a formal organisational structure, for instance by running a website or an app. In addition, we are studying generational change in the way young people think about their careers, with a growing number seeking working lives that blend non-profit and for-profit career goals. We believe it is imperative to develop more effective ways of collaborating, including across borders, professional perspectives and areas of expertise.

    Among the challenges we seek to address are the resurgence of authoritarianism and populist politics, which have reinforced the emphasis on national sovereignty and the demonisation of local civil society organisations (CSOs), perceived as agents of hostile foreign values and interests. We also seek to address shifting geopolitical realities that are undermining the human rights infrastructure built over the past half-century, as well as the long-term legacies of colonial power dynamics. Finally, we seek to develop new approaches to limit the negative human rights impact of growing corporate power, which the pandemic has particularly aggravated.

    What was the inspiration behind the founding of Rights CoLab?

    The decision to create Rights CoLab started from the premise that the human rights field has reached a stage of maturity, with many challenges that raise questions about structures and practices that have become conventional but may no longer be optimal or sufficient.

    I was a human rights lawyer who had moved from legal practice at a large law firm to working for a human rights organisation in Washington, DC. The experience of running a project in Romania in the early 1990s completely transformed the way I viewed human rights and my role as an American lawyer. I began working hand in hand with local CSOs, playing a key role as a behind-the-scenes supporter and civil society connector, linking CSOs to one another and to resources, and supporting the implementation of other solidarity-based strategies.

    Soon afterwards, I founded and then chaired PILnet, a global network of public interest and private sector lawyers working in the civil society space. By the time I decided to step down from that role, I had become preoccupied with the closing of civic space I saw happening around me, which particularly affected the work we were doing in Russia and China. I eventually reconnected with Paul Rissman and Joanne Bauer, the two other co-founders of Rights CoLab, and we began comparing notes on our respective concerns and ideas about the future of human rights. The three of us created Rights CoLab to continue that conversation, examining current human rights challenges from three very different perspectives. We wanted to create a space where we could pursue this dialogue and bring in others to foster experimentation with new approaches.

    To what extent has the civil society arena changed in recent years as a result of the emergence of digital infrastructure?

    It has changed dramatically. One of the main consequences of the emergence of digital infrastructure is that the public sphere has expanded in multiple ways. The role of the media is less constrained by borders and there is far less intermediation through editorial control. This represents both an opportunity and a threat for human rights. Individuals and groups can influence public discourse with lower barriers to entry, but on the other hand, the public sphere is no longer regulated by governments in predictable ways, which erodes traditional means of accountability and makes it difficult to guarantee a level playing field for the marketplace of ideas. Digital technology also makes it possible to express solidarity across borders in ways far less constrained by some of the practical limitations of the past. In short, although new threats to human rights flow from the emergence of digital infrastructure, digital tools also offer opportunities.

    How essential are digital rights and digital infrastructure to the work of civil society?

    In many ways, digital rights are secondary to the structures, practices and values of civil society. Civil society flows intrinsically from respect for human dignity, the creative spirit of human enterprise and the politics of solidarity. The ways people organise among themselves to engage with the world around them depend primarily on socially oriented values, skills and practices. Digital technology can only provide tools, which do not intrinsically possess those characteristics. In that sense, digital technology is neither necessary nor sufficient for organising civil society. Nevertheless, digital technologies can enhance civil society organising, both by harnessing some of the new opportunities inherent in the emerging digital infrastructure and by securing the digital rights we need to avert the negative human rights consequences of that emerging infrastructure.

    We strive to identify civil society approaches that can help solve these problems. One example is Chequeado, a non-profit Argentine media outlet dedicated to verifying public discourse, countering disinformation and promoting access to information in Latin American societies. Chequeado, which exists as a technology platform and an app, was able to adapt quickly to respond to the COVID-19 pandemic by developing a fact-checking dashboard. It aims to dispel disinformation about the origins, transmission and treatment of COVID-19, and to counter the disinformation that leads to ethnic discrimination and growing distrust of science. So while it is essential to understand the potential uses of digital technology, it is equally important to stay focused on elements that have little to do with technology per se, such as values, solidarity and principled norms and institutions.

    How does Rights CoLab promote innovation in civil society?

    We pursue innovation within civil society along several dimensions: the way civil society groups organise themselves, including their basic structures and revenue models; the way they use technology; and the changes needed in the international civil society ecosystem to mitigate the negative effects of counterproductive power dynamics stemming from colonialism.

    For the first two dimensions, we have partnered with other resource hubs to co-create a geo-located map of case studies illustrating innovation in organisational forms and revenue models. We have developed a typology for this growing database of examples that emphasises alternatives to the traditional model for locally based civil society groups, in other words alternatives to cross-border charitable funding. With our partners, we are also developing training methodologies and communication strategies aimed at facilitating experimentation with, and wider adoption of, alternative models for structuring and financing civil society activities.

    Our effort to improve the international civil society ecosystem builds on a systems-change project we have launched called RINGO (Reimagining the International NGO). One of RINGO's main areas of focus is the intermediary role that large international CSOs play in relation to more local civic spaces. The hypothesis is that international CSOs can be either an obstacle or an enabler for a stronger local civil society, and that the way international civil society is currently organised, with dominant roles concentrated in the global north and west, needs to be rethought.

    RINGO comprises a social lab with 50 participants representing a wide range of CSO sizes and types, drawn from diverse geographical regions. Over a two-year process, the social lab will generate prototypes that can be tested with the aim of radically transforming the sector and the way we organise civil society globally. We hope the prototypes will yield valuable lessons that can be replicated or reformulated and scaled up. There are already many good practices, but there are also systemic dysfunctions that remain unresolved. So we are looking for new, more transformational practices, processes and structures. While we are not seeking utopia, we are seeking systemic change. That is why the process of inquiry with the social lab is vital: it allows us to dig deep into the fundamental problems paralysing the system, going beyond palliative and superficially attractive practices.

    Get in touch with Rights CoLab through its website and follow @rightscolab and @EdRekosh on Twitter.

  • ONLINE CIVIC SPACE: ‘We shouldn’t expect tech giants to solve the problems that they have created’

    Marek Tuszynski

    As part of our 2019 thematic report, we are interviewing civil society activists and leaders about their experiences of backlash from anti-rights groups and their strategies to strengthen progressive narratives and civil society responses. CIVICUS speaks to Marek Tuszynski, co-founder and creative director of Tactical Tech, a Berlin-based international civil society organisation that engages with citizens and civil society to explore the impacts of technology on society and individual autonomy. Founded in 2003, in a context where optimism about technology prevailed but focus was lacking on what specifically it could do for civil society, Tactical Tech uses its research findings to create practical solutions for citizens and civil society.

    Some time ago it seemed that the online sphere could offer civil society a new space for debate and action – until it became apparent that online civic space was being restricted too. What kinds of restrictions are you currently seeing online, and what's changed in recent years?

    Fifteen years ago, the digital space in a way belonged to the people who were experimenting with it. People were building that space using the available tools, there was a movement towards open source software, and activists were trying to build an online space that would empower people to exercise democratic freedoms, and even build democracy from the ground up. But those experimental spaces became gentrified, appropriated, taken over and assimilated into other existing spaces. In that sense, digital space underwent processes very similar to all other spaces that offer alternatives and in which people are able to experiment freely. That space shrank massively, and free spaces were replaced by centralised technology and started to be run as business models.

    For most people, including civil society, using the internet means resorting to commercial platforms and systems such as Google and Facebook. The biggest change has been the centralisation of what used to be a distributed system where anybody was able to run their own services. Now we rely on centralised, proprietary and controlled services. And those who initially weren’t very prevalent, like state or corporate entities, are now dominating. The difference is also in the physical aspect, because technology is becoming more and more accessible and way cheaper than it used to be, and a lot of operations that used to require much higher loads of technology have become affordable for a variety of state and non-state entities.

    The internet became not just a corporate space, but also a space for politics and confrontation on a much larger scale than it was five or ten years ago. Revelations coming from whistleblowers such as Edward Snowden and scandals such as those with Facebook and Cambridge Analytica are making people much more aware of what this space has become. It is now clear that it is not all about liberation movements and leftist politics, and that there are many groups on the other end of the political spectrum that have become quite savvy in using and abusing technology.

    In sum, changes are being driven by both economic and, increasingly, political factors. What makes them inescapable is that technology is everywhere, and it has proliferated so fast that it has become very hard to imagine going back to doing anything without it. It is also very hard, if not impossible, to compartmentalise your life and separate your professional and personal activities, or your political and everyday or mundane activities. From the point of view of technology, you always inhabit the same, single space.

    Do people who use the internet for activism rather than, say, to share cat pictures, face different or specific threats online?

    Yes, but I would not underestimate the cat pictures, as insignificant as they may seem to people who are using these tools for political or social work. It is the everyday user who defines the space that others use for activism. The way technologies are used by people who use them for entertainment ends up defining them for all of us.

    That said, there are indeed people who are much more vulnerable, whose exposure or monitoring can restrict their freedoms and be dangerous for them – not only physically but also psychologically. These people are exposed to potential interception and surveillance aimed at finding out what they are doing and how, and also face a different kind of threat, in the form of online harassment, which may impact on their lives well beyond their political activities, as people tend to be bullied not only for what they do, but also for what or who they are.

    There seems to be a very narrow understanding of what is political. In fact, regardless of whether you consider yourself political, very mundane activities and behaviours can be seen by others as political. So it is not just about what you directly produce in the form of text, speech, or interaction, but also about what can be inferred from these activities. Association with organisations, events, or places may become equally problematic. The same happens with the kind of tools you are using and the times you are using them, whether you are using encryption and why. All these elements that you may not be thinking of may end up defining you as a person who is trying to do something dangerous or politically controversial. And of course, many of the tools that activists use and need, like encryption, are also used by malicious actors, because technology is not intrinsically good or bad, but is defined by its users. You can potentially be targeted as a criminal just for using – for activism, for instance – the same technologies that criminals use.

    Who are the ‘vulnerable minorities’ you talk about in your recent report on digital civic space, and why are they particularly vulnerable online?

    Vulnerable minorities are precisely those groups that face greater risks online because of their gender, race or sexual orientation. Women generally are more vulnerable to online harassment, and politically active women even more so. Women journalists, for instance, are subject to more online abuse than male journalists when speaking about controversial issues or voicing opinions. They are targeted because of their gender. This is also the case for civil society organisations (CSOs) focused on women’s rights, which are being targeted both offline and online, including through distributed denial of service (DDoS) attacks, website hacks, leaks of personal information, fabricated news, direct threats and false reports against Facebook content leading to the suspension of their pages. Digital attacks sometimes translate into physical violence, when actors emboldened by the hate speech promoted on online platforms end up posing serious threats not only to people’s voices but also to their lives.

    But online spaces can also be safe spaces for these groups. In many places the use of internet and online platforms creates spaces where people can exercise their freedoms of expression and protest. They can come out representing minorities, be it sexual or otherwise, in a way they would not be able to in the physical places where they live, because it would be too dangerous or practically impossible. They are able to exercise these freedoms in online spaces because these spaces are still separate from the places where they live. However, there is a limited understanding of the fact that this does not make these spaces neutral. Information can be leaked, shared, distorted and weaponised, and used to hurt you when you least expect it.

    Still, for many minorities, and especially for sexual minorities, social media platforms are the sole place where they can exercise their freedoms, access information and actually be who they are, and say it aloud. At the same time, they technically may retain anonymity but their interests and associations will give away who they are, and this can be used against them. These outlets can create an avenue for people to become political, but that avenue can always be closed down in non-democratic contexts, where those in power can decide to shut down entire services or cut off the internet entirely.

    Is this what you mean when you refer to social media as ‘a double-edged sword’? What does this mean for civil society, and how can we take advantage of the good side of social media?

    Social media platforms are a very important tool for CSOs. Organisations depend on them to share information, communicate and engage with their supporters, organise events, measure impact and response based on platform analytics, and even raise funds. But the use of these platforms has also raised concerns regarding the harvesting of data, which is analysed and used by the corporations themselves, by third-party companies and by governments.

    Over the years, government requests for data from and about social media users have increased, and so have arrests and criminalisation of organisations and activists based on their social media behaviour. So again, what happens online does not stay online – in fact, it sometimes has serious physical repercussions on the safety and well-being of activists and CSO staff. Digital attacks and restrictions affect individuals and their families, and may play a role in decisions on whether to continue to do their work, change tactics, or quit. Online restrictions can also cause a chilling effect on the civil society that is at the forefront of the promotion of human rights and liberties. For these organisations, digital space can be an important catalyst for wider civil political participation in physical spaces, so when it is attacked, restricted, or shrunk, it has repercussions for civic participation in general.

    Is there some way that citizens and civil society can put pressure on giant tech companies to do the right thing?

    When we talk about big social media actors we think of Facebook, Twitter, Instagram and WhatsApp – three of which are in fact part of Facebook – and we don’t think of Google because it is not seen as social media, even though it is more pervasive, it is everywhere, and it is not even visible as such.

    We shouldn’t expect these companies to solve the problems they have created. They are clearly incapable of addressing the problems they cause. One of these problems is online harassment and abuse of the rules. They have no capacity to clean the space of certain activities, and if they try to do so, they will censor any content that resembles something dangerous, even if it isn’t, so as not to risk being accused of supporting radical views.

    We expect tech giants to be accountable and responsible for the problems they create, but that’s not very realistic, and it won’t just happen by itself. When it comes to digital-based repression and the use of surveillance and data collection to impose restrictions, there is a striking lack of accountability. Tech platforms depend on government authorisation to operate, so online platforms and tech companies are slow to react, if they do at all, in the face of accusations of surveillance, hate speech, online harassment and attacks, especially when powerful governments or other political forces are involved.

    These companies are not going to do the right thing if they are not encouraged to do so. There are small steps as well as large steps one can take, starting with deciding how and when to use each of these tools, and whether to use them at all. At every step of the way, there are alternatives that you can use to do different things – for one, you can decentralise the way you interact with people and not use one platform for everything.

    Of course, that’s not the whole problem, and the solution cannot be based on individual choices alone. A more structural solution would have to take place at the level of policy frameworks, as can be seen in Europe, where regulations have been put in place and a framework is shaping up that requires large companies to take more responsibility and to make clear who is benefiting from their access to personal information.

    What advice can you offer for activists to use the internet more safely?

    We have a set of tools and very basic steps to enable people who don’t want to leave these platforms, who depend on them, to understand what it is that they are doing, what kind of information they leave behind that can be used to identify them and how to avoid putting into the system more information than is strictly necessary. It is important to learn how to browse the internet privately and safely, how to choose the right settings on Google and Facebook and take back control of your data and your activity in these spaces.

    People don’t usually understand how much about themselves is online and can be easily found via search engines, and the ways in which by exposing themselves they also expose the people who they work with and the activities they do. When using the internet we reveal where we are, what we are working on, what device we are using, what events we are participating in, what we are interested in, who we are connecting with, the phone providers we use, the visas we apply for, our travel itineraries, the kinds of financial transactions we do and with whom, and so on. To do all kinds of things we are increasingly dependent on more and more interlinked and centralised platforms that share information with one another and with other entities, and we aren’t even aware that they are doing it because they use trackers and cookies, among other things. We are giving away data about ourselves and what we do all the time, not only when we are online, but also when others enter information about us, for instance when travelling.

    But there are ways to reduce our data trail, become more secure online and build a healthier relationship with technology. Some basic steps are to delete your activity as it is stored by search engines such as Google and switch to other browsers. You can delete unnecessary apps, switch to alternative apps for messaging, voice and video calls and maps – ideally to some that offer the same services you are used to, but that do not profit from your data – change passwords, declutter your accounts and renovate your social media profiles, separate your accounts to make it more difficult for tech giants to follow your activities, tighten your social media privacy settings, opt for private browsing (but still, be aware that this does not make you anonymous on the web), disable location services on mobile devices and do many other things that will keep you safer online.

    Another issue that activists face online is misinformation and disinformation strategies. In that regard, there is a need for new tactics and standards to enable civil society groups, activists, bloggers and journalists to react by verifying information and creating evidence based on solid information. Online space can enable this if we promote investigation as a form of engagement. If we know how to protect ourselves, we can make full use of this space, in which there is still room for many positive things.

    Get in touch with Tactical Tech through its website and Facebook page, or follow @Info_Activism on Twitter.
