digital rights

  • #UN75: ‘Going forward, the UN must continue to provide access through virtual platforms’

    Laura O’Brien

    To commemorate the 75th anniversary of the founding of the United Nations (UN), CIVICUS is having conversations with activists, human rights defenders and civil society professionals about the roles the UN has played so far, the successes it has achieved and the challenges it faces going forward. CIVICUS speaks with Laura O’Brien, UN Advocacy Officer at Access Now, a civil society organisation whose mission is to defend and extend the digital rights of users at risk around the world. Access Now fights for human rights in the digital age through a combination of direct technical support, comprehensive policy work, global advocacy, grassroots financial support, legal interventions and convenings such as RightsCon.

    To what extent is the UN’s founding Charter adequate for the internet age?

    For years civil society has encouraged the UN to modernise its operations so as to remain relevant in the digital age. In 2020, the UN came up against a harsh reality. The international organisation was forced to carry out most of its work online, while trying to reach the global community in a meaningful way and to advance international cooperation in the midst of a global health crisis, systemic racism, climate change and rising authoritarianism. Commemorating the UN’s 75th anniversary by returning to its founding Charter – a document centred on the inherent dignity of the human person – could not be more crucial.

    The UN Charter was drafted long before the internet existed. Nevertheless, its global vision remains consistent with the universal character of the internet, which at its best makes it possible to build borderless knowledge societies grounded in fundamental human rights, while amplifying the need to mitigate risks not only through sovereign means but also through international cooperation. Guided by the principles of the UN Charter, the Declaration on the commemoration of the 75th anniversary of the United Nations rightly commits to improving digital cooperation worldwide. Through this formal commitment, the UN finally paid attention to the transformative impact that digital technologies have on our everyday lives, paving the way – or better, as the UN Secretary-General put it, setting out a ‘roadmap’ – to guide us through the promises and perils of the digital age.

    While world leaders recognised the need to listen to ‘the peoples’, as stated in the preamble of the UN Charter, civil society continues to remind those same leaders to listen more actively. Given its mission to extend and defend the fundamental human rights of all individuals, civil society remains a key force in advancing accountability for all stakeholders and ensuring the transparency of often opaque multilateral processes.

    What challenges have you faced in your interactions with the UN system, and how have you handled them?

    I began my external representation role as Access Now’s UN Advocacy Officer just a few months before the COVID-19 lockdown here in New York. I therefore had to navigate, in my new role, the challenges civil society was facing at the time: how do we ensure that civil society actors, in all their diversity, participate meaningfully in UN debates as the UN shifts its operations online? At the time we feared that the emergency measures used to fight the pandemic could also be used to restrict civil society’s access to, and opportunities for participation in, UN forums. So we mobilised. Several civil society organisations, including CIVICUS, worked together to establish principles and recommendations for the UN to ensure the inclusion of civil society in the organisation’s debates during the pandemic and beyond. This helped us present a unified position on the importance of stakeholder participation and remind the UN that it needed to put adequate protections in place to guarantee accessible online platforms and sufficient safeguards for the security of those participating virtually.

    What is currently not working and should change? How is civil society working to bring about that change?

    2020 was a year of humbling, critical self-reflection at both the individual and the collective level. Now more than ever, the world is realising that the state-centric model will not lead us to a hopeful future. Problems faced by one part of the world have consequences for the whole world. The decisions we make now, particularly regarding digital technologies, will reverberate across future generations. As the world recovers from the events of 2020, we need world leaders to build on the lessons learned and continue to engage in this critical reflection. Solving global problems requires interdisciplinary action that respects and protects rights-holders from diverse and intersecting backgrounds. We simply cannot keep operating or approaching these issues from the top down. Indeed, threats such as disinformation often originate at the top.

    Around the world, civil society is mobilising at the forefront of global campaigns that seek to raise awareness of the problems we currently face and their impact on future generations, while advocating for accountability in national, regional and international forums. From #KeepItOn’s condemnation of internet shutdowns to #WhyID’s questioning of the rollout of digital identity programmes around the world, we are working to inform, monitor, measure and provide rights-respecting policy recommendations, based on our diverse interactions with the people most at risk.

    What do you see as the main weaknesses of the current global multilateral system, and what lessons can be learned from the COVID-19 pandemic?

    The global multilateral system must stop operating and tackling global problems in a disjointed way. This requires not only a better networked multilateralism – across the UN system, in both New York and Geneva, and including regional organisations and financial institutions, among other bodies – but also a more interdisciplinary approach to global problems. For example, available research suggests that more than 90 per cent of the Sustainable Development Goals (SDGs) are linked to international human rights and labour standards. Protecting human rights is therefore necessary to achieve the SDGs. Why, then, do international actors keep framing the SDGs solely in relation to development debates rather than in relation to human rights?

    Many lessons can be drawn from the pandemic for promoting more inclusive international cooperation. In 2020, the UN woke up to the benefits of internet connectivity: reaching more diverse voices around the world. People who, because of countless barriers, cannot normally access the Geneva- and New York-based UN platforms in person were now able to contribute meaningfully to UN debates via the internet. At the same time, however, operating online also made the UN officially recognise the serious impact of the fact that approximately four billion people remain unconnected to the internet. These people may suffer discrimination online, face various barriers stemming from digital divides and insufficient digital literacy resources, or remain disconnected because of the imposition of targeted internet shutdowns.

    Going forward, the UN must continue to provide access to its debates through accessible virtual platforms. Just as the UN is built to facilitate interactions among states, the world would benefit from equally safe and open spaces for civil society to connect. Unfortunately, too many communities remain marginalised and vulnerable. People often face reprisals when they speak out and share their stories across borders. We strive to create that kind of open civic forum with RightsCon – the world’s leading summit on human rights in the digital age – and other similar events. In July 2020, RightsCon Online brought together 7,681 participants from 157 countries in a virtual summit. The organisers overcame cost and access barriers by setting up a Connectivity Fund that provided direct financial support so that participants could get online and take part. These gatherings should be considered an integral part not only of internet governance but also of the realisation of the UN’s three pillars – development, human rights, and peace and security – in the digital age. When carried out inclusively and securely, online participation offers the opportunity to broaden the number and diversity of participants on the platform and removes the barriers and resource constraints associated with travel.

    More generally, the international community must learn the lessons of 2020. We must work in solidarity to promote open, inclusive and meaningful international cooperation in order to achieve a prosperous future for everyone.

    Get in touch with Access Now through its website or its Facebook page, and follow @accessnow and @lo_brie on Twitter.

  • HATE SPEECH: ‘The fact that this is how online platforms are supposed to work is a big part of the problem’

    Brandi Geurkink

    As part of our 2019 thematic report, we are interviewing civil society activists, leaders and experts about their experience of facing backlash by anti-rights groups. CIVICUS speaks with Brandi Geurkink, European campaigner at the Mozilla Foundation, a non-profit corporation based on the conviction that the internet is a global public resource that must remain open and accessible to all. The Mozilla Foundation seeks to fuel a movement for a healthy internet by supporting a diverse group of fellows working on key internet issues, connecting open internet leaders at events such as MozFest, publishing critical research in the Internet Health Report and rallying citizens around advocacy issues that connect the wellbeing of the internet directly to everyday life.

    The regular internet user possibly identifies Mozilla with Firefox and doesn’t know that there is also a Mozilla Foundation. Can you tell us what the Mozilla Foundation is and what it does?

    I get asked this question a lot. When I told my family I was working for Mozilla, they said, ‘wait, you are not a software professional, what are you doing there?’ What makes Mozilla different from other software developers is that it is a non-profit tech company. Mozilla is the creator of Firefox, which is a web browser, but an open source one. It also has users’ privacy at its core. And all of Mozilla’s work is guided by the Mozilla Manifesto, which provides a set of principles for an open, accessible and safe internet, viewed as a global public resource.

    Profits that come from the Firefox browser are invested into the Mozilla Foundation, which is the Mozilla Corporation’s sole shareholder, and our mission is to build an open and healthy web. Mozilla creates and enables open-source technologies and communities that support the Manifesto’s principles; creates and delivers consumer products that represent the Manifesto’s principles; uses the Mozilla assets – intellectual property such as copyrights and trademarks, infrastructure, funds and reputation – to keep the internet an open platform; promotes models for creating economic value for the public benefit; and promotes the Mozilla Manifesto principles in public discourse and within the internet industry.

    Mozilla promotes an open and healthy web through a variety of activities. For instance, we have a fellowships programme to empower and connect leaders from the internet health movement. This programme supports people doing all sorts of things, from informing debates on how user rights and privacy should be respected online to creating technologies that will enable greater user agency. Mozilla also produces an annual report, the Internet Health Report, and mobilises people in defence of a healthy internet. A lot of this work takes the form of campaigning for corporate accountability; we seek to influence the way in which tech companies are thinking about privacy and user agency within their products and to mobilise consumers so that they demand better behaviour and more control over their online lives.

    How do you define a healthy internet?

    A healthy internet is a place where people can safely and freely communicate and participate. For this to happen, the internet must truly be a global public resource rather than something that’s owned by a few giant tech companies, who are then in control of who participates and how they do it. Some key components of a healthy web are openness, privacy and security. We place a lot of emphasis on digital inclusion, which determines who has access; web literacy, which determines who can succeed online; and decentralisation, which focuses on who controls the web – ideally, many rather than just a few.

    The internet is currently dominated by eight American and Chinese companies: Alphabet (Google’s parent company), Alibaba, Amazon, Apple, Baidu, Facebook, Microsoft and Tencent. These companies and their subsidiaries dominate all layers of the digital world, from search engines, browsers and social media services to core infrastructure like undersea cables and cloud computing. They built their empires by selling our attention to advertisers, creating new online marketplaces and designing hardware and software that we now cannot do without. Their influence is growing in both our private lives and public spaces.

    What is wrong with giant tech companies, and why would it be advisable to curb their power?

    A lot of the problems that we see online are not ‘tech’ problems per se – they’re sociopolitical problems that are amplified, and in some cases incentivised, to spread like wildfire and reach more people than ever before. When it comes to disinformation, for instance, a big part of the problem is the business models that guide the major social media platforms that we communicate on. The most successful tech companies have grown the way they have because they have monetised our personal data. They cash in on our attention in the form of ad revenue. When you think about how we use platforms designed for viral advertising as our primary method of social and political discourse – and increasingly our consumption of news – you can start to see why disinformation thrives on platforms like Facebook and Google.

    Another example of the ‘attention economy’ is YouTube, Google’s video platform, which recommends videos to users automatically, often leading us down ‘rabbit holes’ of increasingly more extreme content in order to keep us hooked and watching. When content recommendation algorithms are designed to maximise attention to drive profit, they end up fuelling radical beliefs and often spreading misinformation.
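
    To make that incentive concrete, here is a deliberately simplified sketch in Python – not YouTube’s actual system; the field names and numbers are invented for illustration – of a recommender optimised purely for attention. The only scoring signal is expected engagement, so whatever the model predicts will keep people watching longest rises to the top, with nothing in the objective measuring accuracy or harm.

        # Toy engagement-driven ranker (hypothetical; not any real platform's code).
        from dataclasses import dataclass

        @dataclass
        class Video:
            title: str
            predicted_click_probability: float  # model's guess that the user clicks
            predicted_watch_minutes: float      # model's guess at watch time

        def engagement_score(video: Video) -> float:
            # The sole objective is expected attention: click probability times
            # expected watch time. Nothing here measures truthfulness or harm.
            return video.predicted_click_probability * video.predicted_watch_minutes

        def recommend(candidates, k=3):
            # Surface whatever is predicted to hold attention longest.
            return sorted(candidates, key=engagement_score, reverse=True)[:k]

        if __name__ == "__main__":
            candidates = [
                Video("Measured policy explainer", 0.10, 4.0),
                Video("Outrage-bait conspiracy clip", 0.35, 9.0),
                Video("Cat compilation", 0.30, 6.0),
            ]
            for video in recommend(candidates):
                print(video.title)

    In this toy example the outrage-bait clip ranks first simply because it is predicted to hold attention longest; there is no counterweight in the objective itself.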

    What can be done about people using the internet to disseminate extremist ideas, hate speech and false information?

    I’m glad that you asked this because there is definitely a risk of censorship and regulation to fix this problem that actually results in violations of fundamental rights and freedoms. Worryingly, we’re seeing ‘fake news laws’ that use this problem as an excuse to limit freedom of speech and crack down on dissent, particularly in countries where civic space is shrinking and press freedom lacking. Mozilla fellow Renee di Resta puts this best when she says that freedom of reach is not the same as freedom of speech. Most of the big internet platforms have rules around what constitutes acceptable speech, which basically take the form of community guidelines. At the same time, platforms like Facebook, YouTube and Twitter give people the ability to amplify their ideas to a huge number of people. This is the ‘freedom of reach’, and increasingly we’re seeing that used to spread ideas that are at odds with the values that underpin peaceful and democratic societies, like equality and human rights.

    I think that it’s important to acknowledge that the business models of major technology platforms create the perfect storm for the manipulation of users. Disinformation and hate speech are content designed to appeal to emotions such as fear, anger and even humour. Combine this with the ability to target specific profiles of people in order to manipulate their ideas, and this becomes the perfect place for these sorts of ideas to take hold. Once purveyors of disinformation have gained enough of a following, they can comfortably move offline and mobilise these newly formed communities, which is something we’re seeing more and more of. It’s this freedom of reach problem that platforms have yet to grapple with, maybe because it’s at odds with the very way that they make money. The challenge is to come up with mechanisms that reduce, on the one hand, the likelihood of amplification of anti-rights ideas and hate speech and, on the other, the danger of censorship and discrimination against legitimate forms of discourse.

    There has been a lot of controversy about how social media platforms are, or are not, dealing with misinformation. Do you think fact-checking is the way to go?

    Responsible reporting and factual information are crucial for people to make informed choices, including about who should govern them; that is why fighting misinformation with care for free speech is key. Among the things that can be done about misinformation are the verification of advertisers and improved monitoring tools to detect bots and check facts. If implemented correctly, these would have an impact on these issues, and not just at election time.

    But the critical place where platforms are currently failing to live up to their commitments is around transparency. There must be greater transparency into how people use platforms like Facebook and Google to pay for ads that are intended to manipulate political discourse. At the same time, we must ensure that these companies are open about how content monitoring happens on platforms and that there are redress policies in place for people whose content has been wrongfully removed or deleted. Specific attention should be paid to the situation of fragile democracies, where disinformation can be more harmful because of the absence or limited presence of independent media.

    There have been election campaigns plagued by disinformation tactics in many different places, from India to Brazil. In response to public pressure, Facebook expressed a commitment to provide better transparency around how their platform is used for political advertisement so that sophisticated disinformation campaigns can be detected and understood and ultimately prevented. But the transparency tools that the company has released are largely insufficient. This has been repeatedly verified by independent researchers. There is a big disconnect between what companies say in public regarding what they intend to do or have done to prevent disinformation and the actual tools they put out there to do the job. I think Facebook should focus on creating tools that can actually get the job done.

    And besides what the companies running the social media platforms are or are not doing, there have been independent initiatives that seem to have worked. A tactic that disinformation campaigns use is the repurposing of content, for instance using a photo that was taken in a different place and time, or sharing an old article out of context to spread the rumour that something new has just happened when what is being reported actually happened five years ago. In response to this, The Guardian came up with a brilliant solution: when someone shares on Twitter or Facebook an article of theirs that’s over 12 months old, a yellow sign automatically appears on the shared image stating that the article is over 12 months old. The notice also appears when you click on the article. This initiative was a proactive move by The Guardian to empower people to think more critically about what they are seeing. We need many more initiatives like this.
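
    As an illustration of how simple the underlying check is, the following is a minimal, hypothetical sketch in Python – not The Guardian’s actual implementation – of the logic behind such a notice: compare the article’s publication date with today’s date and, if it is more than 12 months old, return the label to overlay on the shared image and show above the article.

        # Hypothetical sketch of an "old article" notice (not The Guardian's code).
        from datetime import date
        from typing import Optional

        STALENESS_THRESHOLD_DAYS = 365  # "over 12 months old"

        def staleness_notice(published: date, today: Optional[date] = None) -> Optional[str]:
            today = today or date.today()
            age_days = (today - published).days
            if age_days <= STALENESS_THRESHOLD_DAYS:
                return None  # recent enough: no label needed
            years = age_days // 365
            return f"This article is more than {years} year{'s' if years != 1 else ''} old"

        if __name__ == "__main__":
            # Old article: a label is returned and would be stamped on the share image.
            print(staleness_notice(date(2014, 6, 1), today=date(2019, 7, 1)))
            # Recent article: None, so nothing is shown.
            print(staleness_notice(date(2019, 5, 1), today=date(2019, 7, 1)))

    The value of the intervention lies less in the code than in the editorial decision to surface the article’s age so prominently wherever it travels.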

    Are disinformation campaigns also plaguing European politics in the ways that we’ve seen in the USA and Brazil?

    Most definitely, which is why in the lead up to the 2019 European elections four leading internet companies – Facebook, Google, Twitter and Mozilla – signed the European Commission’s Code of Practice on Disinformation pledging to take specific steps to prevent disinformation from manipulating citizens of the European Union. This was basically a voluntary code of conduct, and what we saw when monitoring its implementation ahead of the European elections was that the platforms did not deliver what they promised to the European Commission in terms of detecting and acting against disinformation.

    Fortunately, ahead of the European Parliamentary elections we didn’t see election interference and political propaganda on the scale that has happened in the Philippines, for example, which is an excellent case study if you want to learn about disinformation tactics that were used very successfully. But we still have a big problem with ‘culture war debates’ that create an atmosphere of confusion, opening rifts and undermining trust in democratic processes and traditional institutions. Social media platforms have still not delivered on transparency commitments that are desperately needed to better understand what is happening.

    Civil society identified a case in Poland where pro-government Facebook accounts posed as elderly people or pensioners to spread government propaganda. Before the European elections and following an independent investigation, Facebook took down 77 pages and 230 fake accounts from France, Germany, Italy, Poland, Spain and the UK, which had been followed by an estimated 32 million people and generated 67 million interactions over the previous three months alone. These were mostly part of far-right disinformation networks. Among other things, they had spread a video that was seen by 10 million people, supposedly showing migrants in Italy destroying a police car, which was actually from an old movie, and a fake story about migrant taxi drivers raping white women in Poland. A UK-based disinformation network that was uncovered in March 2019 was dedicated to disseminating fake information on topics such as immigration, LGBTQI rights and religious beliefs.

    Of course this is happening all the time, and not only during elections, although elections are moments of particular visibility when a lot more than usual is at stake, so there seems to be a spike in the use of misinformation tactics around elections. This also tends to happen around other particularly stressful situations, for example a terror attack or, more generally, any current event that draws people’s attention.

    Why do online dynamics favour the amplification of specific kinds of messages – i.e. messages of hate instead of a narrative of human rights?

    Internet platforms are designed to amplify certain types of content that are created to appeal to deep emotions, because their aim is to keep you on the platform as long as possible and make you want to share that content with friends who will also be retained as long as possible on the platform. The higher the numbers of people online and the longer they stay, the higher the number of ads that will be delivered, and the higher the ad revenue will be. What will naturally happen once these platforms are up and running is that people will develop content with a political purpose, and the dynamics around this content will be exactly the same.

    Some will say that users doing this are abusing internet platforms. I disagree: I think people doing this are using those platforms exactly how they were designed to be used, but for the purpose of spreading an extremist political discourse, and the fact that this is how platforms are supposed to work is indeed a big part of the problem. It does make a difference whether someone is trying to make money from users’ posts or the platform is just a space for people to exchange ideas. We need to understand that if we are not paying for the product, then we are the product. If nobody were trying to make money out of our online interactions, there would be a higher chance of online interactions being more similar to interactions happening anywhere else, with people exchanging ideas more naturally rather than trying to catch each other’s attention by trying to elicit the strongest possible reactions.

    Does it make sense for us to keep trying to use the internet to have reasonable and civilised political conversations, or is it not going to happen?

    I love the internet, and so I think it’s not an entirely hopeless situation. The fact that the attention economy, combined with the growing power of a handful of tech companies, drives the way that we use the internet is really problematic, but at the same time there is a lot of work being done to think through what alternative business models for the internet could look like, and increasingly regulators and internet users are realising that the current model is really broken. A fundamental question worth asking is whether it is possible to balance a desire to maximise ad revenue, and therefore people’s time spent on social media, with social responsibility. I think that companies as big as Google or Facebook have a duty to invest in social responsibility even if it has a negative impact on their revenue or it requires a level of transparency and accountability that frightens them. Responsibility implies, among other things, getting people’s consent to use their data to determine what they see online, and giving users insight into when and how choices are being made about what they see.

    You may wonder, ‘why would they do that?’. Well, it’s interesting. The CEO of YouTube, Susan Wojcicki, recently published a blog post saying that the spread of harmful content on YouTube is more of a revenue risk for the company because it damages their reputation. I think that there is an element of reputational damage, but the much bigger risk that these companies face is policy-makers cracking down on these platforms and their ability to continue operating as usual without greater accountability. For instance, the European code of practice on disinformation was self-regulatory; we have seen at least in this case that the platforms that committed to the Code didn’t deliver tools that were sufficient to provide greater political ad transparency, and they are still not held accountable for this. Does this example mean that policy-makers will be under greater pressure to regulate the online space by mandating transparency instead of requesting it? These are the sorts of conversations that should define new approaches to dealing with harmful content online, in order to make sure the internet remains a positive force in our lives.

    Get in touch with the Mozilla Foundation through its website, and follow @mozilla and @bgeurkink on Twitter.

  • INNOVATION: ‘The usual human rights structures and practices are no longer optimal or sufficient’

    Ed Rekosh

    CIVICUS speaks with Edwin Rekosh, co-founder and managing partner of Rights CoLab, about the effects on civil society of the rise of digital infrastructure and the importance of innovation and digital rights. Rights CoLab is a multinational collaborative network that seeks to develop bold strategies to advance human rights across the fields of civil society, technology, business and finance.

    What does Rights CoLab do?

    Rights CoLab produces experimental and collaborative strategies to address current human rights challenges from a systemic perspective. In particular, we research and facilitate new ways of organising civic engagement and of harnessing markets to achieve transformative change.

    We see the changes taking place outside the traditional philanthropic space as an opportunity to boost civic engagement. For example, we are interested in the organisational models emerging in the field of social enterprise, where revenue can be generated from commercial activities to sustain operations. We are also interested in the use of technology to reduce costs and achieve civil society goals without the need for a formal organisational structure, for example through a website or an app. In addition, we are exploring the generational shift in how young people view their careers: more and more of them are seeking working lives that combine non-profit and for-profit professional goals. We believe it is imperative to develop more effective forms of collaboration, especially across borders, professional perspectives and fields of expertise.

    Among the challenges we are trying to address are the resurgence of authoritarianism and populist politics, which has reinforced the emphasis on national sovereignty and the demonisation of local civil society organisations (CSOs), perceived as agents of hostile foreign values and interests. We also try to address the shifting geopolitical realities that are undermining the human rights infrastructure built over the past half-century, as well as the long-term legacies of colonial power dynamics. And we want to develop new approaches to curb the negative human rights impact of growing corporate power, especially in the forms that have been aggravated by the pandemic.

    What inspired you to found Rights CoLab?

    The decision to found Rights CoLab was based on the understanding that the human rights field has reached a stage of maturity, full of challenges that raise questions about structures and practices that have become conventional but may no longer be optimal or sufficient.

    I was a human rights lawyer who had moved from practising law at a large firm to working for a human rights organisation in Washington, DC. My experience managing a project in Romania in the early 1990s completely transformed the way I saw human rights and my role as an American lawyer. I worked side by side with local CSOs, playing a key behind-the-scenes role of supporting and interconnecting civil society, putting CSOs in contact with one another and with resources, and promoting the implementation of other solidarity-based strategies.

    Shortly afterwards, I founded and then led PILnet, a global network of public interest and private sector lawyers within the civil society space. Around the time I decided to leave that role, I was beginning to focus on the closing of civic space that I saw happening around me, which particularly affected the work we were doing in Russia and China. I reconnected with Paul Rissman and Joanne Bauer, the other two co-founders of Rights CoLab, and we began to compare our insights, concerns and ideas about the future of human rights. The three of us created Rights CoLab as a way to continue the conversation, examining current human rights challenges from three very different perspectives. We wanted to create a space where we could continue that dialogue and bring in others to encourage experimentation with new approaches.

    To what extent has the civil society landscape changed in recent years as a result of the rise of digital infrastructure?

    It has changed enormously. One of the main consequences of the rise of digital infrastructure has been the expansion of the public sphere in several ways. The role of the media is less constrained by borders and there is much less intermediation through editorial control. This represents both an opportunity and a threat for human rights. Individuals and groups can influence public discourse with lower barriers to entry, but on the other hand, the public sphere is no longer regulated by governments in predictable ways, which erodes traditional accountability mechanisms and makes it difficult to guarantee a level playing field in the marketplace of ideas. Digital technology also allows solidarity to be expressed across borders in ways far less constrained by some of the practical limitations of the past. In short, although the rise of digital infrastructure poses new threats to human rights, digital tools also offer opportunities.

    How central are digital rights and digital infrastructure to the work of civil society?

    In many ways, digital rights are secondary to the structures, practices and values of civil society. Civil society derives intrinsically from respect for human dignity, the creative spirit of human agency and the politics of solidarity. The ways in which people organise to engage with the world around them depend primarily on socially oriented values, capacities and practices. Digital technology merely provides tools, which do not intrinsically possess any of those characteristics. In that sense, digital technology is neither necessary nor sufficient for civil society organising. However, digital technologies can enhance civil society organising, both by taking advantage of some of the new opportunities inherent in emerging digital infrastructure and by securing the digital rights we need to avoid the negative consequences that this infrastructure can have for human rights.

    We are working hard to identify civil society approaches that can help address these issues. One example is Chequeado, an Argentine non-profit media outlet dedicated to verifying public discourse, countering disinformation and promoting access to information in Latin American societies. Chequeado, which takes the form of a technology platform and an app, managed to adapt quickly in response to the COVID-19 pandemic by developing a fact-checking dashboard to dispel disinformation about the origins, transmission and treatment of COVID-19, as well as to combat disinformation leading to ethnic discrimination and growing distrust of science. In other words, while it is essential to understand the potential uses of digital technology, it is equally essential to keep the focus on elements that have little to do with the technology itself, such as values, solidarity and principle-based norms and institutions.

    How does Rights CoLab promote innovation in civil society?

    We drive civil society innovation along several dimensions: in how civil society groups organise themselves, including their basic structures and revenue models; in how they use technology; and in the changes the international civil society ecosystem needs in order to mitigate the negative effects of counterproductive power dynamics rooted in colonialism.

    On the first two dimensions, we have partnered with other resource hubs to jointly create a geolocated map of case studies illustrating innovation in organisational forms and revenue models. For this growing database of examples we have developed a typology focused on the alternatives to the traditional model that are available to locally rooted civil society groups, that is, alternatives to cross-border charitable funding. Together with our partners, we are also developing training methodologies and communication strategies that seek to facilitate greater experimentation with, and wider adoption of, alternative models for structuring and funding civil society activities.

    Our effort to improve the international civil society ecosystem is based on a systems-change project we have launched under the name RINGO (Reimagining the International NGO). A key focus of the RINGO project is the intermediation between large international CSOs and more local civic spaces. The hypothesis is that international CSOs can be either a barrier to or an enabler of a stronger local civil society, and that the way the system is organised now – with the leading roles concentrated in the global north and west – needs to be rethought.

    RINGO includes a Social Lab with 50 participants representing a broad spectrum of CSOs of different types and sizes, drawn from a diversity of geographies. Over a two-year process, the Social Lab will generate prototypes that can be tested with the aim of radically transforming the sector and the way we organise civil society globally. We hope to draw valuable lessons from the prototypes that can be replicated, or reformulated and implemented at a larger scale. There are already many good practices, but there are also systemic dysfunctions that remain unaddressed. That is why we are looking for new and more transformative practices, structures and processes. While we are not pursuing a utopia, we do aim to achieve systemic change. That is why the process of inquiry through the Social Lab is vital for digging into the underlying problems that paralyse the system and moving beyond palliative, superficially attractive practices.

    Get in touch with Rights CoLab through its website and follow @rightscolab and @EdRekosh on Twitter.

  • ONLINE CIVIC SPACE: ‘We shouldn’t expect tech giants to solve the problems that they have created’

    Marek Tuszynski

    As part of our 2019 thematic report, we are interviewing civil society activists and leaders about their experiences of backlash from anti-rights groups and their strategies to strengthen progressive narratives and civil society responses. CIVICUS speaks to Marek Tuszynski, co-founder and creative director of Tactical Tech, a Berlin-based international civil society organisation that engages with citizens and civil society to explore the impacts of technology on society and individual autonomy. Founded in 2003, in a context where optimism about technology prevailed but focus was lacking on what specifically it could do for civil society, Tactical Tech uses its research findings to create practical solutions for citizens and civil society.

    Some time ago it seemed that the online sphere could offer civil society a new space for debate and action – until it became apparent that online civic space was being restricted too. What kinds of restrictions are you currently seeing online, and what's changed in recent years?

    Fifteen years ago, the digital space in a way belonged to the people who were experimenting with it. People were building that space using the available tools, there was a movement towards open source software, and activists were trying to build an online space that would empower people to exercise democratic freedoms, and even build democracy from the ground up. But those experimental spaces became gentrified, appropriated, taken over and assimilated into other existing spaces. In that sense, digital space underwent processes very similar to all other spaces that offer alternatives and in which people are able to experiment freely. That space shrank massively, and free spaces were replaced by centralised technology and started to be run as business models.

    For most people, including civil society, using the internet means resorting to commercial platforms and systems such as Google and Facebook. The biggest change has been the centralisation of what used to be a distributed system where anybody was able to run their own services. Now we rely on centralised, proprietary and controlled services. And those who initially weren’t very prevalent, like state or corporate entities, are now dominating. The difference is also in the physical aspect, because technology is becoming more and more accessible and way cheaper than it used to be, and a lot of operations that used to require much higher loads of technology have become affordable for a variety of state and non-state entities.

    The internet became not just a corporate space, but also a space for politics and confrontation on a much larger scale than it was five or ten years ago. Revelations coming from whistleblowers such as Edward Snowden and scandals such as those with Facebook and Cambridge Analytica are making people much more aware of what this space has become. It is now clear that it is not all about liberation movements and leftist politics, and that there are many groups on the other end of the political spectrum that have become quite savvy in using and abusing technology.

    In sum, changes are being driven by both economic and, increasingly, political factors. What makes them inescapable is that technology is everywhere, and it has proliferated so fast that it has become very hard to imagine going back to doing anything without it. It is also very hard, if not impossible, to compartmentalise your life and separate your professional and personal activities, or your political and everyday or mundane activities. From the point of view of technology, you always inhabit the same, single space.

    Do people who use the internet for activism rather than, say, to share cat pictures, face different or specific threats online?

    Yes, but I would not underestimate the cat pictures, as insignificant as they may seem to people who are using these tools for political or social work. It is the everyday user who defines the space that others use for activism. The way technologies are used by people who use them for entertainment ends up defining them for all of us.

    That said, there are indeed people who are much more vulnerable, whose exposure or monitoring can restrict their freedoms and be dangerous for them – not only physically but also psychologically. These people are exposed to potential interception and surveillance to find out what they are doing and how, and also face a different kind of threat, in the form of online harassment, which may impact on their lives well beyond their political activities, as people tend to be bullied not only for what they do, but also for what or who they are.

    There seems to be a very narrow understanding of what is political. In fact, regardless of whether you consider yourself political, very mundane activities and behaviours can be seen by others as political. So it is not just about what you directly produce in the form of text, speech, or interaction, but also about what can be inferred from these activities. Association with organisations, events, or places may become equally problematic. The same happens with the kind of tools you are using and the times you are using them, whether you are using encryption and why. All these elements that you may not be thinking of may end up defining you as a person who is trying to do something dangerous or politically controversial. And of course, many of the tools that activists use and need, like encryption, are also used by malicious actors, because technology is not intrinsically good or bad, but is defined by its users. You can potentially be targeted as a criminal just for using – for activism, for instance – the same technologies that criminals use.

    Who are the ‘vulnerable minorities’ you talk about in your recent report on digital civic space, and why are they particularly vulnerable online?

    Vulnerable minorities are precisely those groups that face greater risks online because of their gender, race or sexual orientation. Women generally are more vulnerable to online harassment, and politically active women even more so. Women journalists, for instance, are subject to more online abuse than male journalists when speaking about controversial issues or voicing opinions. They are targeted because of their gender. This is also the case for civil society organisations (CSOs) focused on women’s rights, which are being targeted both offline and online, including through distributed denial of service (DDoS) attacks, website hacks, leaks of personal information, fabricated news, direct threats and false reports against Facebook content leading to the suspension of their pages. Digital attacks sometimes translate into physical violence, when actors emboldened by the hate speech promoted on online platforms end up posing serious threats not only to people’s voices but also to their lives.

    But online spaces can also be safe spaces for these groups. In many places the use of internet and online platforms creates spaces where people can exercise their freedoms of expression and protest. They can come out representing minorities, be it sexual or otherwise, in a way they would not be able to in the physical places where they live, because it would be too dangerous or practically impossible. They are able to exercise these freedoms in online spaces because these spaces are still separate from the places where they live. However, there is a limited understanding of the fact that this does not make these spaces neutral. Information can be leaked, shared, distorted and weaponised, and used to hurt you when you least expect it.

    Still, for many minorities, and especially for sexual minorities, social media platforms are the sole place where they can exercise their freedoms, access information and actually be who they are, and say it aloud. At the same time, they technically may retain anonymity but their interests and associations will give away who they are, and this can be used against them. These outlets can create an avenue for people to become political, but that avenue can always be closed down in non-democratic contexts, where those in power can decide to shut down entire services or cut off the internet entirely.

    Is this what you mean when you refer to social media as ‘a double-edged sword’? What does this mean for civil society, and how can we take advantage of the good side of social media?

    Social media platforms are a very important tool for CSOs. Organisations depend on them to share information, communicate and engage with their supporters, organise events, measure impact and response based on platform analytics, and even raise funds. But the use of these platforms has also raised concerns regarding the harvesting of data, which is analysed and used by the corporations themselves, by third-party companies and by governments.

    Over the years, government requests for data from and about social media users have increased, and so have arrests and criminalisation of organisations and activists based on their social media behaviour. So again, what happens online does not stay online – in fact, it sometimes has serious physical repercussions on the safety and well-being of activists and CSO staff. Digital attacks and restrictions affect individuals and their families, and may play a role in decisions on whether to continue to do their work, change tactics, or quit. Online restrictions can also cause a chilling effect on the civil society that is at the forefront of the promotion of human rights and liberties. For these organisations, digital space can be an important catalyst for wider civil political participation in physical spaces, so when it is attacked, restricted, or shrunk, it has repercussions for civic participation in general.

    Is there some way that citizens and civil society can put pressure on giant tech companies to do the right thing?

    When we talk about big social media actors we think of Facebook, Twitter, Instagram and WhatsApp – three of which are in fact part of Facebook – and we don’t think of Google because it is not seen as social media, even though it is more pervasive, it is everywhere, and it is not even visible as such.

    We shouldn’t expect these companies to solve the problems they have created. They are clearly incapable of addressing the problems they cause. One of these problems is online harassment and abuse of the rules. They have no capacity to clean the space of certain activities, and if they try to do so, they will censor any content that resembles something dangerous, even if it isn’t, so as not to risk being accused of supporting radical views.

    We expect tech giants to be accountable and responsible for the problems they create, but that’s not very realistic, and it won’t just happen by itself. When it comes to digital-based repression and the use of surveillance and data collection to impose restrictions, there is a striking lack of accountability. Tech platforms depend on government authorisation to operate, so online platforms and tech companies are slow to react, if they do at all, in the face of accusations of surveillance, hate speech, online harassment and attacks, especially when powerful governments or other political forces are involved.

    These companies are not going to do the right thing if they are not encouraged to do so. There are small steps as well as large steps one can take, starting with deciding how and when to use each of these tools, and whether to use them at all. At every step of the way, there are alternatives that you can use to do different things – for one, you can decentralise the way you interact with people and not use one platform for everything.

    Of course, that’s not the whole problem, and the solution cannot be based on individual choices alone. A more structural solution would have to take place at the level of policy frameworks, as can be seen in Europe, where regulations have been put in place and it is possible to see a framework shaping up that requires large companies to take more responsibility and to be clear about how they are benefiting from their access to personal information.

    What advice can you offer for activists to use the internet more safely?

    We have a set of tools and very basic steps to enable people who don’t want to leave these platforms, who depend on them, to understand what it is that they are doing, what kind of information they leave behind that can be used to identify them and how to avoid putting into the system more information than is strictly necessary. It is important to learn how to browse the internet privately and safely, how to choose the right settings on Google and Facebook and take back control of your data and your activity in these spaces.

    People don’t usually understand how much about themselves is online and can be easily found via search engines, and the ways in which by exposing themselves they also expose the people who they work with and the activities they do. When using the internet we reveal where we are, what we are working on, what device we are using, what events we are participating in, what we are interested in, who we are connecting with, the phone providers we use, the visas we apply for, our travel itineraries, the kinds of financial transactions we do and with whom, and so on. To do all kinds of things we are increasingly dependent on more and more interlinked and centralised platforms that share information with one another and with other entities, and we aren’t even aware that they are doing it because they use trackers and cookies, among other things. We are giving away data about ourselves and what we do all the time, not only when we are online, but also when others enter information about us, for instance when travelling.
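
    To make that last point more tangible, here is a minimal, hypothetical illustration in Python – generic field names, not any real tracker’s code – of the kind of record a third-party tracking pixel can build each time a page that embeds it is loaded: the browser passes along the page being read, the tracker’s own persistent cookie ID and device details, which is enough to stitch visits to unrelated sites into a single profile.

        # Hypothetical sketch of what a third-party tracking pixel records per page load.
        from datetime import datetime, timezone

        def tracker_log_entry(page_url: str, cookie_id: str, user_agent: str) -> dict:
            # The embedding page arrives via the Referer header, and the tracker's
            # own cookie is re-sent with every request, so visits on unrelated
            # sites can be linked to the same profile.
            return {
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "visited_page": page_url,  # from the Referer header
                "profile_id": cookie_id,   # persistent third-party cookie
                "device": user_agent,      # browser and operating system details
            }

        if __name__ == "__main__":
            # The same profile_id appears across different sites.
            print(tracker_log_entry("https://example.org/protest-guide", "abc123", "Firefox/115; Linux"))
            print(tracker_log_entry("https://news.example.com/health", "abc123", "Firefox/115; Linux"))

    The point is not the specific fields but how little a site has to include – a single invisible image – for this information to flow to a third party on every visit.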

    But there are ways to reduce our data trail, become more secure online and build a healthier relationship with technology. Some basic steps are to delete your activity as it is stored by search engines such as Google and switch to other browsers. You can delete unnecessary apps, switch to alternative apps for messaging, voice and video calls and maps – ideally to some that offer the same services you are used to, but that do not profit from your data – change passwords, declutter your accounts and renovate your social media profiles, separate your accounts to make it more difficult for tech giants to follow your activities, tighten your social media privacy settings, opt for private browsing (but still, be aware that this does not make you anonymous on the web), disable location services on mobile devices and do many other things that will keep you safer online.

    Another issue that activists face online is misinformation and disinformation strategies. In that regard, there is a need for new tactics and standards to enable civil society groups, activists, bloggers and journalists to react by verifying information and creating evidence based on solid information. Online space can enable this if we promote investigation as a form of engagement. If we know how to protect ourselves, we can make full use of this space, in which there is still room for many positive things.

    Get in touch with Tactical Tech through its website and Facebook page, or follow @Info_Activism on Twitter.
