technology

  • AGAINST DISINFORMATION: ‘Enabling each to tell their story offers an opportunity to share our truths’

    CIVICUS speaks to Chris Worman, Vice President of Alliances and Program Development at TechSoup, a non-profit international network that provides technical support and technological tools to civil society organisations (CSOs). TechSoup facilitates civil society access to donated or discounted software, hardware and services; supplies CSOs with the information they need to make smart decisions about technology; connects like-minded people, online and in person; and works on the ground to create social good solutions.


     

    What is TechSoup and what does it do?

    TechSoup is a complicated beast. We are a network of civil society capacity-building organisations working together to help ensure civil society has the resources it needs. We are also a community builder and philanthropic infrastructure. Founded in 1987, TechSoup is primarily known for the first part, for helping CSOs get and use technology. To date, the TechSoup Global Network has helped more than 1.2 million organisations, primarily grassroots community-based organisations, access roughly US$12 billion in technology products and services and millions of hours of free training and support. We also sometimes build technology with civil society through the organisation’s apps division, Caravan Studios.

    TechSoup then works through partnerships to help understand technology in context and community. This manifests differently depending on particular needs. The TransparenCEE programme brings together technologists and civil society to build tools and campaigns to encourage participatory democracy. A myriad of other projects through the TechSoup Network address everything from increasing internet access for rural farmers in Colombia’s demilitarised zone, to working on STEM (science, technology, engineering and mathematics) skills with teachers in rural Romania, to developing tools to support social services for Australia’s homeless people.

    Finally, as philanthropic infrastructure, we provide a variety of tools and services, such as NGOsource, which leverages our network and community to help US foundations meet their regulatory requirements related to grantmaking across borders. Used by nearly 400 US foundations, NGOsource has become common infrastructure for grantmaking abroad; in its first six years of operations it has helped lower the cost of international grantmaking by more than US$60 million and saved CSOs and funders more than 120 years of human labour by reducing duplicative due diligence processes.

    What makes TechSoup necessary in the current tech environment?

    We have been dwelling on two trends of the current – and coming – tech environment: contested digital space and the shift of technology to the cloud.

    In terms of contested digital spaces – another way of saying ‘closing digital space’, manifesting as a combination of anti-civil society narratives, digital surveillance and policies that challenge rights online, or the lack of any relevant policy at all – TechSoup believes there is an urgent and critical need for CSOs to secure and build their digital reputations, and have the opportunity to join or lead digital campaigns that help build positive, pro-civil society narratives across digital media. The collective impact of individual CSOs that are more able to raise their voices online offers some hope of undermining anti-civil society narratives that would paint us all as foreign intermediaries intent on undermining culture and national identity instead of what we are – an important part of society, locally rooted and locally driven by community-based organisations intent on leaving the world better than they found it.

    While increasing the capacity of individual organisations, we need to offer better tools to those who would join or lead digital campaigns. Our work with civil society to design and build the kinds of campaigns and tools they might use to organise their communities from online to offline has shown that organisations that adopt digital tools for campaigning purposes become more savvy consumers of technology in general, and more committed stakeholders in and advocates for building and preserving rights online – a critical element in bringing organisations that might not be policy-focused into the struggle for better digital policies.

    Finally, these tools, campaigns, practices and communities need to be carefully considered and crafted to ensure safety. As a colleague from a context with closing space recently noted, with the internet came easy surveillance. This is an important point and dovetails into the other main shift we see, the shift to the cloud. All on-premise tools – think everything that isn’t Google Suite or Microsoft O365 – will go away in the coming years. This is both really good and really less good news. On the less good side, most CSOs are not ready to be fully in the cloud due to connectivity issues. Further, for many the cloud is not a safe place due to issues related to bad policies or no policies, such as who can access data in the cloud and on what terms.

    On the good side, moving to the cloud can lower costs while opening opportunities for CSOs to link data for evidence, to drive advocacy and support new tools. One good example of this is a project conducted by our Irish partner, Enclude, who worked with Irish social service organisations to design a fit-for-purpose case management solution. The tool they built together met, for the first time, the needs of participating organisations, thus lowering their costs to provide services. Perhaps of equal or greater importance, it allowed organisations to pool data and use that data to learn from each other and build evidence for advocacy.

    So, whether we like it or not, we are all going to the cloud. This offers opportunities but also necessitates increased capacity to represent and build our communities online, and work to ensure the cloud is a safe place for us all. TechSoup has been working to address these issues in a variety of ways and is in the middle of growing our programmes in these areas from pilot phases to our entire global community, effectively building the infrastructure upon which we can link the million-plus organisations we serve to partners who have technology or policy training capacities and interests and might want to engage the grassroots organisations we reach.

    What are the barriers that CSOs experience to access existing technologies?

    For more than 30 years, our mantra has been ‘democratising access to technology’. The main barriers to doing so seem related to CSOs choosing and being able to use the best tools for their work. TechSoup tries to lower that barrier in two main ways. First, by being a trusted source for curation and education, helping CSOs know what technologies are available and how to use them through online communities and courses. Though historically we have been quite focused on corporate technologies, this is fast expanding into ‘tech4good’ through projects like our Public Good App House, where CSOs can begin exploring tools that are specific to each Sustainable Development Goal and were built for CSOs and audited for security purposes by us.

    Second, we lower access barriers by helping reduce the price point. Through our technology donation programme we are able to offer technologies at an extreme discount – what our French partner used to call ‘solidarity pricing’. The discount makes technology accessible at a price point most can afford – more than 80 per cent of the CSOs we have served have fewer than five staff – while generating revenues that help us provide free or steeply discounted training and support.

     

    What does the data that you have collected through your work tell you about the ways CSOs use or don’t use technology?

    The data tells us CSOs use technologies in about as many ways, and at about as many levels, as there are shapes and sizes in civil society. Some organisations are incredibly advanced and teach us new things they have learned or developed every day. Many could use some guidance and support on things that could improve their operational efficiencies so they can spend more time on their programmes. Most, let’s face it, don’t care about technology as long as it works. And that is fine!

    The challenge, perhaps, is that very few organisations are fully aware of the ramifications of their technical choices. For instance: do you know where all your data is right now? Who has access to it? When did you last change your passwords? Few are also aware of how deeply we rely on technological infrastructure that is owned, operated and accessible by actors who may have interests contrary to our own. This lack of understanding, and the potentially negative ramifications of it, are exacerbated by the acceleration towards the cloud and increasing digitalisation of society. There are excellent thought leaders in this space – such as The Engine Room and the Stanford PACS (Center on Philanthropy and Civil Society) and its Digital Civil Society Lab – and I think we are beginning to understand and work on how we can partner, contextualise and bring civil society into critical discussions about digital access and rights – individually, collectively and in relation to digital policy.

    We have to. Technology does not seem to be going away and as the world digitises, civil society needs to understand, craft and advocate for digital rights. Civil society is and always has been the champion of human rights. We must do so in the digital space. It will take all of us but it must be done.

    What are the typical needs of advocacy CSOs that you seek to respond to?

    We primarily help in choosing and using tools, but we are also increasingly providing training on digital storytelling, digital marketing and analytics to make sure stories are reaching their intended audiences, and training on how to work in an online environment cluttered with misinformation. These skills are certainly important for advocacy but are equally relevant for digital community-building and fundraising – both helpful in building a local base in the face of closing spaces.

    Digital security is another big area for many. We provide some tools and guidance but those who are truly threatened need a much more personal level of support to map risks and develop plans than we can easily do en masse. We are working on more there but are always happy to recommend partners.

    A third area is supporting base-building needs. We are piloting a variety of ways to connect advocacy organisations to the ‘rest’ of civil society – at least the million-plus CSOs in our community. Doing so, however, presents an interesting exercise in framing, communications and community building. Very few of the community we reach would consider themselves advocacy organisations. Fewer still sit around dwelling on rights-based frameworks. Regardless, they do their best to support, defend and enable their communities in their own ways. They can be reached and invited to engage in solidarity with those who are more particularly vocal about rights but it takes work to meet them where they are. We have seen some incredibly encouraging examples of broader bases of support and hosts of unlikely allies when advocacy organisations have the tools to appeal to the broader community, and look forward to more work in this area.

    You mentioned the fact that the online space is increasingly cluttered with misinformation. Why do you think misinformation is so easily propagated on social media, and what tools can civil society use to stop it?

    A funder recently asked me: ‘won’t we soon have a tool that simply tells us what is fake news?’ Sure. But a lot of disinformation is either fun or empowering to those who propagate it, or both. Our job in civil society will be to help educate voters and policy-makers about why facts are important and disinformation is a threat. We could do that by spending all of our time trying to stop the spread of misinformation. Some people think that is the way to go, and they are not necessarily wrong. On the other hand, the technology platforms across which misinformation spreads are much more able to do that than we are. They can incorporate tools that spot deep fakes, monitor stories that are going viral around key words and work with civil society to interpret and distinguish what is harmful and threatening and what is not. They already have human moderators doing much of that work around obvious issues, but they are not trained to know that, for instance, a certain cat meme or dumpling joke is actually a political smear. We know, and we need partnerships – some of which are emerging around elections in particular countries – to help platforms and civil society meet in the middle.

    Another approach, and one that we work with through our programmes, is described earlier: helping CSOs have the tools to build their own narratives, better use analytical tools to understand when their narratives are working and whether they are reaching their intended audiences, and helping to form narrative communities. There are hundreds of trolls, thousands who spread their lies and millions who see it. There are millions of CSOs, hundreds of millions who follow or ‘hear’ them on social media. Enabling each to tell their story, and enabling the collective to coordinate in solidarity, offers an opportunity to flood the digital space with our truths. Once all are moving, we will have more messaging, more quickly, and tapped into more local realities than a handful of trolls could ever manage. If we incorporate analytical tools to understand what messages are working and coordinate around successful messaging across our communities, our collective weight will overwhelm opposition. Until the government shuts off the internet… worth trying until then!

    We are building a repository of specific tools and successful campaigns, such as the one we have built at TransparenCEE, focused on digital campaigning. But there are a lot of great resources out there, produced by JustLabs, MobLab and others.

    Can you tell us a success story from your recent work?

    One of my favourite stories – one that opened my eyes – happened nearly 10 years ago when I was living in a small town in Romania. I had launched TechSoup Romania through a community foundation I had started a few years before. Some funders had supported us to run a convening of technologists and CSOs we were calling the ‘Local Philanthropy Workshop’, through which tech people and CSO people worked on digital storytelling, tools and projects.

    On one of the first afternoons, the leader of a local environmental CSO and a tech guy were talking. The environmentalist was sharing that he wanted to make a map of illegal garbage dumps in the county. The technologist asked if he had them in a spreadsheet with geocoordinates. The environmentalist emailed him the list and three minutes later the technologist showed him a Google Maps version of what he had been hoping for. The environmentalist walked it across the street to the newspaper and it ran on the front page the next day with an article about illegal dumping.
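    The mapping step in this story really can be a few minutes' work. As a minimal sketch (the column names and file names here are hypothetical, not taken from the actual project), a short standard-library script can turn a spreadsheet of dump sites with latitude/longitude columns into a GeoJSON file, a format that Google My Maps and most other web-mapping tools can import and display as pins:

    ```python
    import csv
    import json

    def dumps_to_geojson(csv_path: str, geojson_path: str) -> int:
        """Convert a CSV of sites (columns: name, lat, lon) into a GeoJSON
        FeatureCollection that web-mapping tools can import.
        Returns the number of points written."""
        features = []
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row in csv.DictReader(f):
                features.append({
                    "type": "Feature",
                    "geometry": {
                        "type": "Point",
                        # GeoJSON orders coordinates as [longitude, latitude]
                        "coordinates": [float(row["lon"]), float(row["lat"])],
                    },
                    "properties": {"name": row["name"]},
                })
        with open(geojson_path, "w", encoding="utf-8") as f:
            json.dump({"type": "FeatureCollection", "features": features}, f)
        return len(features)
    ```

    The resulting file can be dragged into a mapping tool as-is; no programming is needed on the advocacy side, which is exactly the kind of low-barrier collaboration the story illustrates.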

    Three minutes of tech, and an advocacy campaign came true because the right skills came together at the right time. It is a simple story compared to some of the much larger and more complex ones that have come since, but perhaps more indicative of what success might look like for most of us. Big data and artificial intelligence, blockchain and machine learning, digital ID and quantum are all good and shiny and important for those who have the data, tools and resources to work with them. For most of us, I believe, simpler solutions supporting the resolution of local challenges – where communities and civil society come together – are perhaps more in reach and perhaps, in aggregate, more meaningful as we seek collectively to come to grips with the influence of technology on society, and learn how to navigate the good and the bad of it as the world digitises.

    Get in touch with TechSoup through its website and Facebook page, or follow @ChrisWorman on Twitter.

     

  • DIGITAL DIVIDE: ‘The uncritical adoption of technology is particularly risky in humanitarian crises’

    CIVICUS speaks to Barnaby Willitts-King, Senior Research Fellow at the Overseas Development Institute (ODI). Established in 1960 and currently working in 50 countries around the world, ODI is a global non-partisan, non-profit and evidence-driven think tank. Barnaby’s latest research with ODI’s Humanitarian Policy Group (HPG) focuses on the effects of the adoption of information and communications technologies in the humanitarian sector.


    Which would you say have been the biggest humanitarian crises of 2019, and how effective and efficient has the humanitarian response been?

    The crises in the Democratic Republic of the Congo, Syria and Yemen have affected the most people in 2019 and look set to continue through 2020. In the majority of these crises, and the many more affecting over 160 million people, there are major funding challenges and problems of access to people in need due to conflict. Despite these challenges, international humanitarian assistance from the United Nations (UN), the Red Cross movement and civil society organisations (CSOs) supported 64 per cent of those it was aiming to reach in 2019 and is reaching more people than ever before.

    However, huge challenges remain to reforming the international system of humanitarian action to make it more effective, efficient and appropriate, while confronting the largely political blockages to solving the underlying causes of such crises. The space for neutral humanitarian action remains under pressure from increasingly polarised geopolitics and a retreat from multilateralism.

    Concerns about national security, migration and terrorism have led donors belonging to the Organisation for Economic Co-operation and Development’s Development Assistance Committee (DAC) to introduce laws and policies that have had significant knock-on consequences for the ability of CSOs to support people in crisis. Such was the case with UK legislation, subsequently amended, which would have criminalised aid workers in some conflict zones.

    What have you learned from your research about resource flows to countries affected by humanitarian crises?

    There is a mismatch between the global picture of humanitarian response and funding flows from major DAC donors, and what is visible in countries and communities affected by crisis. The 2016 World Humanitarian Summit launched the Grand Bargain initiative, an agreement between donors and agencies that included a commitment to increase the flow of resources to local and national humanitarian actors. However, the flow of resources to such local actors still remains far below the 25 per cent target, as seen for instance in evidence from Somalia and South Sudan.

    Beyond resource flows to local organisations and administrations, HPG’s recent research based on field studies in Iraq, Nepal and Uganda on the resources that households use to cope with crisis has revealed the narrow way in which humanitarian agencies have been looking at resource flows.

    This shows that the international community undervalues the role of locally led response, which starts in affected communities, and the resources they mobilise and make use of, including community support mechanisms, remittances from the diaspora, government and private sector funding and faith-based giving. These funds and other resources are not easily measured or tracked and are not sufficiently understood by local and international humanitarian actors.

    Globally, this study estimated that international humanitarian assistance comprises as little as one per cent of the total resource flows to countries affected by humanitarian crises. Remittances are one clear example of a major resource flow that is potentially significant in crises but insufficiently understood or factored in; others include faith-based flows and local community resources.

    What should the international community do to put the affected countries and local communities at the centre of the planning and funding of responses?

    There are many things that the humanitarian community needs to do in order to achieve this reorientation of international humanitarian assistance. First, it should focus on the household perspective in resource analysis and tracking by investing in household economy, market and political economy analysis. Second, it should design programming specifically for each crisis. Third, it should use aid smartly to focus on gaps and catalyse the right kind of investments and flows – for example, through supporting entrepreneurship or facilitating remittances. Fourth, it should develop better humanitarian needs assessments that incorporate livelihoods and political analysis and involve government. Fifth, it should strengthen data literacy and data. Sixth, it should build a community of practice on tracking the wide range of resources in crises, from private flows to humanitarian, peace and development funding.

    This shift in perspective is critical to better reflect local agency and a more diverse set of resources that people in crisis rely on. Aid should be used not just to respond to gaps in need but to catalyse better and more effective use of flows beyond aid, which may be the best way to ‘localise’ the response.

     Your latest research focuses on the effects of the rapid adoption of digital technologies in the humanitarian sector. What problems has technology helped to solve, and what new challenges has it created?

    Digital approaches are certainly transforming humanitarian action in a number of ways, and there are examples of them making aid more effective, efficient and transparent: for instance, in collecting and analysing data, such as using drones to map disaster sites, volunteer ‘crowdmappers’ to process the data and machine learning to analyse large and complex datasets to improve targeting. Humanitarian programming can be streamlined through the seamless and secure transfer of digital payments to recipients or by using biometric verification of aid recipients for efficiency and security. Technology also connects and gives agency to affected people – for example through apps enabling them to contact first responders directly, or for aid recipients to give feedback to aid agencies and for volunteer networks to fundraise on social media with crowdfunders.

    However, there are increasing concerns about the dominance of technology in development and humanitarian assistance, and the risks such technologies can present in situations of armed conflict. The uncritical adoption of the latest technology fad is increasingly seen as particularly risky in humanitarian crises where more traditional methods of aid distribution may still make more sense. The vulnerability of people’s data when it is being generated in ever greater quantities is of paramount concern where people are in situations of conflict, with risks they could be targeted by hostile governments.

    Do you think that the use of technology has led to more inclusive and participatory processes? If not, what should be done so that technology lives up to its full potential?

    Inclusion is an important goal for the humanitarian sector, but it has proved difficult to achieve in practice, as we explore in our research. The careful adaptation of existing tools has indeed been used to increase coverage and inclusion – for instance by enabling participatory mapping of communities by residents of informal settlements, training drone pilots in affected countries and using messaging applications to disseminate information to displaced people. However, these benefits are still too often assumed to be a natural consequence of adopting technology-based approaches. In reality, differences in access to and use of technology, often along gender, income or racial lines, constitute a ‘digital divide’. This means that the benefits of these approaches are not evenly distributed and leave many excluded.

    Inclusion is also limited by in-built biases in many applied technologies – for instance facial recognition software, whose ‘coded gaze’ has not been taught to recognise diverse datasets of faces, or automated mapping technologies that lack the contextual understanding to recognise houses in disaster-affected areas.

    The uncritical adoption of technologies in crises may reinforce the ingrained power dynamics of the sector or violate humanitarian principles, either through a shift to digital registration that unintentionally excludes those most in need, or humanitarian independence being compromised through partnerships with the private sector, including the surveillance and security industry.

    Instead, these new tools will require active correction and contextual knowledge to be adapted to particular humanitarian crises, in order to include and protect the people humanitarian assistance is intended to serve. In some cases, tools such as biometric registration or mapping of improvised settlements may not be appropriate, with poor data protection practices presenting unacceptable risks to populations made vulnerable by persecution, conflict, or displacement.

    Get in touch with ODI through its website and Facebook page, or follow @hpg_odi on Twitter.

  • DISINFORMATION: ‘A moral case based on rigorous technical research can bring about change’

    CIVICUS speaks with Imran Ahmed, founding Chief Executive Officer of the Centre for Countering Digital Hate (CCDH), about the rise of disinformation and hate speech in the context of the pandemic, and the roles civil society can play in countering them. CCDH is an international civil society organisation that seeks to disrupt the architecture of online hate and misinformation. Founded in 2018, it develops strategies and runs campaigns to strengthen tolerance and democracy, as well as counterstrategies to resist new forms of hate and disinformation.


    How did the Centre for Countering Digital Hate get started and what is it trying to achieve?

    The Centre for Countering Digital Hate seeks to disrupt the production and distribution of content of hate and misinformation in digital spaces. It exists because digital channels have become one of the primary means through which we transmit information, establish social mores and behavioural or attitudinal norms, and create value as a society.

    As it happens, those spaces have been colonised by malignant actors who have undermined some of the basic precepts of our democracy. They use trolling to undermine tolerance and the liberal values that give everyone an equal voice in those spaces and use misinformation not only to destabilise the fundamental tenets of the scientific method but also to spread hate.

    We try to counter this by making malignant activity more costly. We use exposure and inoculation to make it more difficult and create costs, whether political, economic, or social, for those undertaking malignant activity.

    How did your work change under the COVID-19 pandemic?

    As early as February 2020, we pivoted the entire organisation towards fighting COVID-19 misinformation. We saw that extremist groups that were already on our radar were having discussions about COVID-19 as an opportunity, and any opportunity for a neo-Nazi is a threat to a civilised democratic society.

    We always try to put our efforts where there is most need. A few months back, in December 2019, we had done a study on vaccines and disinformation for the UK parliament’s All-Party Parliamentary Group on Vaccinations for All, so we were already aware that anti-vaxxers were a sophisticated group of misinformation actors. In a paper that we put together for the UK and US governments in April 2020, we expressed concern about a surge in xenophobia driven by the pandemic and deriving from psychological, sociopsychological and neurological factors. There is a correlation between disgust sensitivity – which is high in a pandemic – and xenophobia. We also realised that anti-vaxxers were a very sophisticated group of propagandists, and if they were able to professionalise the production of COVID-19 misinformation, they would cause a lot of trouble.

    How does COVID-19 disinformation connect with identity-based hate?

    At a very simple level, because of the correlation between disgust sensitivity and xenophobia, we can look at the research in social psychology by Michael Bang Petersen and at explanations by neuro-endocrinologists such as Robert Sapolsky, which tell us that disgust sensitivity and group thinking are co-located in the insular cortex of the brain. For a year and a half we have warned that there is a problem, as people have been primed at a really basic level, in the sense that if you view anyone who is different from you and outside of your group as a potential threat, it triggers frantic in-group thinking in your brain.

    We know this is going to be an ongoing problem, but we do not know its long-term ramifications. This could potentially set back some of the work we’ve done, for example on migrants’ rights or climate change and taking responsibility for what happens to the world and not just yourself. There is a lazy assumption that we are going to ‘build back better’ because people are feeling positive about things once they feel we are coming out of the pandemic, yet for the past year and a half we have been neurologically and psychologically primed to be very insular.

    What programmes and campaigns have you developed to reduce disinformation and hate?

    One of the things we do well is produce actionable intelligence. I think what is key about our model is that we do not produce raw data, or research, or even insight, which is the analysis of data in context. We produce actionable intelligence, which is insight plus an understanding of what it is that you can do to change things.

    Part of the problem with digital misinformation and hate is that people do not know what they can do about it because the platforms are resistant to doing anything and absolve themselves of the problem. We challenged this understanding through our work on anti-vaxxers.

    First, in late 2020 Facebook stated that anti-vax misinformation wasn’t banned on their platform, and then they changed that as a result of our research showing that misinformation causes harm. It may sound trite to say misinformation causes harm in a pandemic, but it does – on a scale that is both massive and grave – and we had to go out and prove it. Second, their platforms were uniquely being used by these bad actors to organise, and we had to prove that as well. Third, we produced the ‘Disinformation Dozen’, an analysis that showed that 12 anti-vaxxers were responsible for almost two-thirds of anti-vaccine misinformation circulating on social media platforms.

    When we put out this research, everyone from President Biden to physicians begged social media platforms to change their behaviour and take responsibility as publishers. They have the biggest audience of any publishing company in the world, 4.5 billion users, and they must take that responsibility seriously. Recently Google announced that they are going to take action against the Disinformation Dozen. This took CCDH 18 months of campaigning. We were told it was a freedom of speech issue and that it would lead nowhere, but we have shown that if you present a moral case based on rigorous technical research, you can shift views and force people to confront the ramifications of the technology they have created. I think we have shown that change is possible, and I am very proud of that.

    There are many areas affected by misinformation, from public health and migrants’ rights to sexual health and reproductive rights. In the last few months, for instance, we have taken on anti-abortion campaigners and violent extremist neo-Nazis in Ukraine, using the same model of rigorous research and strong campaigning. We put out a report showing that Google and Facebook were taking money from anti-abortion campaigners by running their ads. This means that they were enabling terrible organisations to spread misinformation that undermines women’s reproductive rights. In response to our report, they removed those ads the next day. What is more, as a result of our campaign, in recent weeks Heartbeat International and Live Action were banned from advertising on Google.

    How can civil society come together to put more pressure on governments and big tech companies to hold them accountable?

    We need more people who not only have good technical skills but also understand persuasion, campaigning and activism, and who can make and bolster the moral argument alongside their understanding of the technology. We live in a risk society, in which human-made risks and scientifically generated negative externalities increasingly make up what we campaign on, whether it is big tech undermining democracy and public health or climate change and the energy mix. In these areas it is more important than ever to understand that technical problems require moral arguments. You need to make the moral case and have the courage to make it, while also having a strong technical understanding of what is really going on.

    For example, if you want to make the case, as President Biden did, that Facebook are killing people, you have to nail down exactly how their technology functions and be absolutely certain before you state it. That is what we do on the basis of our research. It is important to start reaching out beyond our usual allies and build alliances across science, technology and campaigning.

    Get in touch with the Centre for Countering Digital Hate through its website or Facebook page, and follow @counterhate on Instagram and @CCDHate on Twitter.

  • EUROPEAN MEDIA FREEDOM ACT: ‘National security cannot justify the use of spyware on journalists’

    CIVICUS speaks about the role of civil society in the drafting process of the European Media Freedom Act with Jordan Higgins, Press and Policy Officer at the European Centre for Press and Media Freedom (ECPMF).

    Founded in 2015, ECPMF is a civil society organisation that seeks to promote, preserve and defend media freedom by monitoring violations, providing practical support and engaging diverse stakeholders across Europe.

    Why was the European Media Freedom Act (EMFA) needed?

    The EMFA aims to support media freedom and promote media pluralism in the European Union (EU). While media-related matters have traditionally fallen under the competence of member states, EU-wide action has become necessary due to the severity of the threats media freedom faces across Europe.

    The EMFA was introduced in September 2022 and underwent successive rounds of negotiations, culminating in a political agreement reached on 15 December 2023. It is comprehensive and seeks to address critical threats to media freedom, including the independence of public service broadcasters, concentration of media ownership and the capture of media through the allocation of state advertising, among other issues.

    It safeguards the right of audiences to access pluralistic media sources and establishes a European Board for Media Services, composed of national media authorities that will advise the European Commission on the consistent application of key provisions of the Act in all member states. It also focuses on ensuring the safety of journalists, protecting them and their sources from surveillance and the use of spyware.

    In sum, the EMFA is a crucial tool to address some of the major threats faced by journalists and protect the editorial and market independence of media.

    What did civil society bring to negotiations?

    This initiative aimed to strengthen press freedom in Europe and was widely welcomed by civil society, including us at ECPMF.

    From the early stages, media freedom organisations proposed critical amendments to specific aspects of the EMFA that did not comply with the highest media freedom standards. In particular, we pushed for greater transparency in media ownership, comprehensive rules regulating financial relations between the state and media, including the allocation of state advertising, and full protection of journalists from all forms of surveillance, including spyware. We also advocated for the independence of national media regulators and the European Board for Media Services.

    The process incorporated the perspectives of media freedom experts and journalists and culminated in the final trilogue negotiations between the European Parliament, Council and Commission. One of the key areas of interest for media freedom advocates during these negotiations was EMFA Article 4 on the protection of journalistic sources. In particular, we hoped to see the removal of provisions – promoted by Cyprus, Finland, France, Greece, Italy, Malta and Sweden – that included ‘threats to national security’ as justification for the use of spyware on journalists.

    To what extent did the final text address civil society concerns?

    Civil society, particularly media freedom organisations, advocated for a robust version of the EMFA that considered the needs of those most affected by it. Throughout the negotiation process, we voiced our objections both to demands from publishers’ groups and to proposed amendments to Article 4, which could have removed legal safeguards that shield journalists from the deployment of spyware under the pretext of national security. Fortunately, the final version no longer cites ‘national security’ as a justification for using spyware on journalists.

    Now our work will shift towards ensuring the effective implementation of the EMFA through active monitoring, particularly in EU member states where press freedom is under the greatest threat.


    Get in touch with ECPMF through its website or Facebook page, and follow @ECPMF on Twitter.

  • INNOVATION: ‘Conventional human rights structures and practices may no longer be optimal or sufficient’

    CIVICUS speaks with Edwin Rekosh, co-founder and managing partner of Rights CoLab, about the effects on civil society of the emergence of digital infrastructures and the importance of innovation and digital rights. Rights CoLab is a multinational collaborative organisation that seeks to develop bold strategies to advance human rights across the fields of civil society, technology, business and finance.

    What does Rights CoLab do?

    Rights CoLab generates experimental and collaborative strategies to address current challenges to human rights from a systemic perspective. In particular, we investigate and facilitate new ways of organising civic engagement and leveraging markets to bring about transformational change.

    We see opportunities to support civic engagement by building on trends outside the traditional philanthropic space. For example, we are interested in organisational models coming out of social enterprise, where there may be commercial revenue to sustain operations. We are also interested in the use of technology to reduce costs and achieve civil society goals without a formal organisational structure, for instance by running a website or an app. In addition, we are exploring generational change in the way younger people view their careers, with increasing numbers of young people seeking a work life that blends non-profit and for-profit career goals. We believe it’s imperative to develop more effective ways to collaborate, especially across borders, professional perspectives and fields of expertise.

    Among the challenges we seek to address is a resurgence of authoritarianism and populist politics, which has reinforced an emphasis on national sovereignty and the demonisation of local civil society organisations (CSOs) as perceived agents of antagonistic foreign values and interests. We also seek to address shifting geopolitical realities that are undermining the human rights infrastructure built in the last half century as well as the long-term legacies of colonial power dynamics. And we aim to develop new approaches to reining in the negative human rights impact of increasing corporate power, particularly in ways that have been aggravated by the pandemic.

    What was the inspiration behind the foundation of Rights CoLab?

    The decision to establish Rights CoLab was premised on the understanding that the human rights field has reached a mature stage, filled with challenges that raise questions about structures and practices that have become conventional, but may no longer be optimal or sufficient.

    I was a human rights lawyer who had transitioned from legal practice in a large law firm to working for a human rights organisation in Washington, DC. The experience I had managing a project in Romania in the early 1990s completely transformed how I viewed human rights and my role as an American lawyer. I started working hand in hand with locally based CSOs, playing a key role as a behind-the-scenes supporter and connector of civil society, linking CSOs to each other and to resources, and supporting the implementation of other solidarity-based strategies.

    Soon after, I founded and then served as president of PILnet, a global network for public interest and private sector lawyers within the civil society space. Around the time I decided to leave that role, I was becoming focused on the closing space for civil society that I saw happening around me, particularly affecting work we were doing in Russia and China. I wound up reconnecting with Paul Rissman and Joanne Bauer, the two other co-founders of Rights CoLab, and we began comparing notes about our respective concerns and ideas about the future of human rights. The three of us set up Rights CoLab as a way to continue the conversation, looking at current challenges in human rights from three very different perspectives. We wanted to create a space where we could continue that dialogue and bring in others to foster experimentation with new approaches.

    How much has the civil society arena changed in recent years due to the emergence of digital infrastructures?

    It has changed dramatically. One key consequence of the emergent digital infrastructure is that the public sphere has expanded in myriad ways. The role of the media is less constrained by borders and there is much less intermediation through editorial control. That represents both opportunity and threat for human rights. Individuals and groups can influence public discourse with fewer barriers to entry, but on the other hand, the public sphere is no longer regulated by governments in predictable ways, which erodes traditional means of accountability and makes it difficult to ensure a fair playing field for the marketplace of ideas. Digital technology also allows for solidarity across borders in ways that are much less constrained by some of the practical limitations of the past. In short, although new threats to human rights stem from the emergence of digital infrastructures, digital tools also offer opportunity.

    How crucial are digital rights and infrastructures to the work of civil society?

    In a lot of ways, digital rights are secondary to the structures, practices and values of civil society. Civil society is inherently derived from respect for human dignity, the creative spirit of human endeavour and the politics of solidarity. The modes in which people organise with each other in order to engage with the world around them depend primarily on socially oriented values, skills and practices. Digital technology can only provide tools, which do not inherently possess any of those characteristics. In that sense, digital technology is neither necessary to civil society organising, nor is it sufficient. Nevertheless, digital technologies can enhance civil society organising, both by exploiting some of the new opportunities inherent in the emerging digital infrastructure and by assuring the digital rights we need in order to avoid negative human rights consequences from the emergent digital infrastructure.

    We are making efforts to identify civil society approaches that can help address these issues. One example is Chequeado, an Argentine non-profit media outlet that is dedicated to verification of public discourse, countering disinformation and promoting access to information in Latin American societies. Chequeado, which exists as a tech platform and app, was able to adapt rapidly to respond to the COVID-19 pandemic by developing a fact-checker dashboard to dispel misinformation on the origins, transmission and treatment of COVID-19 and combat misinformation that leads to ethnic discrimination and growing mistrust in science. Therefore, while understanding the potential uses of digital technology is essential, so is keeping the focus on elements that have little to do with technology per se, such as values, solidarity and principle-based norms and institutions.

    How does Rights CoLab promote innovation in civil society?

    We pursue civil society innovation in several dimensions: how civil society groups organise themselves, including their basic structures and revenue models; how they use technology; and changes needed by the international civil society ecosystem to mitigate the negative effects of counter-productive power dynamics that stem from colonialism.

    For the first two dimensions, we have partnered with other resource hubs to co-create a geo-located map of case studies illustrating innovation in organisational forms and revenue models. We have developed a typology for this growing database of examples that emphasises alternatives to the traditional model for locally based civil society groups – in other words, alternatives to cross-border charitable funding. With our partners, we are also developing training methodologies and communication strategies that aim to facilitate further experimentation and wider adoption of alternative models for structuring and financing civil society activities.

    Our effort to improve the international civil society ecosystem relies on a systems-change project that we have launched under the name RINGO (Reimagining the International NGO). A key focus of the RINGO project is the intermediation between large international CSOs and more local civic spaces. The hypothesis is that international CSOs can be a barrier or an enabler to a stronger local civil society, and the way the sector is organised now – with dominant roles concentrated in the global north and west – needs a rethink.

    RINGO involves a Social Lab with 50 participants representing a wide range of sizes and types of CSOs, coming from a diversity of geographies. Over the course of a two-year process, the Social Lab will generate prototypes that can be tried out with the intention of radically transforming the sector and how we organise civil society at the global level. We hope to extract valuable lessons from the prototypes that can be replicated or reformulated and scaled. There are already many good practices, but there are also systemic dysfunctionalities that remain unaddressed. So we are looking for new, more transformational practices, processes and structures. While we don’t seek utopia, we do seek systemic change. Hence the inquiry process with the Social Lab is vital as we dig deep into the root issues that paralyse the system, moving beyond palliative, superficially appealing practices.

    Get in touch with Rights CoLab through its website and follow @rightscolab and @EdRekosh on Twitter.

  • JORDAN: ‘Commercial spyware that enables digital repression and abuse must be completely banned’

    CIVICUS speaks with Access Now about their forensic investigation that exposed the use of Pegasus spyware to target activists and journalists in Jordan. Access Now is an international civil society organisation that works to defend and extend the digital rights of people and communities at risk.


    What restrictions do Jordanian journalists and activists face?

    Over the past four years, the Jordanian government has dialled up its crackdown on the rights to freedom of association, expression and peaceful assembly. Journalists, human rights defenders, labour unions and activists are routinely harassed, detained and prosecuted under vague and draconian laws. In late 2022 and throughout 2023, several lawyers, journalists and activists were arrested in connection with protests or for their social media posts.

    Repression has deepened as a result of the new cybercrime law adopted in August 2023. This law threatens online freedom of expression on the basis of ambiguous and overly broad provisions about ‘spreading fake news’, ‘promoting, instigating, aiding or inciting immorality’, ‘online assassination of personality’, ‘provoking strife’ and ‘undermining national unity’. The law is now being weaponised to quash pro-Palestinian protests and activism in Jordan. Since 7 October 2023, hundreds of protesters expressing solidarity with Palestinians in Gaza have been detained and many others prosecuted under this draconian law.

    Our recent forensic investigation into the use of NSO Group’s Pegasus spyware in Jordan has revealed an additional layer of repression, with at least 35 people being targeted for no reason other than their peaceful political dissent and human rights advocacy.

    How is spyware used, and who is using it?

    In January 2022, Access Now and Front Line Defenders revealed that Pegasus spyware had been used to hack prominent Jordanian human rights lawyer Hala Ahed. Hala was hacked in March 2021, and it was an isolating and traumatic experience for her. Access Now then joined Citizen Lab to further investigate the use of Pegasus spyware in Jordan.

    Our joint forensic investigation uncovered a terrifyingly widespread use of Pegasus to target Jordanian media and civil society. We found traces of Pegasus spyware on the mobile devices of 30 activists, journalists, lawyers and civil society members. Further forensic analysis by our partners Human Rights Watch, Amnesty International’s Security Lab and the Organized Crime and Corruption Reporting Project identified five more Pegasus victims, bringing the total to 35.

    This is the largest pool of Pegasus victims uncovered in Jordan so far, but we believe actual numbers are much higher. We don’t know exactly who is behind these attacks because spyware manufacturers such as NSO Group make the identification of perpetrators of cyberattacks very hard.

    The NSO Group blatantly claims its surveillance technologies are crucial for governments to fight crime and terrorism. Conveniently, this is the same pretext used by dictators and repressive regimes to criminalise the work of journalists and activists and prosecute them under draconian counterterrorism and cybercrime laws. It’s a match made in hell, as a result of which activists and journalists are hacked, prosecuted, jailed, tortured and killed merely for exercising their rights or doing their job.

    What can activists and journalists do to protect themselves?

    Unfortunately, given their stealthy nature, there’s no bulletproof protection against spyware attacks. Zero-click spyware doesn’t require any interaction from the user of the hacked device. It simply exploits a vulnerability in the device’s software to infect it without the user’s knowledge.

    Still, there are some basic protection measures everyone should implement. For example, every time a vulnerability is discovered, Apple patches it, which means it’s important for users to ensure their device’s operating system is always up to date; otherwise the patch won’t be applied. Activists can also enable the Lockdown Mode feature on their Apple devices, which seems to be helping protect at-risk users.

    How does Access Now hold governments and companies accountable?

    For years, Access Now and broader civil society have been campaigning for a global moratorium on the export, sale, transfer, servicing and use of targeted digital surveillance technologies until rigorous human rights safeguards are put in place. Commercial spyware that enables digital repression and abuse worldwide, such as Pegasus, must be completely banned. We are not there yet, but this is our baseline to rein in the surveillance tech industry.

    There have been some positive steps toward holding spyware companies accountable. For instance, a number of Israeli spyware outfits, including NSO Group, Candiru and four Intellexa entities, were added to the US Department of Commerce’s list of entities engaging in activities contrary to the USA’s national security or foreign policy interests. The latest addition to the list was the Canada-based firm Sandvine, blacklisted for enabling digital repression in Egypt. In February 2024, the US State Department also announced a new visa sanctions policy that will deny visas to anyone involved in, facilitating or deriving financial benefit from the misuse of commercial spyware around the world.

    Civil society plays a vital role in exposing how these shady companies profit from facilitating human rights abuses around the world and demanding accountability for violations and reparation to spyware victims. Its continued work is key to holding governments and spyware companies accountable.


    Civic space in Jordan is rated ‘repressed’ by theCIVICUS Monitor.

    Get in touch with Access Now through its website or Facebook and Instagram pages, and follow @accessnow on Twitter.

  • NEPAL: ‘The TikTok ban signals efforts to control the digital space in the name of national sovereignty’

    CIVICUS speaks about the recent TikTok ban in Nepal with Anisha, provincial coordinator for Gandaki Pradesh at Body and Data.

    Founded in 2017, Body and Data is a civil society organisation promoting an accessible, safe and just digital space for all people in Nepal. Anisha, known by her digital name Aneekarma, oversees a project focused on online expression by women and LGBTQI+ people and leads Body and Data’s digital rights initiative in Nepal’s Gandaki province.

    Why did the Nepali government ban TikTok?

    The government has cited multiple reasons for banning TikTok. It cited concerns about a rise in cybercrime, the disruption of social harmony – mainly due to the circulation of ‘vulgar’ content that ‘damages societal values’ – and TikTok’s perceived promotion of a ‘begging culture’, as content creators use it to seek money or gifts from their audience during live sessions. They also invoked the fact that the platform is being banned in some global north countries, although those bans normally apply only to government phones.

    Ultimately, it all boils down to an attempt to restrict freedom of expression. TikTok has grown to be a significant platform. It serves a diverse audience including housewives, older people, small business owners and entrepreneurs. Recently, people began using TikTok to voice opinions and exercise free speech against the authorities, provoking anger and fear among political leaders who have stepped up surveillance.

    How will this ban impact on digital rights?

    Nepal is a democratic country where freedom of speech and expression are fundamental, and the ban on TikTok has raised concerns about these rights being compromised. These concerns have been exacerbated by the government’s plans to introduce a separate bill aimed at tightening control over social media.

    The enforcement of the TikTok ban infringes on the basic rights of freedom of expression and access to information. The platform was used not just for entertainment and for small enterprises to promote their products and services but also as a channel to share diverse opinions, engage in creative expression and amplify the voices of excluded communities, particularly women.

    Bans on popular social media platforms add complexity to the ongoing international debate regarding digital rights. There are growing concerns surrounding the intersection of technology, free expression and governance in the digital age. The TikTok ban sparks discussions on the delicate balance between government regulation and individual liberties.

    What potential privacy or security concerns arise from users shifting to other platforms?

    Since TikTok was banned, users have started migrating to alternative platforms, which raises further privacy and security concerns. It is paramount that digital rights are safeguarded during this transition.

    User education and awareness campaigns on privacy and security best practices are needed to enhance digital literacy. Users must be confident that their personal information is well protected. Transparent data practices, including clear information on data collection and usage, are vital for building user trust and enabling informed decision-making.

    The influx of new users to alternative platforms may also introduce potential cybersecurity threats. Platforms should continuously invest in security measures such as encryption protocols, regular audits and prompt vulnerability fixes. It is also essential to implement user authentication and verification mechanisms to mitigate risks such as fake accounts and identity theft.

    The situation in Nepal raises additional concerns due to the government’s limited understanding of cybersecurity. The absence of consultation with experts before this type of decision is made poses severe risks, as evidenced by instances of people’s personal data being exposed and government websites being hacked.

    The TikTok ban only made the gap in the oversight of data privacy clearer. A comprehensive approach is required to address these issues, integrating technological measures, transparent policies, education initiatives and regulatory frameworks to ensure robust safeguards for user privacy and digital rights.

    What are the global implications of the growing trend of TikTok bans?

    The growing trend of countries considering or implementing bans on TikTok due to security concerns reflects a global unease surrounding potential risks associated with the platform. Often intertwined with geopolitical tensions, the TikTok ban signals broader government efforts to control the digital space in the name of national sovereignty. These bans underscore an intensified scrutiny of data privacy and security practices on digital platforms, with governments expressing reservations about the potential misuse of user data.

    This trend is reshaping the global tech landscape, prompting questions about the dominance of specific platforms and the role of international tech companies. Governments face a significant challenge in striking a delicate balance between encouraging innovation and implementing regulations to address security and privacy concerns.

    As users encounter bans on TikTok, they may migrate to alternative platforms, fostering increased competition and influencing user demographics and content trends. This trend emphasises the need for international collaboration on digital standards and regulations to address security concerns and establish a framework for responsible behaviour in the global digital arena.

    Ultimately, bans on TikTok carry broader implications for the future of digital platforms, shaping discussions on user awareness, advocacy and the delicate interplay between innovation and regulation in the evolving digital landscape.

    How can governments regulate platforms without compromising people’s rights to free expression and privacy?

    Governments face the complex challenge of regulating social media platforms to combat misinformation and disinformation while also safeguarding their citizens’ rights to free expression and privacy. Sophisticated strategies are required to achieve a balance between national security imperatives and global digital rights.

    Just as TikTok has established its own guidelines regarding harmful content, governments can collaborate with technology companies to define clear and transparent standards for social media conduct that do not compromise people’s right to express their opinions, but rather that counteract misinformation. It is crucial to implement robust fact-checking mechanisms and foster media literacy to empower users to distinguish between reliable and deceptive information.

    International collaboration to standardise regulations is key to preventing the infringement of digital rights across borders. The adoption of privacy-enhancing technologies, such as end-to-end encryption, preserves individual privacy while facilitating uninhibited self-expression. It is paramount to recognise that state-controlled surveillance and censorship directly threaten our freedom of expression. Rather than resorting to outright bans, governments should prioritise measures that address concerns about misinformation and privacy to strike a nuanced balance that safeguards fundamental rights.


    Civic space in Nepal is rated ‘narrowed’ by theCIVICUS Monitor.

    Get in touch with Body and Data through its website or Instagram page, and follow @bodyanddata and @aneekarma on Twitter.

  • TAIWAN: ‘China will do to us what it did to Hong Kong, and what it has long done to Tibetans and Uighurs’

    CIVICUS speaks about the situation in Taiwan with Min-Hsuan Wu, known as ttcat, a social movement activist and campaigner and co-founder and CEO of Doublethink Lab.

    Founded in 2019, Doublethink Lab is a civil society organisation (CSO) focused on researching malign Chinese influence operations and disinformation campaigns and their impacts, bridging the gap between the democracy movement, tech communities and China experts, and facilitating a global civil society network to strengthen democratic resilience against digital authoritarianism.

    What is the story behind Doublethink Lab?

    Doublethink Lab was founded three years ago, in September 2019. Four years ago, we experienced a tremendous amount of disinformation influencing our 2018 local elections. After those elections, there were many signals and leads pointing to information operations, mostly disinformation campaigns, all affiliated with or supported by China.

    We realised that to tackle the challenge of strengthening and safeguarding our democracy we needed people to combine their talents and diverse professional backgrounds into a project focused on digital defence.

    Our main mandate is to produce a better understanding of how Chinese external propaganda functions and effectively influences political processes and public opinion elsewhere, including in Taiwan.

    Our strategy to combat disinformation differs from the usual fact-checking initiatives. Our work isn’t published in fact-checking reports. Instead, we follow the disinformation to try to understand who is spreading it and whether it is being spread by our citizens dynamically or by other kinds of actors funded by the Chinese state. Often, when analysing social media posts, it is possible to see the huge structure made up of Chinese bots liking, sharing and retweeting disinformation.

    What is the likely outcome of rising Chinese aggression toward Taiwan?

    It’s not news that tensions between Taiwan and China are increasing. China is increasingly using ‘grey zone’ tactics to push boundaries, increase pressure and influence people. Through various means, China is threatening Taiwanese people. This clearly increases the chance that the situation will escalate into a Chinese invasion of Taiwan.

    Most military experts would agree that this won’t happen right now, with Xi Jinping having just secured his third term as chairman of the Chinese Communist Party and awaiting confirmation of a third term as president of China. Some say an invasion could occur in 2025 or 2027, but I think it will depend on how strongly the Taiwanese people can defend themselves from now on: if our resistance increases, the costs of an invasion for China increase accordingly. Our resistance might therefore postpone the crystallisation of China’s wishes for a bit longer.

    On the other hand, China’s tactics may be backfiring: as China escalates militarily against us, the Chinese narrative is becoming less and less popular in Taiwan. More and more people have realised China is not a good neighbour. It is no longer thought of as a business opportunity for us but as a potent threat to our ways of life, our livelihoods and our lives. China’s aggressive attitude is pushing Taiwanese people towards embracing defence tactics to protect our country, which is a positive thing for us. We are much more aware of the need to build strong national and civil defence now.

    Did the recent visit by US House Speaker Nancy Pelosi make any difference, for better or worse?

    Pelosi’s visit didn’t complicate the situation, but whether we see it as helpful or not depends on the perspective we look at it from. Her visit in August 2022 was meant as a show of support to Taiwan, and happened despite China’s threats of retaliation. It was the first visit by a US House Speaker in a quarter of a century. From a democracy or human rights perspective, it was quite beneficial. Pelosi spoke up against China’s human rights violations and the challenges posed by totalitarian regimes. Her presence brought visibility to our country’s situation regarding China. It put a spotlight on it, and now people see how China treats us and what a destabilising factor it is for the region. It clearly bothered China, judging by the way it reacted to it on the international stage.

    From a geopolitical and military perspective, Pelosi’s visit didn’t produce any benefit. It didn’t – couldn’t – bring any kind of peaceful dialogue. China’s vision and military exercises won’t change. But Pelosi’s visit didn’t complicate the situation; it just brought it under the spotlight, so more Western media are paying attention to Taiwan. This kind of attention is opening up many windows of opportunity for Taiwan to collaborate with other countries and agencies. No one knows what will come out of this, but from what I’ve seen so far, increased opportunities for international collaboration may improve our chances of safety.

    What would it take to bring peace and stability to the region?

    That’s a huge question. For me, the ultimate solution would be the opening up of civic space and the democratisation of China, Russia and other totalitarian regimes in Southeast Asia. However, we know this is too big a hope and it’s not really up to us.

    There used to be a civil society in China, but under Xi’s rule civic space has been continuously shrinking for 10 years. More and more activists are getting arrested. We all saw what happened recently in Hong Kong: China cracked down hard on civic movements and arrested people for even having a podcast – regular citizens were sent to jail just in case. China shut down all forms of civic expression, including news agencies. China will do to Taiwan what it did to Hong Kong, and what it has long done to Tibetans and Uighurs within China.

    If you ask me, I would say peace would require the demise of the Chinese Communist Party, although people think I am crazy when I put it this way. From our perspective, this is the only lasting solution. If you have an aggressive, expansionist neighbour trying to invade you, attaining peace is quite hard because it is not up to you. There can’t be peace unless your neighbour changes.

    Without justice there won’t be any peace. I’m not sure which kind of peace people wish to see: I think they are wrong if they define peace as just the absence of war. If that’s what they want, they can move to Hong Kong. Hong Kong is peaceful now – there are no mobilisations, no protests, no disorder. But is this really peace? It’s just an illusion: people are quiet because they lost their rights and freedoms. This is not the kind of peace we want for Taiwan.

    We need to find a way to open up civic space and bring democracy to the region – that is the only way forward.

    How is Taiwanese civil society working to make this happen?

    Lots of Taiwanese CSOs are working to limit China’s influence in the region, especially in Taiwan. There is an organisation called Economic Democracy Union that conducts serious research about Chinese influence on our economy; its work shows how Chinese collaborators pretend to be Taiwanese companies and penetrate very sensitive industries such as electronics or e-commerce – industries that capture lots of personal data. Economic Democracy Union brings these issues to the surface with the aim of promoting new regulations to protect us from these influence-seeking tactics.

    There are also many CSOs working to strengthen civic defence, which isn’t just war-related, but rather focused on preparedness for disaster or any kind of military operation; their goal is to teach citizens how to react in these cases.

    Right now, Doublethink Lab is investigating China’s information operations. We do election monitoring and try to disclose disinformation campaigns or far-fetched narratives flooding into Taiwanese media. We are building a global network to bridge the gap between academia and civil society on a global scale. We want people to know what Chinese influence looks like in different countries, the channels it travels through, its tactics and its final goals.

    Doublethink Lab isn’t the only organisation advocating for digital defence. There are several others focusing on Chinese media influence, disinformation campaigns, fact-checking processes and civic education to identify fake news, among other related issues.

    What support does Taiwanese civil society need from the international community?

    We need resources. Most Taiwanese CSOs are small grassroots organisations. People tend to view Taiwan as a rich country with a very prosperous economy, but the truth is that civil society movements struggle a lot. Human rights CSOs and those working to counter Chinese influence usually have fewer resources than a regular charity. CSOs need more resources to be able to recruit new talent.

    Right now is the perfect time to ask ourselves what we really need. I always ask my fellow activists what they need, and their answers closely resemble those of activists in Hong Kong or Ukraine. The international community can also help by exposing Taiwan’s struggle. We don’t want people to think our issues are disconnected from those of the rest of the world – we want to become closer and we want to be understood. We need more connections with CSOs in the rest of the world. We need all forms of help to prepare for what’s coming.


    Civic space in Taiwan is rated ‘open’ by the CIVICUS Monitor.

    Get in touch with Doublethink Lab through its website and follow @doublethinklab and @TTCATz on Twitter.

  • UN CYBERCRIME TREATY: ‘Civil society is fact-checking the arguments made by states’

    CIVICUS speaks with Ian Tennant about the importance of safeguarding human rights in the ongoing process to draft a United Nations (UN) Cybercrime Treaty.

    Ian is the Chair of the Alliance of NGOs on Crime Prevention and Criminal Justice, a broad network of civil society organisations (CSOs) advancing the crime prevention and criminal justice agenda through engagement with relevant UN programmes and processes. He’s the Head of the Vienna Multilateral Representation and Resilience Fund at the Global Initiative Against Transnational Organized Crime, a global CSO headquartered in Geneva, focused on research, analysis and engagement on all forms of organised crime and illicit markets. Both organisations participate as observers in negotiations for the UN Cybercrime Treaty.

    Why is there need for a UN treaty dealing with cybercrime?

    There is no consensus on the need for a UN treaty dealing with cybercrime. The consensus-based bodies dealing with cybercrime at the UN, primarily the UN Commission on Crime Prevention and Criminal Justice (CCPCJ), had been unable to agree on the need for a treaty since the issue was first officially raised at the UN Crime Congress in 2010, and in 2019 the question was taken to a vote at the UN General Assembly. The resolution starting the process towards a treaty was passed with minority support, due to a high number of abstentions. Nevertheless, the process is now progressing and member states on all sides of the debate are participating.

    The polarisation of positions on the need for the treaty has translated into a polarisation of views of how broad the treaty should be – with those countries that were in favour of the treaty calling for a broad range of cyber-enabled crimes to be included, and those that were against the treaty calling for a narrowly focussed treaty on cyber-dependent crimes.

    What should be done to ensure the treaty isn’t used by repressive regimes to crack down on dissent?

    Balancing effective measures against cybercrime and human rights guarantees is the fundamental issue that needs to be resolved by this treaty negotiation process, and at the moment it is unclear how this will be accomplished. The most effective way to ensure the treaty is not used to crack down on dissent and other legitimate activities is to ensure a treaty focused on a clear set of cyber-dependent crimes with adequate and clear human rights safeguards present throughout the treaty.

    In the absence of a digital rights treaty, this treaty has to provide those guarantees and safeguards. If a broad cooperation regime without adequate safeguards is established, there is a real risk that the treaty could be used by some states as a tool of oppression and suppression of activism, journalism and other civil society activities that are vital in any effective crime response and prevention strategy.

    How much space is there for civil society to contribute to the negotiations process?

    The negotiations for the treaty have been opened for CSOs to contribute to the process through an approach that does not allow states to veto individual CSOs. There is space for CSOs to bring in their contributions under each agenda item, and through intersessional meetings where they can present and lead discussions with member states. This process is in some ways a model that other UN negotiations could follow as a best practice.

    CSOs, as well as the private sector, are bringing vital perspectives to the table on the potential impacts of proposals made in the treaty negotiations, on practical issues, on data protection and on human rights. Fundamentally, CSOs are providing fact-checking and evidence to back up or challenge the arguments made by member states as proposals are made and potential compromises are discussed.

    What progress has been made so far, and what have been the main obstacles in the negotiations?

    On paper, the Ad Hoc Committee has only two meetings left until the treaty is supposed to be adopted – one meeting will take place in August and the other in early 2024. The Committee has already held five meetings, during which the full range of issues and draft provisions to be included in the treaty have been discussed. The next stage will be for a draft treaty to be produced by the Chair, and then for that draft to be debated and negotiated in the next two meetings.

    The main obstacle has been the existence of quite fundamental differences in visions for the treaty – from a broad treaty allowing for criminalisation of and cooperation on a diverse range of offences to a narrow treaty focussed on cyber-dependent crimes. Those different objectives mean that the Committee has so far lacked a common vision, which is what negotiations need to discover in the coming months.

    What are the chances that the final version of the treaty will meet international human rights standards while fulfilling its purpose?

    Whether the treaty will have a meaningful impact on cybercrime while also staying true to international human rights standards and the general human rights ethos of the UN is up to the negotiators from all sides, and how far they are willing to move in order to achieve agreement. This is the optimal outcome, but given the current political atmosphere and challenges, it will be hard to achieve.

    There is a chance the treaty could be adopted without adequate safeguards, with the consequence that only a small number of countries ratify it, diminishing its usefulness but also confining the human rights risks to the countries that sign up. There is also a chance the treaty could have very high human rights standards but again attract few ratifications – limiting its usefulness for cooperation but neutralising its human rights risks.


    Get in touch with the Alliance of NGOs on Crime Prevention and Criminal Justice through its website and follow @GI_TOC and @IanTennant9 on Twitter.

  • UN CYBERCRIME TREATY: ‘This is not about protecting states but about protecting people’

    CIVICUS speaks with Stéphane Duguin about the weaponisation of technology and progress being made towards a United Nations (UN) Cybercrime Treaty.

    Stéphane is an expert on the use of disruptive technologies such as cyberattacks, disinformation campaigns and online terrorism and the Chief Executive Officer of the CyberPeace Institute, a civil society organisation (CSO) founded in 2019 to help humanitarian CSOs and vulnerable communities limit the harm of cyberattacks and promote responsible behaviour in cyberspace. It conducts research and advocacy and provides legal and policy expertise in diplomatic negotiations, including the UN Ad Hoc Committee elaborating the Cybercrime Convention.

    Why is there need for a new UN treaty dealing with cybercrime?

    Several legal instruments dealing with cybercrime already exist, including the 2001 Council of Europe Budapest Convention on Cybercrime, the first international treaty aimed at addressing cybercrime and harmonising legislation to enhance cooperation in the area of cybersecurity, which had been ratified by 68 states around the world as of April 2023. This was followed by regional instruments such as the 2014 African Union Convention on Cyber Security and Personal Data Protection, among others.

    But the problem with these instruments is that they aren’t enforced properly. The Budapest Convention has not even been ratified by most states, although it is open to all. And even when they’ve been signed and ratified, these instruments aren’t operationalised. This means that data is not accessible across borders, international cooperation is complicated to achieve and requests for extradition are not followed up on.

    There is urgent need to reshape cross-border cooperation to prevent and counter crimes, especially from a practical point of view. States with more experience fighting cybercrimes could help less resourced ones by providing technical assistance and helping build capacity.

    This is why the fact that the UN is currently negotiating a major global Cybercrime Convention is so important. In 2019 the UN General Assembly established the Ad Hoc Committee to elaborate a ‘Comprehensive International Convention on Countering the Use of Information and Communication Technologies for Criminal Purposes’ – a Cybercrime Convention in short – coordinating the efforts of member states, CSOs, including the CyberPeace Institute, academic institutions and other stakeholders. This will be the first international legally binding framework for cyberspace.

    The aims of the new treaty are to reduce the likelihood of attacks, and when these happen, to limit the harm and ensure victims have access to justice and redress. This is not about protecting states but about protecting people.

    What were the initial steps in negotiating the treaty?

    The first step was to take stock of what already existed and, most importantly, what was missing in the existing instruments in order to understand what needed to be done. It was also important to measure the efficacy of existing tools and determine whether they weren’t working due to their design or because they weren’t being properly implemented. Measuring the human harm of cybercrime was also key to define a baseline for the problem we’re trying to address with the new treaty.

    Another step, which interestingly has not been part of the discussion, would be an agreement among all state parties to stop engaging in cybercrimes themselves. It’s strange, to say the least, to be sitting at the table discussing definitions of cyber-enabled and cyber-dependent crimes with states that are conducting or facilitating cyberattacks. Spyware and targeted surveillance, for instance, are being mostly financed and deployed by states, which are also financing the private sector by buying these technologies with taxpayers’ money.

    What are the main challenges?

    The main challenge has been to define the scope of the new treaty, that is, the list of offences to be criminalised. Crimes committed with the use of information and communication technologies (ICTs) generally belong to two distinct categories: cyber-dependent crimes and cyber-enabled crimes. States generally agree that the treaty should include cyber-dependent crimes: offences that can only be committed using computers and ICTs, such as illegally accessing computers, performing denial-of-service attacks and creating and spreading malware. If these crimes weren’t part of the treaty, there wouldn’t be a treaty to speak of.

    The inclusion of cyber-enabled crimes, however, is more controversial. These are offences that are carried out online but could be committed without ICTs, such as banking fraud and data theft. There’s no internationally agreed definition of cyber-enabled crimes. Some states consider offences related to online content, such as disinformation, incitement to extremism and terrorism, as cyber-enabled crimes. These are speech-based offences, the criminalisation of which can lead to the criminalisation of online speech or expression, with negative impacts on human rights and fundamental freedoms.

    Many states that are likely to be future signatories to the treaty use this kind of language to crack down on dissent. However, there is general support for the inclusion of limited exceptions for cyber-enabled crimes such as online child sexual exploitation and abuse and computer-related fraud.

    There is no way we can reach a wide definition of cyber-enabled crimes unless it’s accompanied by very strict human rights safeguards. In the absence of safeguards, the treaty should encompass a limited scope of crimes. But there’s no agreement on a definition of safeguards and how to put them in place, particularly when it comes to personal data protection.

    For victims as well as perpetrators, there’s absolutely no difference between cyber-enabled and cyber-dependent crimes. If you are a victim, you are a victim of both. A lot of criminal groups – and state actors – are using the same tools, infrastructure and processes to perform both types of attacks.

    Even though there’s a need to include more cyber-enabled crimes, the way it’s being done is wrong, as there are no safeguards or clear definitions. Most states that are pushing for this have abundantly demonstrated that they don’t respect or protect human rights, and some – including China, Egypt, India, Iran, Russia and Syria – have even proposed to delete all references to international human rights obligations.

    Another challenge is the lack of agreement on how international cooperation mechanisms should follow up to guarantee the practical implementation of the treaty. The ways in which states are going to cooperate and the types of activities they will perform together to combat these crimes remain unclear.

    To prevent misuse of the treaty by repressive regimes we should focus both on the scope of criminalisation and the conditions for international cooperation. For instance, provisions on extradition should include the principle of dual criminality, which means an act should not be extraditable unless it constitutes a crime in both the countries making and receiving the request. This is crucial to prevent its use by authoritarian states to persecute dissent and commit other human rights violations.

    What is civil society bringing to the negotiations?

    The drafting of the treaty should be a collective effort aimed at preventing and decreasing the amount of cyberattacks. As independent bodies, CSOs are contributing to it by providing knowledge on the human rights impacts and potential threats and advocating for guarantees for fundamental rights.

    For example, the CyberPeace Institute has been analysing disruptive cyberattacks against healthcare institutions amid COVID-19 for two years. We found at least 500 cyberattacks leading to the theft of data of more than 20 million patients. And this is just the tip of the iceberg.

    The CyberPeace Institute also submits recommendations to the Committee based on a victim-centric approach, involving preventive measures, evidence-led accountability for perpetrators, access to justice and redress for victims and prevention of re-victimisation.

    We also advocate for a human-rights-by-design approach, which would ensure full respect for human rights and fundamental freedoms through robust protections and safeguards. The language of the Convention should refer to specific human rights frameworks such as the Universal Declaration of Human Rights and the International Covenant on Civil and Political Rights. It is important that the fight against cybercrime should not pit national security against human rights.

    This framing is especially significant because governments have long exploited anti-cybercrime measures to expand state control, broaden surveillance powers, restrict or criminalise freedoms of expression and assembly and target human rights defenders, journalists and political opposition in the name of national security or fighting terrorism.

    In sum, the goal of civil society is to demonstrate the human impact of cybercrimes and make sure states take this into consideration when negotiating the framework and the regulations – which must be created to protect citizens. We bring in the voices of victims, the most vulnerable ones, whose daily cybersecurity is not properly protected by the current international framework. And, as far as the CyberPeace Institute is concerned, we advocate for the inclusion of a limited scope of cybercrimes with clear and narrow definitions to prevent the criminalisation of behaviours that constitute the exercise of fundamental freedoms and human rights.

    At what point in the treaty process are we now?

    A consolidated negotiating document was the basis for the second reading done in the fourth and fifth sessions held in January and April 2023. The next step is to release a zero draft in late June, which will be negotiated in the sixth session that will take place in New York between August and September 2023.

    The process normally culminates with a consolidation by states, which is going to be difficult since there’s a lot of divergence and a tight deadline: the treaty should be taken to a vote at the 78th UN General Assembly session in September 2024.

    There’s a bloc of states looking for a treaty with the widest possible scope, and another bloc leaning towards a convention with a very limited scope and strong safeguards. But even within this bloc there is still disagreement when it comes to data protection, the approach to security and the ethics of specific technologies such as artificial intelligence.

    What are the chances that the final version of the treaty will meet international human rights standards while fulfilling its purpose?

    Considering how the process has been going so far, I’m not very optimistic, especially on the issue of upholding human rights standards, because of the crucial lack of definition of human rights safeguards. We shouldn’t forget negotiations are happening in a context of tense geopolitical confrontation. The CyberPeace Institute has been tracing the attacks deployed since the start of Russia’s full-scale invasion of Ukraine. We’ve witnessed over 1,500 campaigns of attacks with close to 100 actors involved, many of them states, and impacts on more than 45 countries. This geopolitical reality further complicates the negotiations.

    The text that’s on the table right now falls short of its potential to improve the lives of victims in cyberspace. This is why the CyberPeace Institute remains committed to the drafting process – to inform and sensitise the discussions towards a more positive outcome.


     

    Get in touch with the CyberPeace Institute through its website or its Facebook page, and follow @CyberpeaceInst and @DuguinStephane on Twitter.


     

  • USA: ‘We should shift away from overreliance on policing and promote community-based solutions’

    CIVICUS speaks about police violence in the USA with Abdul Nasser Rad, Managing Director of Research and Data at Campaign Zero (CZ).

    Launched in 2015, CZ is an activist-led and research-driven civil society organisation that works to end police violence and promote public safety beyond policing.

    What factors affect the level of police brutality in the USA?

    Police violence remains a threat in some parts of the country, and particularly to some communities. In 2022, US law enforcement officers killed 1,251 people. While this number is the highest to date since our data tracking began in 2013, it’s crucial to note that trends vary across regions. Some cities have witnessed an increase in such incidents, while others have seen improvements.

    Several factors help explain variations in police violence and use of force across the USA. Racial segregation and socio-economic neighbourhood indicators, for instance, have been found to predict police violence, along with individual-level demographic factors such as the race of the officer involved.

    A combination of historical disinvestment and a societal tendency to respond to social issues with enforcement and prison-related measures rather than restorative or human-centred solutions are leading drivers of the disproportionate impact police violence has on communities of colour. A book by Khalil Gibran Muhammad, The Condemnation of Blackness, provides a comprehensive analysis of the myth of Black criminality and the use of the carceral state in perpetuating the second-class treatment of Black people in the USA.

    How are you working to end police violence?

    Our approach is to work both on immediate harm reduction and long-term transformational change, aiming to reshape the way society approaches public safety.

    CZ provides robust, accurate and up-to-date data on police violence in the USA, which is critical given the absence of such efforts by the federal government. We develop comprehensive datasets that help identify where harm is being caused and pilot solutions to remove the harm. We prioritise transparency and make all our work public. The campaigns we develop are meant to be accessible so other organisations and activists can take the lead in implementing similar initiatives.

    We align with the transformational change perspective. We recognise that the current system is deeply flawed and requires radical rethinking. At the same time, we see the value in harm reduction as a necessary parallel strategy in the short term.

    Our efforts are concentrated in two main areas. First, we engage in harm reduction initiatives through several campaigns. For example, ‘8 Can't Wait’ focuses on reducing police killings by advocating for the adoption of eight policies that restrict the use of force. Since the launch of the campaign in June 2020, over 340 cities have restricted the use of force and 19 states have changed their policies. Changes include banning chokeholds, implementing a duty to intervene, requiring de-escalation and requiring that all alternatives be exhausted before deadly force is used.

    A campaign aimed at reducing unnecessary police deployment, ‘Cancel ShotSpotter’, achieved the cancellation of contracts or the prevention of the expansion of contracts in several large metropolitan centres. ShotSpotter’s technology often mistakes loud noises for gunshots, leading to more police encounters with civilians, sometimes resulting in fatal outcomes. Another campaign, ‘End All No Knocks’, was launched after the tragic police killing of Breonna Taylor, and seeks the cessation of no-knock warrants. It has resulted in six states restricting their use.

    While running these campaigns, we also actively work towards systemic change, consisting of the dismantling and transformation of the policing system. Beyond harm reduction, our goal is to fundamentally transform public safety strategies. We advocate for a shift away from overreliance on policing and instead promote holistic, community-based solutions that prioritise safety and wellbeing for everyone.

    What challenges have you faced in doing your work?

    A common challenge relates to data inconsistencies, lack of data transparency and ensuring the accuracy of our data platforms and analyses.

    But one of the most severe challenges lies not in the data but in the ways it can be used to promote harm rather than foster more thoughtful approaches. For example, when the crime rate increases, the system responds with enforcement and incarceration rather than human and restorative solutions. It’s devastating to see the same punitive strategies deployed over and over again. Having to combat fear and punitive social responses distracts us from our long-term work of dismantling oppressive systems, creating frustration and a sense of moving backwards.

    At its core, the problem is that society doesn’t treat or view every individual as a human being of equal value. If it did, it wouldn’t support punitive responses to people experiencing crises. It can be frustrating to work towards dismantling this system while simultaneously mitigating harm from the same system we’re trying to dismantle.

    We confront challenges and failures daily, often facing more obstacles than successes. This is the nature of social justice and liberation work. So building resilience is critical. It’s vital to maintain faith and keep engaging in restorative practices. The commitment and joy in the work endure as long as hope is kept alive and a vibrant community surrounds you.

    How do you collaborate with other local and international stakeholders?

    Our work is with and for the communities most impacted by the US carceral system. Domestically, we collaborate with any stakeholder willing to advance solutions aligned with our values. Direct engagement with stakeholders of diverse ideologies is necessary for policy change. As the intersectional feminist writer Audre Lorde noted, it is not our differences that divide us, but our inability to recognise, accept and celebrate those differences.

    We are just beginning to build international relationships. Over the past year, we’ve engaged with the international community through sharing our research and expertise in building robust data systems and contributed to the United Nations High Commissioner for Human Rights’ efforts to develop best practices on fatality counts and in-custody deaths.

    To achieve our mission, we need to keep building trust, and we do this by making our work as transparent, robust and easily accessible as possible. Partnerships will help us secure resources to sustain the work and gather the feedback we need to continuously improve.


    Civic space in the USA is rated ‘narrowed’ by the CIVICUS Monitor.

    Get in touch with Campaign Zero through its website or Facebook page, and follow @CampaignZero on Twitter.

  • We need new ways to protect people in the digital era

    By Danny Sriskandarajah

    In an age of ever-advancing, ever-encroaching technology, how do we ensure that our basic rights are protected? New technologies and the speed of progress may have many positive impacts on our lives, but the fact that they are poorly regulated and barely understood by the public poses serious threats.

    Read on: The Sydney Morning Herald 

     

CONNECT WITH US

DIGITAL CHANNELS

HEADQUARTERS
25 Owl Street, 6th Floor
Johannesburg,
South Africa,
2092
Tel: +27 (0)11 833 5959
Fax: +27 (0)11 833 7997

UN HUB: NEW YORK
CIVICUS, c/o We Work
450 Lexington Ave
New York
NY 10017
United States

UN HUB: GENEVA
11 Avenue de la Paix
Geneva
Switzerland
CH-1202
Tel: +41.79.910.34.28