CIVICUS speaks with Nadia Benaissa, legal policy advisor at Bits of Freedom, about the risks artificial intelligence (AI) poses to human rights and the role civil society is playing in developing a legal framework for AI governance.
Founded in 2000, Bits of Freedom is a Dutch civil society organisation (CSO) that aims to protect the rights to privacy and freedom of communication by influencing legislation and policy on technologies, giving policy advice, raising awareness and undertaking legal action. Bits of Freedom also took part in the negotiations of the European Union AI Act.
What risks does AI pose to human rights?
AI poses significant risks because it can exacerbate preexisting, deeply ingrained social inequalities. Among the rights affected are the rights to equality, freedom of religion, freedom of speech and the presumption of innocence.
In the Netherlands we have documented several instances of algorithmic systems violating human rights. One such case was the Childcare Benefit scandal, in which parents receiving childcare benefits were unfairly targeted and profiled. The profiling predominantly affected people of colour, people on low incomes and Muslims, whom the tax authority falsely accused of committing fraud. The parents and caregivers who were flagged had their benefits suspended and their cases subjected to hostile investigations, with severe financial repercussions.
Another example is the ‘Top400’ crime prevention programme implemented by the municipality of Amsterdam, which profiles minors and young people to identify the 400 considered most likely to commit offences. This practice disproportionately affects children from lower-income families and children of colour, since the system’s geographic focus is skewed towards low-income and migrant neighbourhoods.
In these cases, the unethical use of AI tools caused immense distress for the people affected. The lack of transparency in how automated decisions were made only made the search for justice and accountability harder, as many victims found it challenging to prove the systems’ biases and errors.
Are there any ongoing attempts to regulate AI?
There is an ongoing process at the European level. In 2021, the European Commission (EC) proposed a legislative framework, the European Union (EU) AI Act, to address the ethical and legal challenges associated with AI technologies. The EU AI Act’s main goal is to create a comprehensive set of rules to govern the development, deployment and use of AI across EU member states. It seeks to keep a balance between promoting innovation and ensuring the protection of fundamental rights and values.
This holds significant importance: it is a unique opportunity for Europe to distinguish itself by prioritising the protection of human rights in AI governance. However, the Act hasn’t yet been approved. A version of it was passed by the European Parliament in June, but a final round of negotiations, known as the ‘trilogue’, must still take place between the EC, the Council and the European Parliament. The EC is pushing to finish the process by the end of the year so the Act can be put to a vote before the 2024 European Parliament elections.
This trilogue has considerable challenges to overcome to achieve a comprehensive and effective AI Act. Contentious issues abound, including AI definitions and high-risk categories, as well as implementation and enforcement mechanisms.
What is civil society, including Bits of Freedom, bringing to the negotiating table?
As negotiations of the Act proceed, a coalition of 150 CSOs, including Bits of Freedom, is urging the EC, the Council and Parliament to prioritise people and their fundamental rights.
Alongside other civil society groups, we have actively collaborated to draft amendments and engaged in numerous discussions with members of the European and Dutch Parliaments, policymakers and various other stakeholders. We pushed firmly for concrete and robust prohibitions, such as those concerning biometric identification and predictive policing. We also emphasised the importance of transparency, accountability and effective redress in the use of AI systems.
We have made significant advocacy gains, including the prohibition of both real-time and retrospective (‘post’) biometric identification, better formulated prohibitions overall, mandatory Fundamental Rights Impact Assessments, the recognition of stronger rights to transparency, accountability and redress, and the establishment of a mandatory AI database.
But we recognise that there is still work to be done. We’ll keep pushing for the best possible protection of human rights, and we’ll continue to focus on the demands made in our statement to the EU trilogue. These boil down to empowering affected people through a framework of accountability, transparency, accessibility and redress; drawing limits on harmful and discriminatory surveillance by national security, law enforcement and migration authorities; and pushing back on Big Tech lobbying by removing loopholes that undermine regulation.
The journey towards comprehensive and impactful AI regulation is ongoing, and we remain dedicated to continuing our efforts to ensure that the final legislative framework encompasses our critical asks. Together, we aim to create an AI regulatory environment that prioritises human rights and protects people.
Get in touch with Bits of Freedom through its website or its Facebook page, and follow @bitsoffreedom on Twitter.