CIVICUS speaks about the potential of artificial intelligence (AI) for civil society’s work with Omran Najjar, Kshitij Sharma, Leen D’hondt, Nasilele Amatende and Petya Kangalova of Humanitarian OpenStreetMap Team (HOT).
HOT is an international civil society group dedicated to humanitarian action and community development through open mapping. It supports communities to undertake their own mapping and provides map data that is applied to disaster management, risk reduction and development work.
What does HOT do?
HOT is an international team dedicated to humanitarian action and community development through open mapping. We develop open-source apps and tools for collaborative mapping and geospatial data collection. We enable communities, civil society organisations (CSOs), international organisations and government partners to use and contribute to OpenStreetMap for locally relevant challenges by providing training, equipment, knowledge exchange and field projects. Our tools are free for all to use and are employed by partners such as Red Cross societies, Médecins Sans Frontières, United Nations agencies and programmes, government agencies and local CSOs and communities.
Each year, disasters around the world affect, displace or kill millions of people. Missing maps can result in aid not reaching communities affected by disasters. Humanitarian mapping ensures that decision-makers are able to rely on maps and geodata to better allocate resources to respond to disasters and to better reach target communities. When a major disaster strikes, thousands of HOT volunteers come together online and on the ground to create open map data that enables disaster responders to reach those in need. Through the Missing Maps project, the HOT global community creates maps of highly vulnerable areas where data is scarce, putting areas home to millions of people onto the world map in OpenStreetMap (OSM), a free, community-driven and editable map of the world.
How do you use AI in your work?
AI systems are computer programs that behave in ways that mimic, and in some tasks even surpass, human capabilities. AI therefore refers to the ability of computers to emulate human thought and perform tasks usually carried out by humans. Some people call it machine learning, but strictly speaking machine learning is a pathway to AI: it refers to the technologies and algorithms that enable systems to identify patterns, make decisions and improve through repetition and the accumulation of data.
AI systems are created with a variety of tools, including machine learning, natural language processing – the ability to process human language – computer vision – the ability to interpret images – and neural networks, which are loosely inspired by the human brain and are used to analyse data, learn complex patterns and make predictions.
AI-enabled programmes can analyse and contextualise data, accelerate decision-making and automate tasks, triggering actions without human intervention. However, we are committed to keeping humans in the loop in all HOT programmes across the humanitarian mapping movement. This aligns with the HOT core value of putting people first.
AI is not in the future – it is embedded in many technologies we use on a daily basis, such as ‘smart’ devices, and is used in the production of goods and the delivery of services, from customer service to healthcare. And it is used in volunteer mapping.
Map With AI uses a subfield of AI called computer vision, in which machines learn to spot complex patterns in images so they can analyse the same type of satellite and drone imagery that OSM volunteers have used for years. For instance, an AI system is trained to process images to identify possible roads, or buildings such as hospitals or schools, and highlight them on a map. Humans then review and confirm or modify the AI system's suggestions. There is still a lot of human work involved, but AI assistance is believed to make mapping much more efficient.
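As a minimal sketch of this review pipeline, the toy function below turns a grid of per-pixel "building probability" scores into suggestions queued for human confirmation. The probability map stands in for the output of a trained computer-vision model; the function names and record format are illustrative, not HOT's actual implementation.

```python
def propose_buildings(prob_map, threshold=0.5):
    """Convert per-pixel 'building probability' scores into review tasks.

    `prob_map` stands in for the output of a segmentation model run on
    satellite or drone imagery; a real pipeline would compute it with a
    trained computer-vision model rather than receive it ready-made.
    """
    suggestions = []
    for row, line in enumerate(prob_map):
        for col, score in enumerate(line):
            if score >= threshold:
                # Nothing enters the map automatically: every suggestion
                # is queued for a human mapper to confirm or modify.
                suggestions.append({
                    "row": row, "col": col,
                    "label": "building?",
                    "status": "needs_human_review",
                })
    return suggestions

# Toy 3x3 probability map: the model is confident about two pixels.
tile = [[0.9, 0.1, 0.2],
        [0.1, 0.8, 0.3],
        [0.0, 0.2, 0.4]]
print(len(propose_buildings(tile)))  # prints 2
```

The key design point is that the model only ever proposes; the map itself is changed by the volunteer who reviews each suggestion.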
Many people fear that AI will replace humans, but rather than replacing humans, what we try to do is facilitate and amplify human actions using AI assistance. That’s why we always include humans in the loop and gather feedback. This feedback is essential to assess the performance of an AI and increase the intelligence of an algorithm. AI models will always be learning as we use them. The more data we feed into an algorithm, the more intelligent it becomes.
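The feedback loop described above can be sketched as follows. Human reviewers accept, modify or reject each AI suggestion, and every decision, positive or negative, becomes a training example for the next model version. This is a hypothetical illustration of human-in-the-loop review, not HOT's actual code.

```python
def review(suggestions, decisions):
    """Apply human accept/modify/reject decisions and collect feedback.

    Accepted and corrected items become new training examples, so the
    model learns from every review round. `decisions` maps a suggestion
    id to an (action, corrected_label) pair; anything unreviewed is
    treated as rejected here for simplicity.
    """
    confirmed, training_feedback = [], []
    for s in suggestions:
        action, corrected_label = decisions.get(s["id"], ("reject", None))
        if action == "reject":
            training_feedback.append({**s, "label": None})   # negative example
            continue
        label = corrected_label if action == "modify" else s["label"]
        confirmed.append({**s, "label": label})
        training_feedback.append({**s, "label": label})      # positive example
    return confirmed, training_feedback

suggestions = [
    {"id": 1, "label": "building"},
    {"id": 2, "label": "building"},
    {"id": 3, "label": "road"},
]
decisions = {1: ("accept", None), 2: ("modify", "hospital")}  # id 3 is rejected
confirmed, feedback = review(suggestions, decisions)
print([c["label"] for c in confirmed])  # prints ['building', 'hospital']
```

Note that rejections are as valuable as confirmations: both feed the retraining data that makes the model "more intelligent" over time.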
AI poses threats, but it also offers opportunities that we can seize for human rights and humanitarian work, supporting humanitarian and emergency responders on the ground who use our maps.
What threats does AI pose, and how do you tackle them?
The biggest challenges are the biases and the lack of transparency of the algorithms embedded in existing AI solutions.
Most current AI models are closed. It’s not clear how they were trained. They are like a black box in which you provide input, then magic happens and you obtain a certain output. The more you train it, the better it becomes, but the output will always depend on your input. And those bringing in the input are often biased.
The problem with existing models is that you cannot even know whether, or how, they are biased, because they are black boxes: you cannot see what's inside, and their training data and processes are not traceable.
We seek to tackle biases by localising models, meaning we are not looking at the general model that works everywhere. And we counter the lack of transparency by using fully open-source AI models. In our case, it’s clear how our AI systems are trained and who is training them. The training data is available, so anyone could check how we get a certain output. And those bringing in the input are local people doing the mapping of their own space for their own purposes. The input is relevant in their context.
In our work, map data features like buildings or roads are extracted automatically from satellite or drone imagery and validated by humans – local humans who train the AI models by applying them in their own regions. HOT adds this local knowledge to the extracted data. For instance, if I am working in the health sector in Zimbabwe and the imagery shows a building that appears to be a hospital, locals will not only confirm whether this is the case but will also note whether this is a facility where pregnant women can get certain services – the kind of data that is not easily extractable from imagery with current technologies.
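The enrichment step described here, where local validators add attributes that imagery alone cannot reveal, can be sketched as a simple merge of locally gathered tags into an AI-extracted feature. The tag names follow OpenStreetMap conventions (`building`, `amenity`, `healthcare:speciality`), but the records and function are illustrative assumptions, not HOT's data model.

```python
def add_local_knowledge(feature, local_tags):
    """Merge locally gathered attributes into an AI-extracted feature.

    The geometry comes from imagery; the tags (is it a hospital? does it
    offer maternal care?) can only come from people who live there.
    """
    enriched = dict(feature)
    enriched["tags"] = {**feature.get("tags", {}), **local_tags}
    return enriched

# AI-extracted footprint: the model only knows "this is a building".
extracted = {"geometry": "polygon", "tags": {"building": "yes"}}

# A local validator confirms it is a hospital offering maternity services.
enriched = add_local_knowledge(extracted, {
    "amenity": "hospital",
    "healthcare:speciality": "maternity",
})
print(enriched["tags"]["amenity"])  # prints hospital
```

The division of labour is the point: automated extraction supplies the geometry at scale, while the semantically rich tags come only from local knowledge.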
It is the local community who defines what they want to map and why. We don’t give directions. We have regional hubs that provide support, but the actual ask comes from communities. This could be mapping fishponds in India, water points in Niger, or buildings in Brazil. It’s very often about mapping certain buildings so that people know where to go in an emergency, but it very much depends on whether the maps are going to be used for development work or in the event of a disaster.
HOT grew and became known as a result of its mapping of buildings and roads, which was very useful in responding to earthquakes such as those in Haiti and Nepal. But we then expanded our focus to enable not only post-disaster action but also anticipatory action, which is increasingly related to climate change-related challenges such as flooding or drought. At the end of the day, it’s about what each community wants to take action on – we basically enable them to do it. We’re more of a catalyser; we don’t want to do everything ourselves, and that’s why all our software is free and open source.
How do you support communities so they can do their mapping?
We make sure that the software is in place and we provide training. We try to connect local organisations with our partners to get them funding, and we sometimes deliver grants to communities so they can do their mapping.
There is a technology aspect to it, but it is a lot more than that. A community, for instance in Brazil, may approach us because they face the challenge that the government doesn’t recognise them as living where they live and want our support to map their area. We enable them to use the software, help them visualise things on maps, train them to fly drones and help them do peer-to-peer training.
We are really an enablement organisation – our mission is to enable others so ideally the movement would start moving on its own and we wouldn’t need to exist anymore.
There seems to be a lot of human work involved, so what’s the part that’s done by AI?
AI assists mappers so that they can create map data faster, more accurately and more efficiently. A person cannot map an area of a hundred square kilometres, and this is where AI comes in. AI extracts features from available satellite imagery, identifies them as, say, a building, and then mappers validate this with local knowledge. For road mapping, there is also the option of going on a drive with software that uses a GPS tracker to collect data with coordinates, which is then uploaded to the map.
The data mappers feed in is used to train the AI, so it gets better and better at making predictions when used to map other parts of the same city or region. We work with local models that get to know a specific area and can perform very well in that area.
The volunteers who do the mapping don't need to be in the same location, although we would love to have people from the same community doing it. There are always some, because the person leading the project is from the area, and they work with what we call civic volunteers. These volunteers do the first mapping – what we call the base map – with roads, rivers and major buildings, without going into further detail. Then it's the turn of locals, who will know whether a building is a hospital, a school or something else, because they live there.
We have a new product coming soon, the field mapping task manager, which helps complete the mapping of a certain area by adding more local knowledge.
Do you think AI should be regulated?
This probably depends on the kind of AI and the risk it poses. The European Union has recently started developing a regulatory framework, the EU Artificial Intelligence Act, to regulate AI systems on a sliding scale of risk. For instance, AI systems that carry unacceptable risk – such as social scoring systems and AI applications that remotely monitor people in real time in public spaces – would be prohibited. High-risk AI systems – such as AI deployed in medical devices, the management of critical infrastructure, employment recruitment tools, credit scoring applications and so on – would have to comply with very strict requirements to ensure transparency, data governance and record-keeping, and human oversight, among other things.
Get in touch with HOT through its website or its Facebook or Instagram accounts, and follow @hotosm on Twitter.