Correspondent Mary Steffenhagen speaks with Yeshi Milner, founder and CEO of Data for Black Lives, and Branka Panic, founding director of AI for Peace
Paul Ingles: Today on Peace Talks Radio, artificial intelligence. Some warn of AI's risk to peace and social justice while others see potential for thoughtful use of AI to promote peace.
Mary Steffenhagen: An algorithm is a set of step-by-step instructions to solve a problem. What are we optimizing? That is the central question that defines whether or not an AI system, which is a group of algorithms, is a weapon or a tool.
Yeshi Milner: We need to be especially careful about the situations when we want to do something good and we have good intentions, but we are still creating unintended consequences.
Paul Ingles: Our future with AI through a peace lens today on Peace Talks Radio.
[music]
This is Peace Talks Radio, the radio series and podcast on peacemaking and nonviolent conflict resolution. I'm series producer Paul Ingles, today with correspondent Mary Steffenhagen.
Artificial intelligence, formerly a speculative futuristic sci-fi notion, is now increasingly a real part of our lives, whether we notice it or not.
On today's Peace Talks Radio episode, correspondent Mary Steffenhagen speaks with a guest who expresses concern about AI's use as a weapon of oppression sparking more conflict. That's Yeshi Milner, a data activist who cautions about the misuse of AI technology and wants to rebuild something better.
Mary also sits down with Branka Panic, a political scientist and international peacebuilding advocate. She is the coauthor of the recent book "AI for Peace," which explores ways that AI can be used to foster a more peace-filled world.
Here is correspondent Mary Steffenhagen.
MS: Artificial intelligence is all around us. You're probably using something involving AI every day without ever really noticing it. If you're searching online, scrolling social media or using a GPS app ("You have arrived at your destination"), you're interacting with AI. And AI isn't limited to these small applications for personal use. It's increasingly integrated into the systems that shape and govern our lives, from education, the courts and healthcare to policing, the military and warfare.
[news clips]
In recent years, the U.S. Department of Defense has been increasing its funding for AI tech into the billions of dollars! But warfare is not the only arena where AI can be weaponized. Some countries take biometric data like fingerprints and eye scans from migrants seeking refuge, while other authorities use facial recognition systems to track those they deem suspicious.
Given all this, more voices have been warning of the oppressive powers of AI in the hands of the ultra-powerful, but at the same time, there are many who believe that AI holds promise as a tool for peace, one that can be developed and used democratically and ethically.
Now before we get into all that, a very quick primer on how this works. AI generally refers to the ability of machines to perform cognitive functions that we usually ascribe to humans: perceive, reason, learn, problem solve. AI uses algorithms, which are – you know what? I'm just going to let someone who actually works with this stuff explain it.
YM: An algorithm is by definition a set of step-by-step instructions to solve a problem. A recipe is actually an algorithm. It's a list of instructions or a process to make a dish, the ingredients that make up the dish and a result based on what we defined from the very beginning of the recipe as success.
MS: This is Yeshi Milner.
YM: I'm the founder and CEO of Data for Black Lives and we are a network of scientists and activists working to make data a tool for social change instead of a weapon of political oppression.
MS: I really like Yeshi's metaphor here; if an AI system is a chocolate cake, then algorithms are the recipe instructions, one cup of these, a tablespoon of that, etc., and the ingredients, the flour, sugar, cocoa, those are the data, the information the system uses to learn how to think and make decisions. Just like baking a cake, if you add too much flour or substitute paprika for cocoa, you're going to get a really skewed result.
YM: Whether we want something healthier and less delicious, optimizing for health, or something that is super delicious and we don't care whether or not it has the appropriate nutrients.
With computational algorithms, it's a lot more complicated, but it comes down to the same question. In mathematics it's called the objective function: what are we optimizing? That is the central question that defines whether or not an algorithm, or an AI system, which is a group of algorithms, is a weapon or a tool.
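To make the idea of an objective function concrete, here is a minimal sketch in Python. The applicants, numbers and criteria are invented for illustration, not drawn from any real scoring system; the point is simply that the same data produces a different "best" decision depending on what the algorithm is told to optimize.

```python
# Hypothetical applicants and criteria, for illustration only.
applicants = [
    {"name": "A", "expected_profit": 900, "access_benefit": 0.2},
    {"name": "B", "expected_profit": 300, "access_benefit": 0.9},
]

def objective(applicant, optimize_for):
    # The objective function defines what counts as "success" for the algorithm.
    if optimize_for == "profit":
        return applicant["expected_profit"]
    return applicant["access_benefit"]

for goal in ("profit", "access"):
    best = max(applicants, key=lambda a: objective(a, goal))
    print(f"Optimizing for {goal}: approve applicant {best['name']}")
```

Same ingredients, same recipe steps; only the definition of success changes the outcome.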
MS: Yeshi is one of these folks who want to reframe how we see and use AI, not as a weapon but as a tool to pursue social good and peace. I spoke with her about some of the ways that AI is currently shaping our lives.
YM: Data and AI are increasingly affecting where we live, whether we have access to a job, whether we're able to become a citizen of our country if we're an immigrant, and how we're judged by credit scoring models. Data and AI are increasingly defining how long we live, in regard to medical algorithms. Data and AI influence who we will marry, when we think about the dating apps and the science behind them, led by the private companies that create these products.
When we're talking about the role of AI, the best way that we can really create conscious understanding is by humanizing the threat. It's by talking about real stories.
I can only talk about my story. I grew up in a single parent household. My mother was an immigrant, a nursing student, an artist and an extremely hardworking woman. All throughout my childhood, we struggled to find safe, affordable, permanent housing, and that was because of a three-digit number, the FICO credit score. So many people think that the Fair Isaac Corporation, which is behind the acronym FICO, is a government agency, but they are actually a private company that 30 years ago began using machine learning, and now artificial intelligence, to evaluate risk. This has become the gold standard by which people are given access to important human rights: access to housing and also access to tools that build generational wealth.
When we were growing up, we didn't know many other people who were homeless, but right now, in the aftermath of the COVID-19 pandemic, in the midst of extremely high inflation, with people working hard to make ends meet, we're seeing even more homelessness.
According to Congressional data from 2022, black people are 13% of the population but 55% of those who are unhoused. Children are directly impacted. A lot of this is not only because of FICO credit scores but, increasingly, automated decision-making systems that aren't human, aren't sentient, don't consider people's circumstances and are not optimized to be humanitarian. They are optimized for profit, and that's why we see algorithms making decisions that are actually harmful to large numbers of people.
MS: And it’s notjust private companies. How are we seeing public institutions using AI likepolice departments?
YM: In March of2022, Data for Black Lives announced that we were moving forward with a FOIA[Freedom of Information Act] records request lawsuit against The MetropolitanPolice Department in D.C. in order to procure documents that reflected theextent of social media monitoring of D.C. residence who are participating inprotests.
Now, in May of 2024, we closed out that lawsuit and we wereable to get over 700,000 records from email exchanges to contracts to trainingdocuments that show the ways in which The Metropolitan Police Department aswell as other agencies were buying high performance artificial intelligenceproducts from private companies in order to surveille and track peoplesprotests activities online.
These social media monitoring and detection tools are notjust making undercover accounts and following people’s pages, that is a part ofit, but they are using extremely high-powered analytical pattern recognitionsearch tools, sentient analysis as well as geographic location tools to be ableto track people’s online activities and match it to what they’re receiving ontheir end from police officers on the ground.
From what we know, none of these efforts regarding usingthese tools have contributed to any change in violent crimes, homicides,murders or assaults. It has only created an environment of so much morepolitical repression. Police are using social media as a way to represspeople’s political activity and quite frankly, target particular neighborhoods.
The otherpart of this is that they are using tools like First Alert which is ananalytical tool created in order to create a response during emergencypreparedness moments. Instead of using that tool to prevent casualties during adisaster, they’re using it to target and criminalize people for exercisingtheir First Amendment rights.
MS: Let’s zoom outfor a second. Police using AI to undermine civil rights isn’t the only problemhere. It’s even more basic than that. More often than not, the AI is just plainwrong. The data is bad. For example, a police department in Plainfield NewJersey used AI software taking data from crime incident reports to make dailypredictions on where and when crimes would most likely occur.
In 2023,The Markup, a publication covering tech looked at about one year’s worth ofthese predictions. They found out that they were accurate less than one percentof the time, but results like these haven’t stopped some police departmentsfrom going all in on predictive policing.
Back to myconversation with Yeshi Milner.
YM: One of the things that I think is important for people to understand is that when algorithmic models or machine learning are developed, they have to be trained on existing data. In the U.S. we don't really have crime data. We have arrest data. We have information on who is actually arrested. A lot of people commit crimes in the U.S. Is every single person who commits a crime arrested? No. Our research, and a wealth of other research, has shown that there is a disparity in who is actually arrested and incarcerated. All we have to do is look at the current prison populations and the makeup of them.
Predictive policing is really just the use of mathematical analyses, analytical tools and existing crime data in order to predict future crimes. Since the first predictive policing technologies were introduced, about ten years ago now, there has been a lot of research that has shown that none of these tools have actually prevented crime. They have only resulted in further criminalization and, quite frankly, turned entire neighborhoods into open-air prisons. Police use everything from gunshot detection software to gang databases to mapping tools to be able to pinpoint where a crime is happening and, de facto, criminalize an entire neighborhood.
When you are building a model to predict crime, you are also building a model that is going to become a self-fulfilling prophecy, because the assumption is already baked into existing data.
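A toy simulation can illustrate that self-fulfilling prophecy. Everything below is hypothetical, two neighborhoods with identical underlying offense rates and a made-up patrol rule, but it shows how arrest data that starts out skewed keeps confirming itself once patrols follow past arrests.

```python
import random

random.seed(0)

# Two neighborhoods with the SAME underlying offense rate, by construction.
true_offense_rate = {"North": 0.05, "South": 0.05}
# Historical arrest data starts out skewed toward one neighborhood.
arrests = {"North": 10, "South": 1}

for day in range(365):
    # "Predictive" allocation: send the patrol where past arrests are highest.
    patrolled = max(arrests, key=arrests.get)
    # Arrests can only be recorded where the patrol actually is.
    if random.random() < true_offense_rate[patrolled]:
        arrests[patrolled] += 1

print(arrests)  # the initially over-policed neighborhood keeps pulling further ahead
```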
MS: I just want to pause here again, because in a country where states incarcerate black Americans at about six times the rate of white Americans, AI tools built on existing data can basically supercharge discrimination. They are reflecting human biases. This is so widespread that back in January of 2024, seven members of Congress asked The Department of Justice to ensure that police departments were not using federal grants to pay for predictive policing programs, at least not without an audit to confirm that they were not discriminatory. Here is Yeshi Milner from Data for Black Lives again.
YM: I would also add that a lot of these tools are what we call data weapons. That is our campaign slogan, "No More Data Weapons." We defined it that way very intentionally, because they become a weapon when they are used in a context of harm. A lot of the algorithmic models, software and research tools that law enforcement agencies are using are also used in journalism, emergency preparedness, work against sex trafficking and situations where a law enforcement response should be focused, but in the hands of police agencies that already have a history and a culture, reflected in the data, of racism, sexism and abuse, they are only going to be weaponized.
[music]
PI: You’relistening to Peace Talks Radio, the radio series and podcast on peacemaking andnonviolent conflict resolution. I’m series producer Paul Ingles and we’ve beenhearing from correspondent Mary Steffenhagen and her conversation with dataactivist Yeshi Milner whose warning about the potential of and early examplesof artificial intelligence being misused to make things worse for alreadyoppressed sectors of humanity and to make war and conflict more likely insteadof less likely. Despite the many features of AI that can reinforce existingoppression, some believe that those very same features can make it a powerfultool for peacebuilders.
Here againcorrespondent Mary Steffenhagen.
MS: One particulararea where AI has been getting a lot of buzz lately is internationalpeacebuilding. You can see this in headlines like “Can AI Mediate ConflictBetter Than Humans?” “Is AI the Intelligent Answer to Climate Change?” “CouldAI Usher in a New Era of World Peace?” Those are also a few of the bigquestions driving my next guest.
BP: I’m BrankaPanic. I’m the Founder and Executive Director of AI for Peace and I’m Professorof Practice at The University of North Carolina.
MS: Branka is apolitical scientist with experience in international security, internationaldevelopment policy and humanitarian work. She spoke with me about the livedexperiences that led her to this field.
BP: I currentlylive in Mexico City, but I’m originally from The Balkans. I was born in Serbia,so I grew up in an area that went through several wars, conflicts and violence.I think that lived experience pretty much impacted my wish to work inpeacebuilding.
When Ithink about these beginnings, I remember the impacts of the Arab Spring forexample, news about the Arab Spring and about the utilization of social mediaas a technology where different platforms were utilized to spread voices towork towards democratization or to organize activists. That was probably thefirst moment when I started thinking about the positive application oftechnology in my field.
When Istarted working for Humanitarian Aid, for Refugee Aid in the Balkans, amovement was created to assist refugees coming from The Middle East and NorthAfrica on their way to their new countries in Europe. I saw how essential evenjust the phone was for these communities. Phones were lifesaving tools for themon this path and quite transformational.
For example, communities that were running away from wars inSyria or Afghanistan or North Africa often did not have documents oridentification. If they got sick throughout their path and needed medicalintervention or medical help, the protection of their data was crucial becausesome refugees were being prosecuted in the countries they were coming from.
The Refugee Movement stepped in and developed a digital IDor digital medical ID so that any doctor on a refugees’ path from Syria toGermany for example was able to see the medical history of this person withoutbreaching their privacy or exposing this person to any risks by revealinginformation that could be dangerous.
This was something that led me to think about this intersection.
MS: When you talk about peacebuilding as a field, what does that actually entail? It's not simply ending wars, right?
BP: No, in fact, it's quite a bit broader than that. Let's call peacebuilding a strategic activity to sustain peace. It's often not only about ending the war, but also about preventing war from happening. It's a strategic activity to strengthen peace, to avoid violent conflict and to provide different tools for building something that is actually much more than the absence of war.
Often when we say "peace," we want to strengthen the capacities of conflict management, conflict resolution, peacebuilding and peacemaking and to lay different foundations for sustainable peace, to strengthen societies to be able to cope with conflicts, because conflicts are not necessarily bad. They are bad if they escalate into violence, if they escalate into wars, but peacebuilding has an opportunity to approach conflict in a much more strategic and transformative way.
MS: You mentioned a few things that all go into creating peace, like preventing violence, the quality of institutions, having reconciliation, all these different parts of the recipe of peace. When you get into using AI, how do you turn peace into quantifiable data and numbers that you can actually measure and analyze?
YM: That’s the mostcomplicated part. It’s very hard to represent peace, conflict and all of theother social structures in numbers. We are not pretending that we can do that,but what we are saying is that there are certain elements of this entirepeacebuilding ecosystem that can be measured.
As you mentioned Mary, measuring the strength ofinstitutions or how we can help institutions that are defending human rights,strengthening actors of this ecosystem, strengthening peacebuilding activistsand democratic institutions. Elements of these things can be put in numbers andthen we can use different data science methodologies to bring moreunderstanding and visibility to some of these causes of conflicts, evenunderstanding conflicts themselves and start with that.
To prevent violent conflict, we need to understand, we needto know where the violence is happening, where the wars are escalating. All ofthese things are changing over time. What war is today is not what war was 20years ago. This is the change that can be captured in data and where datascience can potentially help us to understand trends.
We can map the incidents of violence in certain countries,in certain parts of the countries. We can feed both historical data and currentdata into models that can predict the probability of violence happening incertain countries and then we can hopefully better prepare for that violence.We can better prepare humanitarian relief. We can better prepare peacebuildingactivities themselves. We can inform peacebuilding actors to be ready to applywhatever they have been traditionally applying to manage the conflict;dialogue, reconciliation, different activities on the ground to make sure thatthe violence does not escalate further.
MS: How does conflict prediction work with AI? What kinds of questions are you asking and trying to answer?
BP: Conflict prediction is not a new thing; there are traditional methods of conflict prediction. For example, different countries, and especially ministries of foreign affairs, have been investing in this intelligence gathering, realizing what is happening in different countries and understanding when conflicts might happen. Traditionally this work has been done by human forecasters and different methodologies that have been applied by humans or groups of experts.
What the entire field of conflict prediction is looking into is how big data or data science can be helpful, or whether it is helpful at all, for conflict prediction. One review from 2023 of different conflict and violence forecasting systems showed that half of the forecasting systems that exist today are already using machine learning and algorithms to help identify different patterns and to help generate forecasts.
To be honest and completely transparent, we're not techno-solutionists or overly optimistic about the capabilities of these tools. They are still not perfect. They are quite a long way away from predicting accurately what may happen.
To help the audience understand how these models function, they use historical data on conflict or different drivers of conflict, some of which use hundreds of these indicators, to train the model to predict what will happen in the future. Some models are more successful than others in these predictions.
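As a minimal sketch of the kind of model Branka describes, the snippet below trains a simple classifier on synthetic country-year indicators. The features, values and labels are placeholders standing in for the hundreds of real indicators such systems use; nothing here comes from an actual forecasting dataset.

```python
from sklearn.linear_model import LogisticRegression

# Each row is one country-year: [conflict_last_year, gdp_growth, food_price_index].
# These values are synthetic placeholders, not real indicators.
X_train = [
    [1, -2.0, 140],
    [0,  3.5, 100],
    [1, -0.5, 120],
    [0,  2.0,  95],
    [0, -1.0, 130],
    [1,  1.0, 115],
]
y_train = [1, 0, 1, 0, 1, 0]  # 1 = violent conflict observed in the following year

model = LogisticRegression().fit(X_train, y_train)

# Forecast the probability of conflict for a hypothetical new country-year.
print(model.predict_proba([[1, -1.5, 135]])[0][1])
```

Real systems differ mainly in scale: far more indicators, far more country-years, and careful validation of how often the forecasts actually hold up.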
MS: Looking at this as an outsider, it seems obvious to say that places where inequality, poverty and hunger are higher are more likely to see violence or unrest. It seems like an obvious prediction to make. What is the value added by using AI?
BP: That's a great question. It's even simpler. What some of these models of conflict are showing us is that the best predictor of conflict is conflict itself. What is actually more complicated with conflicts and wars (which we call "black swan events") is that they are not happening that often. We're not talking about weather data. We are not talking about trade data. Something that happens all the time is easier to predict, because the model can be fed a lot of data and is able to see patterns.
How can we predict a conflict in a country that has not experienced conflict before? This is where we are actually trying to see, through other predictors, which cases carry greater or lesser risk. This is where the models, if we apply explainability in this work, can help us understand, so we can go beyond the prediction itself and actually see which of these specific predictors has more weight in certain contexts and why.
We are hopeful that we can train AI for those nuances where human eyes cannot be trained well enough, or where we cannot pick up all of these elements of the prediction. We human forecasters cannot be replaced, but we can have this help from data science to think a step further, to help us understand peace. It's not only about predicting where wars will happen, but what we are going to do with that prediction.
PI: That was Branka Panic, coauthor of the book AI for Peace. Earlier our correspondent Mary Steffenhagen was speaking with Yeshi Milner, the Founder and CEO of Data for Black Lives.
Mary will have more from both in part two of our program, just ahead on Peace Talks Radio.
[music]
This is Peace Talks Radio, the radio series and podcast on peacemaking and nonviolent conflict resolution. I'm series producer Paul Ingles, today with correspondent Mary Steffenhagen. This is part two of our program on artificial intelligence and peacebuilding.
In part one, Mary introduced us to Yeshi Milner of Data for Black Lives, who spoke of how artificial intelligence is sometimes used in ways that she said violate human rights and supercharge violence and war.
We also heard from Branka Panic, founding director of AI for Peace and coauthor of the book AI for Peace, who believes that the very same mechanisms that can make AI an effective weapon that may challenge peace and social justice can also make AI a powerful element for building peace.
First here in part two, we're back to Mary's conversation with Yeshi Milner of Data for Black Lives.
MS: You mentioned your campaign, "No More Data Weapons." Another campaign you're involved in is called Abolish Big Data. What does that mean, to abolish big data, beyond a slogan?
YM: To abolish big data means to dismantle the structures that put the power of data into the hands of a very few actors and to actually reclaim data and put it into the hands of the people who really need it the most and who are directly impacted.
The way that we dismantle the structures that put the power of data into the hands of a few is by using legal and organizing strategies to then obtain data about how those institutions of extreme power and influence are weaponizing sophisticated technologies against their own residents.
We do that by going into spaces of power, like Congress, like the White House, and speaking up about the ways in which it has been proven through research that credit scores are discriminatory, but a lot of times the burden of proof is on directly impacted communities. It's on the people who are experiencing these things firsthand. Knowing that, our strategy has been to do everything we can through convenings, conferences and training, giving people the ability to do their own data collection, their own data science.
MS: What do you and others in your field see as the most feasible way to go about this dismantling and restructuring? Is it legislation around privacy and data rights? Do we need more regulation of these private companies? Should big tech be broken up? What avenues are the most effective?
YM: I would say that it's all of the above. As organizers, we do a lot of work organizing people within these companies. There are a lot of people who work within these companies who are very much invested in ensuring that the work that they do is not going to harm anyone.
That's how Data for Black Lives got started. We became an organization and launched in the midst of the 2016 election, and the goal was to break down the silos between the people working in tech, the researchers, the mathematicians, the scientists, and black communities and poor communities, people who were directly impacted and fighting these David and Goliath fights.
So many of us are working in isolation, whether we're in research, science, technology or activism. We are all trying to address technological issues that can only be solved by non-technical solutions. Movement building and organizing isn't the most high-tech solution or approach, but it works.
There are so many harmful technologies, whether it's FICO credit scores or discriminatory ads on Facebook, that are developed and deployed because there is no one in the room to say that using natural hair care products is a proxy for race. Choosing that as a criterion in order to target an ad about a predatory loan on a platform is unethical, discriminatory and actually violates multiple federal laws.
MS: When it comes to policy, Data for Black Lives has made some impact. Back in 2019, they encountered a city that was trying to implement a tool that would have collected data about students and used it to try to assess the risk of them committing a crime in the future. Yeshi explains what happened next.
YM: One of the earliest campaigns that we worked on was when educators, parents and teachers approached me from the City of Saint Paul, Minnesota, in the Twin Cities, at a time prior to even what happened with George Floyd.
Native students, black students and Latinx students were disproportionately being suspended and arrested. It was announced that a risk ratio was going to be introduced. A joint powers agreement was being signed by the mayor's office, the sheriff's office, the school district and all these other agencies and offices that would share data across these agencies for the purpose of developing this risk ratio.
This would only be used to justify existing policies. These were people who knew nothing about algorithms or data or risk ratios; they just knew that this would not be good for their community. They fought and pushed back against that policy. We were able to get it dismantled.
Most importantly, through that process, they were able to build a whole coalition that became an organization called The Data for Public Good Network in Minneapolis, and they've since been working with the mayor's office, the school district and other agencies to put into place some of the things that they had already been asking for, such as restorative justice, counselors not cops, and better resources for teachers, parents and students. That has been shown to be much more impactful than the promise of these risk ratios.
A lot of times, it doesn't take using a high-powered computing device or large language models or anything that sophisticated to make change and to rebuild.
MS: If people go google you after they listen to this episode, the algorithm is going to feed them results that call you a data activist. What does that mean, and what does that personally mean to you?
YM: To me, data activism is reclaiming data as protest, data as accountability and data as collective action.
I started off doing this work in high school, and that was actually where I first started using data, not in the classroom, but after some students at a neighboring high school organized a peaceful protest in response to a student being put into a chokehold by a vice principal. It was responded to with police cars and SWAT teams. I'll never forget watching on national news the headlines, "Riot at Miami Edison Senior High."
It was from there that I knew that, in addition to nonviolent protests and going into school board meetings during public hearing sessions about police brutality and really exclusionary disciplinary policies in our schools, we needed other ways to have our voices heard.
I did this by working with an organization, and we hit the ground running and collected over 600 surveys asking young people in Miami-Dade County about their experiences in their schools. We turned those findings into a comic book, and that comic book was used about ten years ago now, and continues to be used, to support restorative justice policies in schools.
I learned very early on how to be a data activist out of necessity, and little did I know that ten years later, data science and artificial intelligence would really be the tool and the way to chart a new future and would be shaping society as it is today.
MS: What do you mean by "reclaiming"? When and how has data been a protest tool?
YM: There are so many examples, even just throughout black history here in America, of people reclaiming data as a protest tool.
When we look at the life of Ida B. Wells, she is known as a journalist, an author, but she was one of the first data scientists ever. She documented and tracked the instances of lynching in the American South at a time when this was happening rampantly, but there was no documentation of the number of people who were being murdered this way.
We also look at the story of W.E.B. Du Bois, who we know as a scholar and historian, but through his work visualizing black life in America and presenting it to international audiences, he was able to really speak to the experiences of black people in a country that was at that point grappling with the Reconstruction era and the end of slavery, addressing issues around civil and human rights in an early era.
Those are two examples that I look to that came even before me. In our work with Data for Black Lives, so much of it is equipping organizations, individuals, scientists, activists, parents, communities and families with the information and, most importantly, the tools to get the information that they need to advocate for themselves.
MS: I love that reframing. I also wonder if you feel any tension while you do your work and promote AI's uses for peace and social good. You have this hope about the future of AI while also condemning its very blatant current uses for injustice and violence. How do you hold that tension, if you feel it at all, as you do this work?
YM: There is always tension, even existing in this highly technological world. The world pays attention to genocides worldwide. This very MacBook Pro and iPhone that we're using to record right now are the result of real people being murdered and killed for the minerals and materials that make this technology possible. That to me is already the tension that I have had to grapple with as a technologist and someone who exists at this moment.
I look back in history at people like Ida B. Wells and W.E.B. Du Bois and the many others who came before me. I think being able to think about technology outside of the corporate colonial context is really, really important to liberate it for such use.
Part of the obfuscation and part of the weaponization is the fact that the people who have the funding, the resources and the money to deploy a lot of these technologies are the military, big tech and law enforcement agencies, and they do so for a very specific reason. What really gives me hope and keeps me going is real people, real lives and real situations where data is a force for good and we are reclaiming it as a tool.
MS: Yes, you’vementioned the culture of how we view AI and the possibilities. What can peopledo in their everyday lives to help bring about that culture shift?
YM: I think thereare concrete ways that people can be a part of the culture change. When you’reat the airport and they’re asking to do a facial recognition scan of your face,feel comfortable and confident in opting out of that. That is one way todemonstrate not just to TSA agents to educate them but also within theirdatabase, this is something that people are not participating in. We arecampaigning on our end to ensure that these are no longer proliferated, so thatis one way.
In theeducation and culture sense, on an everyday basis, feeling empowered andunderstanding that these technologies are being marketed at sentient, allknowing and omnipotent, but they’re only based on information that we alreadyhave, and they are very limited in their ability to solve problems, to organizepeople and to govern. They are not able to do that at all.
Human beings are still very much at the helm of all of theseinventions and developments in technology. I believe knowing that andunderstanding that is important to demystify and to pull the curtain back fromthe Wizard of Oz. People should understand that behind these technologies arepeople and a lot of these people are very much working in spaces where they’rein bubbles, often times in isolation and they need pushback and feedback. Theyneed to know that these technologies that are being built are not the best forsociety.
It’s our role to create spaces for people to be able to comeinto community with each other and on an everyday basis being conscious ofwhere algorithms are being used, how they’re being used and feeling empoweredto opt out when appropriate.
It really begins with education and realizing that we verymuch still have the power. We’re at a place where we must be able to decide inand assert what only humans can do and what computers can enable and help us todo at this moment.
MS: That was YeshiMilner, the Founder and CEO of Data for Black Lives.
Next, more from my conversation with Branka Panic, an international peacebuilding expert who is exploring how AI can be used to prevent violence and defend human rights. Branka and her colleague Dr. Paige Arthur have coauthored a book called AI for Peace. It explores four key areas where they believe AI can be effective, including hate speech, climate change and human rights.
I asked Branka to tell me about some of the ways that AI is being used in human rights work today.
BP: This is very interesting. In the human rights field, what we see are so many negative consequences, so many applications that are either intentionally malicious applications of AI violating human rights or even unintended consequences. We recognize this in the book, but our intention is actually to look into another application, this potential positive application and utilization of different AI technologies to assist the work of human rights defenders.
I am personally very excited about this because I see human rights and international law in general as another way to prevent wars. The field started applying different technologies even before artificial intelligence. For example, satellites, and how big of a change satellites and their images have made in the human rights field. They have been used to detect mass graves in Burundi and to support the case against Sudanese President Omar al-Bashir at The International Criminal Court for abuses in Darfur. We are writing about how satellite images help detect Boko Haram activities in Nigeria. This has been happening for the past several decades.
What is new now is overlapping this technology with machine learning and allowing human rights defenders to process a much bigger amount of content and to see the patterns that are sometimes not visible with human eyes.
There are other examples as well that we see from more recent conflicts and wars in Ethiopia and Ukraine. We are writing about a specific project, a very interesting example of an organization that collaborated with the Yemeni and Syrian archives and collected vast amounts of evidence from war areas in these war-torn countries, including things that citizens themselves are recording.
Now we are living in this world of citizen activism as well, where anyone who has a phone can record or take a picture of things that are happening around them. You can imagine the amount of material that these archives collected. For a group of researchers to go through all of this, it would take years. This is where we train the model to process all of this information.
In the specific case of this project, the model was trained to detect instances of the use of illegal munitions in war zones. The model is helping us catch instances of violations of international law and potentially bring this as evidence to the courts.
For example, human trafficking is one of the fields of work that can benefit from facial recognition systems. There, AI can be helpful to detect people who are being exploited through human trafficking, or even in cases of abducted children or missing children, these systems can be helpful.
I mention this as an example because this technology is a double-edged technology. We can see not only situations of misuse of AI and intentional harmful applications, but situations where this technology can be beneficial as well.
One of the facial recognition policing models has been banned in many cities in the United States but has been allowed in cases of international human rights protection, specifically human trafficking.
MS: You mentioned that AI is a double-edged sword. There are so many severe violations of human rights using AI, so how do you bill this as a tool that can't be misused in these ways? Can you?
BP: That’s a greatquestion. There are regulations, huge movements to regulate AI technologiessuch as the adoption of the EU AI Act in the European Union and the effect thatthis law can have on other countries through the Brussels Effect is a verypositive development in the field.
For example, facial recognition technology will be banned orbiometrics data utilization in social scoring, all of these uses that can bemalicious and especially target certain communities will be banned.
Unfortunately, the applications in war, the applications fornational security by the military are not considered in this law. This does notleave us in a very positive world. We are not even talking about a double-edgedsword. We are talking about the direct utilization of facial recognitiontechnology in the military. For example, the application of AI to targetPalestinians in Gaza.
These applications use biometrics and target specificcommunities in civilian applications as well. For example, China is a big casestudy. Amnesty International and many other human rights organizations werereporting about the misuse of AI against the Uyghu Community in XinjiangProvince in China. This is something that we simply have to figure out how tofight against. We have laws in the EU that protect the EU citizens, but thereare so many other malicious applications.
When we look at other countries’ regulatory attempts oradoption of strategies for artificial intelligence, we see so often thatnational security has been valued more than human security. This is how Iperceive the situation. We have exceptions for AI use for military purposes.
We see the example of how workers in big tech companies areprotesting now, for example, Google has a big project collaborating providingGoogle technology for military utilization. Now workers are complaining andprotesting demanding that their work does not go into war applications. As apeace activist, it’s a bit of a positive signal that there is something that wecan do outside of the military field to advocate for our work as datascientists not to be used for military purposes.
MS: What would you consider an ideal ethical framework for these companies, governments and other entities to use as they continue to implement AI?
BP: I think there are so many principles adopted now, so many documents and guidelines, either on the national level or the international level. I can highlight one example. The UNESCO AI ethics principles have been adopted by more than 170 countries, the only document out there that has been predominantly adopted by the majority of the world. Some basic principles are transparency, accountability, data responsibility, data protection and privacy, and they cross different fields. Regardless of which field you're applying AI in, these principles should guide the work. I don't think there is a perfect model.
As practitioners of peacebuilding, we need to be careful when applying these technologies in fragile settings. We work with vulnerable populations, and there is the potential to use the knowledge that we as traditional peacebuilders have and combine it with AI ethics and principles to aspire to ideal guidelines. Even when we secure ourselves and our communities from malicious utilization of AI, we will always have unintended consequences.
MS: As you look to the future of peacebuilding with AI, where are you most hopeful and where are you tempering your expectations?
BP: We talk about training models and how they are dependent on large amounts of data. There is no more data to train the models. We can't go any further than we've gone. Researchers were more hopeful about training these models and used artificially created data, but we see problems with that. The field is looking into generative AI. We didn't have time to talk about that, the impact that generative AI has on peacebuilding. I think there is potential in that direction to use generative AI for peace speech as a counter narrative to hate speech. We should not only use models to predict hate speech online or to track what is happening, but actually create peace speech through generative AI. This is one example that comes to mind.
There is the potential to advance these applications and cases for human rights defenders, for hate speech detection, for climate: understanding the relationship between climate and conflict, and even climate and peace, and using moments of natural disaster for more effective peacebuilding. Those are all applications where I think that data science can be beneficial.
PI: That was Branka Panic, Founding Director of AI for Peace and coauthor of the book of the same name, AI for Peace.
Earlier in our program, correspondent Mary Steffenhagen was speaking with Yeshi Milner, the Founder and CEO of Data for Black Lives.
You can find both parts of our program, on both the threats to peace and social justice from artificial intelligence and the use of AI to promote and build peace, at our website, www.peacetalksradio.com. Look for season 22, episode 8 to find this program and to find and hear all the programs in our series dating back to 2002.
Also at www.peacetalksradio.com you can find out more about all our guests, see photos of them, read and share transcripts of the program, sign up for our podcast and make a donation to keep this program going into the future, all at www.peacetalksradio.com.
Nola Daves Moses is our Executive Director. Jessica Ticktin is our Supervising Producer. Ali Aldeman composed and performs our theme music. For correspondent Mary Steffenhagen, Series Cofounder Suzanne Kryder and the rest of our team, I'm Paul Ingles. Thanks for listening to and supporting Peace Talks Radio.
[music]