Artificial Intelligence Now Decides Who Lives and Dies

Posted by: Telegraf
The armed drones carry relatively simple artificial intelligence (AI) that can identify human forms and target them with missiles. AP Photo

Let's start with the uncomfortable truth: we have lost control of artificial intelligence. This shouldn't be too surprising, considering we likely never had any control over it. The maelstrom at OpenAI over the abrupt dismissal of its chief executive, Sam Altman, raised accountability questions inside one of the world's most powerful AI companies. Yet even before the boardroom drama, our understanding of how AI is created and used was limited.

Lawmakers worldwide are struggling to keep up with the pace of AI innovation and thus can't provide even basic frameworks for regulation and oversight. The conflict between Israel and Hamas in Gaza has raised the stakes even further. AI systems are currently being used to determine who lives and dies in Gaza. The results, as anyone can see, are terrifying.

In a wide-ranging investigation carried out by the Israeli publication +972 Magazine, journalist Yuval Abraham spoke with several current and former officials about the Israeli military's advanced AI war program, called “the Gospel.” According to the officials, the Gospel produces AI-generated targeting recommendations through “the rapid and automatic extraction of intelligence.” Recommendations are matched with identifications carried out by a human soldier. The system relies on a matrix of data points with checkered histories of misidentification, such as facial recognition technology.

The result is the production of “military” targets in Gaza at an astonishingly high rate. In previous Israeli operations, the military was slowed by a shortage of targets because humans took time to identify them and to assess the potential for civilian casualties. The Gospel has sped up this process to dramatic effect.


Thanks to the Gospel, Israeli fighter jets can't keep up with the number of targets these automated systems provide. The sheer scale of the death toll over the past six weeks of fighting speaks to the deadly nature of this new technology of war. According to Gaza officials, more than 17,000 people have been killed, including at least 6,000 children. Citing several reports, American journalist Nicholas Kristof said that “a woman or child has been killed on average about every seven minutes around the clock since the war began in Gaza.”

“Look at the physical landscape of Gaza,” Richard Moyes, a researcher who heads Article 36, a group that campaigns to reduce harm from weapons, told the Guardian. “We're seeing the widespread flattening of an urban area with heavy explosive weapons, so to claim there's precision and narrowness of force being exerted is not borne out by the facts.”

Militaries around the world with similar AI capabilities are closely watching Israel's assault on Gaza. The lessons learned there will be used to refine other AI platforms for future conflicts. The genie is out of the bottle. The automated war of the future will use computer programs to decide who lives and who dies.

While Israel continues to pound Gaza with AI-directed missiles, governments and regulators worldwide are struggling to keep up with the pace of AI innovation taking place inside private companies. Lawmakers simply can't keep track of the programs already deployed, let alone those still being created.


The New York Times notes that “that gap has been compounded by an AI knowledge deficit in governments, labyrinthine bureaucracies, and fears that too many rules may inadvertently limit the technology's benefits.” The net result is that AI companies can develop with little or no oversight. The situation is so dire that we don't even know what these companies are working on.

Consider the fiasco over the management of OpenAI, the company behind the popular AI platform ChatGPT. When CEO Sam Altman was unexpectedly fired, the internet rumor mill began fixating on unconfirmed reports that OpenAI had developed a secret and immensely powerful AI that could change the world in unforeseen ways. Internal disagreement over its use reportedly led to the leadership crisis at the company.

We might never know whether the rumor is true, but given the trajectory of AI and the fact that outsiders cannot see what OpenAI is doing, it seems plausible. The general public and lawmakers can't get a straight answer about the capabilities of a super-powerful AI platform, and that is the problem.

Israel's Gospel and the chaos at OpenAI mark a turning point in AI. It's time to move beyond the hollow elevator pitches promising that AI will deliver a brave new world. AI might help humanity achieve new goals, but it won't be a force for good if it is developed in the shadows and used to kill people on battlefields. Regulators and lawmakers can't keep up with the pace of the technology and don't have the tools to exercise sound oversight.


While powerful governments around the world watch Israel test AI algorithms on Palestinians, we can't harbor false hopes that this technology will only be used for good. Given the failure of our regulators to establish guardrails, all we can do is hope that the narrow interests of consumer capitalism will act as a governor on AI's true reach to transform society. It's a vain hope, but it is likely all we have at this stage.

Joseph Dana is a writer based in South Africa and the Middle East. He has reported from Jerusalem, Ramallah, Cairo, Istanbul, and Abu Dhabi. He was formerly editor-in-chief of emerge85, a media project based in Abu Dhabi exploring change in emerging markets. Twitter: @ibnezra
