The AI Regulation Battle Is Only Just Beginning


Given the pace of development in artificial intelligence in recent years, it’s remarkable that the United States has only just released clear regulations concerning the technology. At the end of October, President Joe Biden issued an executive order to ensure “safe, secure, and trustworthy artificial intelligence.” The directive sets out new standards for AI safety and security, including new privacy safeguards designed to protect consumers. While Congress has yet to enact comprehensive laws governing the use and development of AI, the executive order is a much-needed step toward sensible regulation of this rapidly developing technology.

Casual observers might be surprised to learn that the US didn’t already have such AI protections on the books. A gathering of 28 governments at the AI Safety Summit in the UK last week revealed that the rest of the world is even further behind. Held at Bletchley Park, the historic former codebreaking center, the summit produced an agreement to work together on safety research to avert the “catastrophic harm” that AI could cause. The declaration, whose signatories include the US, China, the EU, Saudi Arabia, and the United Arab Emirates, was a rare diplomatic coup for the UK but light on detail. The US used the event to present its own new guardrails as a model for the rest of the world to follow.

You don’t need a degree in computing to understand that AI is a crucial part of one of the most profound technological shifts humanity has ever experienced. AI has the power to change how we think and educate ourselves. It can change how we work and make certain jobs redundant. To deliver these capabilities, AI systems require massive amounts of data, generally collected from the open internet. Chances are that some of your data is being fed into the large language models that power AI platforms like ChatGPT.


This is just the tip of the iceberg. AI is currently being deployed in Israel’s operations in Gaza to help make decisions of life and death. Israel’s Military Intelligence Directorate said the military uses AI and other “automated” tools to “produce reliable targets quickly and accurately.” One unnamed senior officer said the new AI-powered tools are being used for the “first time to immediately provide ground forces in the Gaza Strip with updated information on targets to strike.”

This is a grave escalation in the use of AI, with consequences not just for Palestinians but for the international community. The technology being tested in Gaza will almost certainly be exported as part of Israel’s large and powerful weapons technology sector. Put simply, the AI algorithms used to attack Palestinian targets could soon crop up in other conflicts from Africa to South America.

Biden’s executive order specifically addresses issues related to AI safety, consumer protection, and privacy. The order requires safety assessments of new and existing AI platforms, equity and civil rights guidance, and research on AI’s impact on the labor market. Some AI companies will now be required to share safety test results with the US government. The Commerce Department has been directed to create guidance for AI watermarking and to establish a cybersecurity program to develop AI tools that help identify flaws in critical software.

While the US and other Western countries have been slow to draft comprehensive AI regulations, there has been some movement in recent years. Earlier this year, the National Institute of Standards and Technology (NIST) outlined a comprehensive AI risk management framework. The document became the basis for the Biden administration’s executive order. Critically, the Biden administration has empowered the Commerce Department, which houses NIST, to help implement aspects of the order.


The challenge will now be securing buy-in from leading American technology companies. Without their cooperation and a legal framework to punish companies that don’t follow the rules, Biden’s order won’t amount to much.

There is still a lot of work to be done. Technology companies have largely been able to operate with little oversight over the past two decades. This is partly because the tech world is so interconnected: companies often build new products and services outside the US. Amazon’s groundbreaking AWS cloud-hosting technology, for example, was largely developed by a team in Cape Town, South Africa, far from the reach of American regulators.

With honest buy-in from leading companies, the Biden administration could seek more comprehensive laws and regulations. Direct government involvement in technology always runs the risk of stifling innovation. Yet there is a clear opportunity for smaller countries with knowledge economies to step in. Countries like Estonia and the UAE, which have invested in their knowledge economies and have small populations (and nimble regulatory environments), can follow Biden’s lead on AI safeguards. This would have a powerful effect in cities like Dubai, where multinational tech companies have set up regional offices. Because there is less red tape in these smaller countries, AI regulations can be pushed through quickly and, perhaps more importantly, amended if they stifle development too aggressively.

Given the hyper-connected world of technology development, the international community can’t wait for larger countries or blocs like the United States and the European Union to push through legislation first. Instead, emerging markets with their own tech economies to protect should push ahead with regulations that work for their needs.


The development of AI technology is happening at a remarkable pace. Because AI is so essential to the overall technology sector, we don’t have the luxury of waiting for the world’s largest powers to act first. It’s time to lead by example, and AI regulation is an ideal place to start.

Joseph Dana is a writer based in South Africa and the Middle East. He has reported from Jerusalem, Ramallah, Cairo, Istanbul, and Abu Dhabi. He was formerly editor-in-chief of emerge85, a media project based in Abu Dhabi exploring change in emerging markets. Twitter: @ibnezra
