Keys to making AI a force for social good

Artificial-intelligence systems are shaping the contours of our lives. With applications in agriculture, health care, education, transportation, manufacturing and the media, AI has become as pervasive as the Internet.

While it can significantly improve the well-being of humanity, it also has certain downsides – reinforcement of human biases, displacement of jobs and industries, and privacy risks. Thus, like all technologies, it requires governance to create an enabling environment and regulatory policies that maximize its benefits and reduce risks. 

AI governance, however, poses many challenges. There is no single, universal idea of what its goals and outcomes should be. For instance, an aviation safety system seeks to prevent accidents. But AI regulators cannot have a similar exclusive aim. 

Moreover, since AI is not a single application but an underlying technology with diverse uses, terms like “good AI” or “bad AI” are as meaningless as “good electricity” or “bad electricity.” Thus governance must take into account the range of contexts and uses of AI.  

Further complicating matters is the speed with which AI learns and evolves, often in ways that are not understood. Because our current regulatory models cannot keep pace with these rapid changes, they risk stifling innovation while failing to prevent harmful AI applications.

Confronted with these unprecedented challenges, what must we do? 

To explore potential roadmaps, we build upon insights shared by industry experts, government officials, leading thinkers and practitioners at two critical events we at the Rockefeller Foundation were part of: the AI for Social Good Summit and the Innovating AI Governance Symposium.

Here’s how policymakers can negotiate some of the challenges posed by AI governance and harness its transformative potential. While each use case may require a bespoke approach until the consequences of a given governance model are fully understood, a few broad strategies stand out.

Strategic regulation 

To promote innovation, governments must create safe, enabling spaces for experimentation. They can do this by deploying AI applications at a limited scale under the observation of regulators. Such pilot tests, paired with an effective feedback loop, can help determine potential benefits and downsides and fine-tune technologies before they are released into the public sphere.

AI localism, the governance of AI use within a community or city, can nurture a bottom-up regulatory approach. It allows policies to be adapted to local conditions and the needs of communities as opposed to a cookie-cutter approach. At the local level, citizens can also closely observe and have more of a say in how AI is used.   

For such regulation to be effective, policymakers must understand the development process of AI systems, their strengths and weaknesses, and the types of data used. They must become more technology-literate and form working groups that enable collaboration among regulators, developers, and users. 

In such collaborative efforts, however, policymakers must regulate acceptable outcomes of AI use rather than specific technologies and applications. 

Take the case of AI systems that determine if applicants are eligible for a loan or a job. There have been incidents of algorithms discriminating against people based on their race, gender or address. In such cases, governance should find and stem biases rather than regulate the mechanism the AI system uses to make decisions. 

They can do this through peer reviews with diverse participants who can challenge each other’s presumptions and ensure representation of different points of view. 
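
To make the idea of auditing outcomes concrete, here is a minimal sketch of the kind of check a bias review might run on an AI system's decisions, using the "four-fifths rule" disparate-impact test sometimes applied in hiring and lending audits. The dataset and column names are hypothetical and not drawn from any system mentioned above.

```python
# Illustrative sketch only: a simple "four-fifths rule" disparate-impact check
# on hypothetical loan-approval decisions produced by an AI system.
import pandas as pd

def disparate_impact_ratio(df: pd.DataFrame, group_col: str, outcome_col: str) -> float:
    """Return the ratio of the lowest group approval rate to the highest."""
    rates = df.groupby(group_col)[outcome_col].mean()
    return rates.min() / rates.max()

# Hypothetical decisions, grouped by a protected attribute
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "B", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   0,   0,   0,   1],
})

ratio = disparate_impact_ratio(decisions, "group", "approved")
# A ratio below 0.8 is a common red flag for adverse impact
print(f"Disparate impact ratio: {ratio:.2f} -> "
      f"{'review needed' if ratio < 0.8 else 'ok'}")
```

The point of such a check is that it looks only at outcomes across groups, not at the internal mechanism the AI system uses to make its decisions.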

Manage job disruptions 

While AI will create many jobs, it will also upend existing ones, which has prompted a pushback against certain technologies. For instance, taxi drivers have lobbied against self-driving vehicles. And while various studies show that job losses to automation could run into the millions, others, such as the World Economic Forum’s, project that AI will create more jobs than it displaces.

Thus policymakers need to address the concerns of those who might lose jobs and create alternatives such as reskilling, job transition support, and employment guarantees. They must also strengthen social safety nets to cushion the impact of job losses. 

In the long run, they must overhaul education systems to focus on lifelong learning and help workers transition between careers rather than preparing them for a single one. Moreover, with the demand for AI workers exceeding the supply, governments will have to develop and retain talent to capitalize on the AI revolution.

Nurture AI markets 

While governments have been investing in research, technologies, and infrastructure to promote AI, these measures alone are insufficient. They should further boost the AI ecosystem by identifying potential uses of AI applications and encouraging prospective customers within and outside the government to adopt them.

By becoming a market player and facilitator, governments can create a demand for AI applications that contribute to social good. India, for example, is setting up a National Center for Artificial Intelligence to incorporate AI in government applications and public service delivery with support from a team of young professionals from the International Innovation Corps.  

Address privacy concerns 

Some regard stronger privacy protections as an impediment to the growth of AI as they could limit data availability. However, that does not have to be the case.

For instance, policymakers can safeguard privacy by legislating frameworks to anonymize data. This ensures that sensitive data is available to AI applications without compromising the privacy of individuals. Japan has taken this route with a law permitting research institutions to use the anonymized medical data of patients collected by hospitals. 

Policymakers can also establish data stewards that make sensitive data accessible by anonymizing and centralizing it. Such stewards could ensure the proper management and sharing of data, with informed consent and the requisite permissions from data providers.
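
As a rough illustration of the kind of anonymization such frameworks rely on, here is a minimal sketch that replaces a direct identifier with a keyed, irreversible token before a record is shared with researchers. The field names and salt are hypothetical, and production-grade anonymization (k-anonymity, differential privacy) goes well beyond this.

```python
# Illustrative sketch only: pseudonymizing a direct identifier with a keyed hash
# before sharing a record. Field names and the salt are hypothetical.
import hashlib
import hmac

SECRET_SALT = b"replace-with-a-securely-stored-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a keyed, irreversible token."""
    return hmac.new(SECRET_SALT, value.encode("utf-8"), hashlib.sha256).hexdigest()[:16]

record = {"patient_id": "JP-2048-7731", "age_band": "40-49", "diagnosis": "type 2 diabetes"}

shared_record = {
    "patient_token": pseudonymize(record["patient_id"]),  # identifier removed
    "age_band": record["age_band"],                        # generalized attribute kept
    "diagnosis": record["diagnosis"],
}
print(shared_record)
```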

Global cooperation 

The adoption of these strategies to govern AI varies across nations. Their socio-economic conditions, access to technology, and ethical approaches determine to a large extent whether they can channel AI for social good. Thus to ensure that AI systems create a more inclusive playing field and do not deepen disparities among countries, we require global governance.  

It could incorporate aspects from models of international cooperation on climate change, arms control, trade, and finance. Just as the Bretton Woods Conference toward the end of World War II resulted in new regulatory frameworks and multilateral institutions such as the World Bank, global efforts could address the challenges of AI governance.  

There already are initiatives in this direction. In 2019, the Council of Europe set up the Ad-hoc Committee on Artificial Intelligence, which is working toward a legally binding international treaty. Last year saw the establishment of the Global Partnership on AI. The Covid-19 pandemic has further reinforced the need for global cooperation. 

As we transition from principles to practice, the next step in establishing flexible yet robust governance is to work together toward a common set of goals that spur inclusive, innovative AI and better govern the world of tomorrow, as described in the Rockefeller Foundation’s report AI+Governance: Bold Action and Novel Approaches.

Deepali Khanna is the managing director of the Rockefeller Foundation’s Asia Regional Office and oversees the foundation’s policy, advocacy, grant-making, and strategic partnerships in Asia. She has more than three decades of experience across Asia, Africa and North America leading social-impact initiatives at organizations such as the Mastercard Foundation, Plan International and UNICEF.
