The promise of artificial intelligence systems is that they are faster, cheaper and more accurate than dim-witted humans. The danger is that they become an unaccountable and uncontestable form of power that only reinforces existing hierarchies and human biases.
As we have seen with protesting students, angry at automatically allocated exam grades, and frontline hospital staff put near the back of the queue for Covid-19 vaccinations, the natural human reaction is to rage against the machine. Ensuring that AI systems are used appropriately is one of the wickedest challenges of our times given their increasing complexity and ubiquity in services as varied as search engines, online marketplaces and hiring applications. How can we do so? Here are five, albeit imperfect, ideas.
The first problem, highlighted in Coded Bias, a documentary film released by Netflix this week, is the shocking lack of diversity among the algorithm-writing classes. The activist academics featured in the film, including Joy Buolamwini, Cathy O’Neil and Safiya Noble, have done an outstanding job of exposing the embedded biases of systems based on inadequate human understanding of complex societal issues and imperfect data.
Part of the issue stems from the skewed demographic composition of the tech industry itself. It must be a priority of public policy, private philanthropy and tech industry practice to encourage more under-represented groups to work in tech. Why has the share of computer science bachelor’s degrees earned by women at US universities more than halved since 1984, to just 18 per cent?
Second, automated systems should only ever be deployed when they have demonstrable net benefits and are broadly accepted by the people most affected by their use. Take AI-enabled facial recognition technology, which can be both useful and convenient in the right contexts. When fully informed, the public tends to accept that trade-offs are sometimes necessary between privacy and safety, especially during security or health emergencies. But people rightly reject the indiscriminate use of flawed technology by unaccountable organisations.
Third, the tech companies that develop AI systems must embed ethical thinking in the entire design process and consider unintended consequences and possible remedies. To their credit, many tech companies have signed up to industry codes of practice focused on transparency and accountability. But their credibility was damaged when two leading ethics researchers left Google after accusing its senior leadership of empty rhetoric.
Fourth, the tech industry can help rebuild trust by subjecting data sets and algorithms to independent scrutiny. The finance industry saw the sense of funding external credit rating agencies to assess the riskiness of various financial instruments and institutions. As became clear during the financial crisis of 2008, such agencies can get things badly wrong. Nevertheless, algorithmic auditing would be a useful discipline.
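To make the auditing idea concrete, here is a minimal sketch, in Python, of one check an external reviewer might run: comparing a model’s approval rates across demographic groups, a gap sometimes called the demographic parity difference. The function, group labels and sample data are hypothetical illustrations, not drawn from any real audit.

```python
# Illustrative audit check: does the model approve one group far more often than another?
from collections import defaultdict

def demographic_parity_gap(decisions):
    """decisions: list of (group, approved) pairs, where approved is 0 or 1."""
    totals, approvals = defaultdict(int), defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        approvals[group] += approved
    rates = {g: approvals[g] / totals[g] for g in totals}
    # The gap is the difference between the highest and lowest approval rates.
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical audit sample: (group label, model decision)
sample = [("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0), ("B", 0)]
gap, rates = demographic_parity_gap(sample)
print(rates)               # approval rate per group
print(f"gap = {gap:.2f}")  # a large gap would flag the model for closer scrutiny
```

Such a check is only one narrow measure; a fuller audit would also examine the training data, error rates across groups and the real-world consequences of wrong decisions.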
Fifth, when it comes to the use of AI systems in the most critical areas, such as self-driving cars or medical diagnoses, it is clear that broader regulation is now needed. Some experts have persuasively argued for the creation of the algorithmic equivalent of the US Food and Drug Administration, established in 1906 to regulate standards.
Some machine-learning algorithms that are trained to find solutions, rather than designed to do so, pose a particular challenge. How they work cannot always be understood or reliably predicted. Their harms can also be diffuse, making remedial litigation difficult. Just as the FDA preapproves pharmaceutical drugs, so this new regulator should scrutinise complex algorithms before they are deployed for life-changing uses.
Even so, we will never be able to solve the issue of algorithmic bias in isolation, especially when there is no societal consensus about the uses of AI, says Rashida Richardson, a visiting scholar at Rutgers Law School. “The problem with algorithmic bias is that it is not just a technical error that has a technical solution. Many of the problems that we see stem from systemic inequality and partial data,” she says.
One hope is that AI-enabled tools can themselves help interrogate such systemic inequality by highlighting patterns of socio-economic deprivation or judicial injustice, for example. No black box computer system compares with the unfathomable mysteries of the human mind. Yet, if used wisely, machines can help counter human bias, too.