Europe’s push on AI rules aims to recast global tech

Posted by: Tama Putranto
6 min read

In making sure companies act in socially desirable ways, regulators generally seek to focus their attention on actions that might cause harm. But what happens when, instead of just looking at the output of corporate decision-making, they start also trying to control the inputs?

That is essentially what the European Commission is proposing to do in groundbreaking rules on artificial intelligence outlined this week. This first attempt to create a legal framework covering the development and use of AI delves into murky areas previously beyond the scope of external oversight.

In future, it seems, it won’t just be what companies and governments do that matters, but how they arrive at key decisions in the first place.

This makes Europe’s draft rules a notable extension of regulation into the guts of technology systems. As AI is embedded into more and more business processes, the rules get close to regulating a core part of the decision-making apparatus inside companies. And it is purposefully open-ended: starting with only a handful of the most sensitive uses of AI, the commission reserves the right to extend its oversight in future.

The attempt to wrestle AI systems into some form of regulatory framework is understandable. Opaque and often mistrusted, algorithms are coming to play a part in shaping the way many important government and corporate decisions are made. But the implications are profound.

The commission’s announcement this week drew attention mainly for proposing to ban the most controversial uses of AI, such as virtually all deployments of facial recognition in public places. Cracking down on automated surveillance is likely to draw wide support.

Less noted were the rules meant to govern a wider class of systems, which will have an impact on companies in many industries — and not just those based in the EU.

The severest penalty, reserved for the most serious breaches of the rules, is a fine of up to 6 per cent of a company’s global revenue. That might seem justified for those who use AI for something the EU has tried to ban outright. But the same heavy penalty also applies to those failing to meet the quality requirements for the data sets used to train machine-learning models, one of the most common, and insidious, ways for bias to creep in.

Not surprisingly, comparisons have been drawn between the new AI rules and the General Data Protection Regulation, Europe’s far-reaching data protection regime. As with the privacy rules, the AI regulation would force companies outside the EU to follow European procedures if the output of their systems affects European citizens. Europe may be well behind the US and China in the AI race, but it is still keeping its nose ahead in tech regulation. This is guaranteed to increase tension with American companies.

It is hard to quibble with the commission’s goal of setting high standards for the development of AI. The question of how to build “ethical AI” systems that treat all subjects fairly has been a hot topic for some time, and making sure all users of the technology follow best practice has merit.

But when that goal is enshrined in laws backed up with heavy penalties, harder questions arise. It is unclear, for instance, exactly how Europe will draw the line in deciding which systems are most risky and in need of regulation, or even whether clear lines are possible.

AI, for instance, will be prohibited outright if it is deemed to involve manipulating people using subliminal techniques in ways that might cause them harm. That sounds laudable. Language, however, is malleable. In the minds of many Facebook critics, this might sound like a fair description of the way social media algorithms already stoke extreme political polarisation.

Similar issues arise with trying to define a “high-risk” AI system — a category deemed by the commission to include anything that might impinge on a citizen’s rights.

At the outset, it explicitly covers the use of AI inside companies for hiring, or for assessing workers for promotion. It also includes AI that affects access to “essential” services, like the granting of credit. A Europe-wide AI board will decide how far this definition of high-risk AI is stretched in future. The rules that apply to systems like these will be exhaustive.

As with all sweeping EU regulations, this one is likely to be years in development and subject to heavy lobbying. But in its bid to shape how automated decisions are made, Brussels has set a clear direction.

richard.waters@ft.com
