The pandemic forced employers to think hard about how much they trust their staff. Vast numbers of companies had to implement remote working last year at a scale they would never otherwise have contemplated. Many were pleasantly surprised. They discovered they could rely on employees to get on with their jobs without slacking. Indeed, studies suggest that — far from taking it easy — home workers are putting in more hours than ever.
But other employers couldn’t let go. Daunted by the thought of losing visibility over staff, they panic-bought software to surveil them, much of which claims to use artificial intelligence to monitor infractions and measure productivity. There is now a booming market in cloud systems which promise to keep remote workers in line. Some, such as Controlio, offer a “stealth mode” which makes the system “completely invisible for the user — no icons on the bar or processes in Task Manager.” (Controlio told me it also offers a “GDPR compliance mode” which limits data collection, and an option to warn employees they are being surveilled.)
With AI spreading quickly but quietly into people’s homes, it is timely that the EU published draft rules last week on how it should be used in a range of different settings. The proposed rules say that AI used for “the recruitment and selection of persons, for making decisions on promotion and termination and for task allocation, monitoring or evaluation of persons in work-related contractual relationships” should be classed as “high risk”. Providers of “high risk” AI systems will have to meet certain obligations, such as giving clear information about how they work, using high-quality data sets, and allowing for human oversight.
Brussels is right to describe AI at work as risky. There is often a big power imbalance between employer and employee, especially in workplaces without collective bargaining. If you want to get or keep a job, you might agree to things you wouldn’t otherwise be comfortable with.
What happens at work can also have a large impact on the rest of your life, and vice versa. When I spoke to a call centre employee in the US who was monitored by AI, she couldn’t work out why the algorithm was rating her poorly when her human supervisors had always given her good assessments. She suspected the AI was struggling with her accent but she had no way to challenge it. The ratings affected her bonus payments, a big part of her monthly pay. The blurring line between work and home is another reason to view workplace AI as potentially problematic. The international federation UNI Global Union has picked up a sharp increase in the number of call centre staff working from home and being monitored by AI-enabled webcams, for example.
This is not to say that all AI at work is a problem. Human managers often make biased decisions. AI might help them do a better job of hiring a diverse workforce, and promoting on merit rather than favouritism. But employees should have the right to know when AI is being used, how it claims to work, and be able to appeal its judgments. “Computer says no” isn’t sufficient when your job is at stake.
More transparency might also help to curb the spread of what Valerio De Stefano, a labour law professor in Belgium, calls AI “snake oil”: the many products on the market that claim to use AI but are really nothing of the sort. Plenty of so-called AI systems claim to calculate individual “productivity scores” for workers, for example, but the metrics under the hood are as basic as how often the person sends emails (personally, I would view this as a contra-indicator of true productivity). Employees should not be used as guinea pigs for pseudoscientific appraisals which often substitute for proper management.
But the EU regulations as currently drafted might not be equal to this challenge. The onus will initially be on the providers of the AI systems to assess their own compliance with the rules. Member states, meanwhile, will be expected to designate a national authority to “supervise the application and implementation” and provide “market surveillance”. “Self-assessment is not a joke, nobody is saying it is completely useless, but it might not be enough in the workplace,” says De Stefano. He also argues the rules must not become a new ceiling on regulation in the bloc, since other countries such as France and Germany have already gone further to curb some types of surveillance.
The EU is right to worry about how to protect workers from the risks posed by intelligent machines. It is even more important to protect them from stupid ones.