Commentary

Commentary: Never mind Big Tech - ‘little tech’ can be dangerous at work too

One organisation's belief in computers over humans led to hundreds of people being wrongly prosecuted. And workplace algorithms can affect work allocation and pay too, says the Financial Times' Sarah O'Connor.

Photo illustration showing a webcam mounted on top of a computer. (Photo: AFP/Magan Crane)

LONDON: An inquiry opened last week into a scandal that has horrified Britons: How hundreds of people running UK post offices were wrongly prosecuted for false accounting and theft on the basis of a faulty computer system.

A judge has called the Post Office’s insistence that the software was infallible “the 21st-century equivalent of maintaining that the Earth is flat”. Yet many other companies are in danger of believing computers over humans as well.

A swath of new technology products has come onto the market in recent years, many of them promising to use artificial intelligence (AI) to manage, coach, score or monitor a company's employees.

So-called algorithmic management has spread from gig platforms and logistics warehouses into a range of other sectors and occupations, hastened by the pandemic.

NEW TECHNOLOGY TO MANAGE OR MONITOR EMPLOYEES?

Coworker.org, a worker organising platform, has compiled a database of more than 550 such products. About 30 per cent emerged between 2020 and 2021, while the rest were developed between 2018 and 2020.

It calls this ecosystem “little tech”: Companies that don’t get the attention the Big Tech giants do, but are nonetheless having an impact on people’s working lives.

The rationale for many of these products is to protect employees’ health and safety. There are temperature checkers, for example, and cameras that monitor whether workers are keeping two metres apart. Others promise to monitor productivity or maintain an employer’s data security in a world where work has shifted from the office to the home.

RemoteDesk, for example, promises to help managers create “an office-like environment through continuous webcam monitoring to secure employee identity and ensure productivity in a remote workspace”.

It says the webcam monitoring “detects suspicious expressions, gestures, or behaviour of a remote agent” and can “capture eating and drinking and flag them as violations” if “food and drink at your desk is prohibited by company policy”.

Appriss Retail, another product, says it uses “machine learning methods that detect employee deviance” in retail stores. Verensics, a third, says its automated interviews for prospective or current employees provide insights to employers that “reflect a person’s true character, for better or for worse”.

WORKERS FIGHT BACK AGAINST ALGORITHMS

The European Union has proposed regulations that would designate workplace AI products “high risk”, requiring providers to allow for human oversight.

In the meantime, some gig workers are challenging automated decisions in the courts using the EU’s General Data Protection Regulation (GDPR). Last year, for example, a court in Amsterdam ordered Ola Cabs to give drivers access to anonymised ratings on their performance, to personal data used to create their “fraud probability score” and to data used to create an earnings profile that influences work allocation.

In the UK, some politicians want to tilt regulation the other way. Last year, a government task force set up to look for deregulatory dividends from Brexit argued that GDPR should “be reformed to permit automated decision-making and remove human review of algorithmic decisions”.

The task force said the rules “should also be changed to permit basic explanations of how automated decisions are made rather than obliging organisations to detail complex information about how their systems work and the logic involved”.

User ratings (Photo: iStock/marchmeena29)

That would be a dangerous approach, seemingly predicated on the idea that the system is perfect, and that inserting a human or explaining its workings would only slow it down.

The risk is not only of more injustices like the Post Office affair, but also of instilling fear and mistrust in people towards systems they don’t understand.

Already, unions in a variety of countries are reporting an upsurge in people approaching them with concerns about workplace technology.

In the US, some gig workers are collecting and comparing their own data to try to establish the impact of their employers’ algorithms on their pay. And it is no longer just gig workers who are fighting back.

Prospect, a UK union with mostly white-collar members, has started to train “data reps” to negotiate over these issues with employers. It says well-paid engineers are beginning to encounter the sort of task-allocation software often used in low-paid warehouse jobs.

“Lots of these were products developed with warehousing and logistics in mind, now they’re translating into the white-collar economy,” says Andrew Pakes, Prospect’s director of communications and research. “Largely we didn’t talk about it, they were jobs in Amazon warehouses, out of sight, out of mind. Now they’re coming for the rest of us.”

Source: Financial Times/pn
