The European Parliament has finally reached an agreed position on the proposed AI Act, which will regulate the use of AI across the European Union. As with other legislation covered in this newsletter (see the item on Platform Workers), this opens the door to negotiations between the Parliament, the Council of Ministers, and the Commission to see if an agreed, common text can be found.
Parliament's amendments to the text proposed by the European Commission include:
- A duty to consult with workers and their unions/representatives before introducing AI to the workplace.
- A duty to carry out an assessment of the impact of introducing AI on fundamental rights.
- An opening clause for national legislators to limit the use of AI systems to protect workers’ rights.
The AI Act, as amended by Parliament, would retain a tiered regulatory framework in which risk is assessed according to how the AI is used. The highest risk tier is reserved for certain AI uses considered to pose an “unacceptable risk” to society, including scraping images from social media and other Internet sites to build facial recognition databases, social credit scoring, real-time facial recognition technology, predictive policing, and emotion recognition in governmental, educational, and employment contexts. These uses are banned outright.
Uses of AI that are considered “high-risk,” such as uses in aviation, vehicles, medical devices, and eight other specifically enumerated categories, including human resources decision-making, are permitted but subject to heavy regulation. Operators will need to register their AI systems in an EU-wide database and will be subject to extensive regulatory requirements covering risk management, transparency, human oversight, and cybersecurity, among other areas.
Uses of AI that are considered “limited risk,” such as systems that interact with humans (like chatbots) and AI systems that could produce “deepfake” content would be subject to a limited set of transparency obligations. Uses of AI that do not fall into any of the prior categories are considered “low or minimal risk” and are not yet subject to any regulation.
If a company fails to comply with these regulations, the draft rules impose significant penalties ranging from 2% to 7% of a company’s total worldwide revenue. The unions are still pushing for a stand-alone AI Directive for the workplace. European Trade Union Confederation (ETUC) Deputy General Secretary Isabelle Schömann said:
“Today is a further key step towards ensuring that artificial intelligence is regulated to better protect users in line with European values and which respects human rights.

“The Parliament has made important improvements, such moves must be upheld in trialogue negotiations, while the introduction of ‘significant risk’ must be deleted.

“AI at work must deliver for workers as much as for business: this is the reason why a new dedicated directive is needed to ensure the ‘human in control’ principle is made practice in European workplaces, in consultation with workers through their trade unions, and to secure workers’ rights and protection.”
It is unlikely that such a Directive will be proposed anytime soon, especially with European Parliamentary elections scheduled for 2024, and the new Commission to be appointed later in the year. For now, workplace issues involving AI will be regulated by the AI Act.
This is an interesting article in the UK newspaper The Observer on the impact of AI in the workplace. Some interesting data on AI from the Boston Consulting Group can be found HERE.