The secretary-general of Amnesty International, Agnès Callamard, issued a statement on Nov. 27 in response to several European Union member states pushing back on regulating artificial intelligence (AI) models.
France, Germany and Italy reached an agreement that included not subjecting foundation models of AI to such stringent rules, a core component of the EU's forthcoming AI Act.
This came after the EU received multiple petitions from tech industry players asking regulators not to over-regulate the nascent industry.
However, Callamard said the region has an opportunity to show "international leadership" with robust regulation of AI, and member states "must not undermine the AI Act by bowing to the tech industry's claims that adoption of the AI Act will lead to heavy-handed regulation that would curb innovation."
“Let us not forget that ‘innovation versus regulation’ is a false dichotomy that has for years been peddled by tech companies to evade meaningful accountability and binding regulation.”
She said this rhetoric from the tech industry highlights the "concentration of power" in a small group of tech companies that want to control the "AI rulebook."
Related: US surveillance and facial recognition firm Clearview AI wins GDPR appeal in UK court
Amnesty International is a member of a coalition of civil society organizations, led by the European Digital Rights Network, advocating for EU AI laws with human rights protections at the forefront.
Callamard said human rights abuses enabled by AI are "well documented" and that "states are using unregulated AI systems to assess welfare claims, monitor public spaces, or determine someone's likelihood of committing a crime."
“It is imperative that France, Germany and Italy stop delaying the negotiations process and that EU lawmakers focus on making sure crucial human rights protections are coded in law before the end of the current EU mandate in 2024.”
Recently, France, Germany and Italy were also party to a new set of guidelines developed by 15 countries and major tech companies, including OpenAI and Anthropic, which recommend cybersecurity practices for AI developers when designing, developing, deploying and monitoring AI models.