EU privacy regulation stifling AI innovation, claims open letter masterminded by Meta
The European Union risks becoming an artificial intelligence backwater thanks to a “fragmented and unpredictable” regulatory environment that is damaging the technology’s development.
That’s according to a blunt open letter and newspaper advertisement signed by 49 executives, researchers and industry organizations, notably including the CEOs of SAP, Spotify, and Ericsson.
But it is the appearance on the list of two other signatories — Meta’s CEO Mark Zuckerberg and its chief AI scientist, Yann LeCun — that frames the whole communication, which appears to have been Meta’s idea.
“We are a group of companies, researchers and institutions integral to Europe and working to serve hundreds of millions of Europeans,” the letter began, before getting to the meat of its complaint.
“If companies and institutions are going to invest tens of billions of euros to build Generative AI for European citizens, they require clear rules, consistently applied, enabling the use of European data,” it said.
Unfortunately, it continued, “in recent times, regulatory decision making has become fragmented and unpredictable, while interventions by the European Data Protection Authorities have created huge uncertainty about what kinds of data can be used to train AI models.”
The suggestion is that the EU thinks it is protecting citizens from the dangers of AI development when what it is really doing is sowing chaos by layering different regulations on top of one another.
This centers on two pieces of legislation: the EU’s centerpiece AI Act, which recently came into effect, and 2018’s GDPR, which enforces controls around data privacy.
The former isn’t mentioned in the letter, while the latter, GDPR, is referred to only in passing. The core of the complaint, then, is really the general one that the application of these rules, however well-intentioned and carefully drafted, is hurting tech companies trying to do great things with AI.
The usual complaint about AI is that there are too few rules, and that by the time the world wakes up to this it will be years down the line and too late to head off the technology’s worst excesses.
This letter takes the opposite view: there are too many of the wrong sort of rules, drafted without an understanding of how AI works, and they are hampering the sector in very specific ways.
Getting Meta
In June, Meta told its users it planned to use their Facebook and Instagram data to train its AI models, a controversial move that received pushback from regulators in the EU and UK as well as GDPR complaints from privacy organizations. The company has since modified and restarted its AI data program in the UK but has not been able to do so in the EU.
That regulatory pushback in the EU is likely the context for this latest letter. Although signed by multiple companies, the letter bears the fingerprints of Meta, which hosted it on its servers (as outlined in its privacy policy) and paid for the corresponding advertisement in the Financial Times newspaper.
The text of the letter also references Meta’s Llama large language model (LLM): “Frontier-level open models like Llama — based on text or multi-modal — can turbocharge productivity, drive scientific research, and add hundreds of billions of euros to the European economy.”
However, it warns, “Without them, the development of AI will happen elsewhere — depriving Europeans of the technological advances enjoyed in the US, China and India.”
This, together with the hosting arrangement, implies that the letter was drafted by Meta with the approval of the other signatories. Inevitably, this has generated cynicism that the ostensibly open letter is really a thinly disguised PR campaign by Meta in its showdown with the EU.
In support of the letter, Meta’s president of global affairs, Nick Clegg, tweeted: “Today, dozens of leading European companies, researchers, and developers are calling on the EU to adopt a simplified approach to data regulation, or risk being left behind on AI innovation.”
His message received short shrift from Robert Maciejko, US-based co-founder of the INSEAD AI community, and a strident critic of Meta’s approach.
“Really sad that you are part of this Orwellian doublespeak. Translated to English: Mark Zuckerberg wants the right to use YOUR data and property for his own gain forever, without asking or compensating you. And by ‘you,’ I mean everyone in the world – individuals, companies, brands, countries, etc. If his reproductions end up replacing your work, that’s your problem, not his,” Maciejko responded on X.
The counterargument to the letter is that far from making life harder for AI developers, regulations such as the EU AI Act offer a firm footing for organizations to make decisions going forward. In other regions, much of this remains up in the air.
None of this should obscure the concern felt by the other signatories to the letter. Exactly how deep that concern runs, however, is hard to tell. Computerworld contacted several of the letter’s signatories, receiving only one response, from SAP.
“SAP encourages policymakers to adopt a risk-based, outcome-oriented approach to AI policy and develop a legal framework that builds on existing laws and avoids duplicating or creating any conflicting requirements. SAP’s ethos is deeply rooted in European values, and our focus is to help close the innovation gap in Europe in a responsible way that safeguards citizens’ wellbeing.”
This steers clear of directly criticizing current EU AI regulations or appearing to side with the argument that private companies should be given free rein.
To sign off, the open letter invites organizations and individuals who agree with its contents to “join us in calling for AI regulatory certainty in the European Union” by adding their signatures via a form that sends their details to Meta.
It’s not clear how these ‘signatures’ will be verified.