Europe wants platforms to label AI-generated content to fight disinformation

Image credits: Bryce Durbin/TechCrunch

The European Union is leaning on signatories to its Code of Practice on Disinformation to label deepfakes and other AI-generated content.

In remarks yesterday following a meeting with the Code’s more than 40 signatories, EU Values and Transparency Commissioner Vera Jourova said those who have signed up to fight disinformation should put technology to work to recognize AI-generated content and label it clearly for users.

“New AI technologies can be a force for good and offer new avenues for greater efficiency and creative expression. But, as always, we have to mention the dark side: they also present new risks and potential negative consequences for society,” she warned, “including when it comes to creating and spreading disinformation.”

“Advanced chatbots like ChatGPT are capable of creating complex, seemingly well-substantiated content and images in seconds. Image generators can create authentic-looking pictures of events that never happened. Voice generation software can imitate a person’s voice from a sample of a few seconds. These new technologies also pose new challenges for the fight against disinformation. So today I asked the signatories to create a dedicated and separate track within the Code to discuss this.”

The current version of the Code, which the EU strengthened last summer while also confirming its intention to turn the voluntary tool into a mitigation measure that counts toward compliance with the (legally binding) Digital Services Act (DSA), does not currently commit signatories to identifying and labeling deepfakes. But the Commission hopes to change that.

The commissioner said she sees two main angles for adding mitigation measures for AI-generated content to the Code: One would focus on services that integrate generative AI, such as Microsoft’s New Bing or Google’s Bard AI-augmented search, which should commit to building in the necessary safeguards so that these services cannot be used by bad actors to generate disinformation.

A second would engage signatories that have services with the potential to spread AI-generated disinformation to deploy technology to recognize such content and clearly label it for users.

Jourova said she had spoken to Google’s Sundar Pichai and been told that Google has technology capable of detecting AI-generated text content, and that it is continuing to develop the technology to improve its capabilities.

In further remarks during a question-and-answer session with the press, the commissioner said the EU wants labels for deepfakes and other AI-generated content to be clear and immediate, so that ordinary users can see at once that a piece of content presented to them was created by a machine, not a person.

She also specified that the Commission wants platforms to implement such labeling immediately.

The DSA includes provisions that will require very large online platforms (VLOPs) to label manipulated audio and imagery, but Jourova said the point of adding labeling to the Disinformation Code is that it can happen even ahead of the August 25 deadline for VLOP compliance under the DSA.

“I have said many times that our main task is to protect freedom of speech. But when it comes to AI production, I see no right for the machines to have freedom of speech. And so that, too, goes back to the good old pillars of our law. That is why we want to work further on this within the Code of Conduct based on this fundamental idea,” she added.

The Commission also expects to see action on reporting AI-generated disinformation risks next month, with Jourova saying relevant signatories should use their July reports to inform the public about the safeguards they are putting in place to prevent generative AI being misused to spread disinformation.

The Disinformation Code now has 44 signatories in all, including tech giants like Google, Facebook and Microsoft, as well as smaller adtech players and civil society organizations, a tally that is up from the 34 who had signed the commitments as of June 2022.

However, late last month Twitter took the unusual step of withdrawing from the voluntary EU Code.

Other big issues Jourova noted she raised with the remaining signatories in yesterday’s meeting, urging them to take more action, included Russia’s war propaganda and pro-Kremlin disinformation; the need for consistent moderation and fact-checking; election security efforts; and access to data for researchers.

“There is still too much dangerous disinformation content circulating on the platforms and too little capacity,” she warned, highlighting a long-standing Commission complaint that fact-checking efforts are not applied comprehensively to content in all the languages spoken across EU member states, including smaller nations.

“Central and Eastern European countries in particular are under constant attack, mostly from Russian sources of disinformation,” she added. “There is a lot to do. It’s about capacities, it’s about our knowledge, it’s about our understanding of the language. And also an understanding of why, in some Member States, there is fertile or prepared ground for the absorption of so much disinformation.”

Access to data for researchers is still insufficient, she also stressed, urging platforms to step up their efforts on that front.

Jourova also added a few words of warning over the path chosen by Elon Musk, suggesting Twitter has put itself in the crosshairs of EU enforcement as a designated VLOP under the DSA.

The DSA puts a legal obligation on VLOPs to assess and mitigate societal risks such as disinformation, so by flipping the bird at the EU’s Code, Twitter is inviting censure and sanction (fines under the DSA can reach up to 6% of global annual turnover).

“From August this year, our structures, which will play the role of enforcers of the DSA, will be reviewing Twitter’s performance, whether they are compliant, whether they are taking the necessary measures to mitigate risks and to act against especially illegal content,” she also warned.

“The European Union is not the place where we want to see California law imported,” she added. “We have said this many times, and that’s also why I want to come back to and appreciate the cooperation with the former people who worked at Twitter, who collaborated with us [for] already several years on the Code of Conduct against hate speech, and the Code of Conduct [on disinformation] as well. So I am sorry for that. I think Twitter had very knowledgeable and determined people who understood that there has to be some accountability, much greater accountability on the side of platforms like Twitter.”

Asked whether Twitter’s Community Notes feature, which essentially crowdsources (and thus outsources) fact-checking to Twitter’s users, could on its own be enough to satisfy the DSA’s legal requirements to counter disinformation if enough people step in to add consensus context to contested tweets, Jourova said it will be up to the Commission’s enforcers to assess whether or not Twitter is compliant.

However, she pointed to Twitter’s withdrawal from the Code as a significant step in the wrong direction, adding: “The Code of Conduct will be recognized as a very serious and reliable mitigating measure against harmful content.”
