The fight to control the AI

LULEÅ, Sweden: Top European and American officials met in Sweden on Wednesday for talks on technology and trade, and tried to find an answer to one of the most difficult problems facing the world: how to control artificial intelligence.

Over an hour-long lunch of cod loin and chocolate praline, officials from Washington and Brussels drafted a voluntary code of conduct designed to prevent harm, including from the most advanced artificial intelligence: the generative AI technology behind OpenAI's ChatGPT and Google's Bard. In just a few months, the technology has taken the public by storm, sparking both hopes and anxieties about the future of humanity.

While some have raved about AI’s potential to generate computer code and solve medical problems, others fear it will put millions out of work and could even threaten safety.

"Democracy must show that we are as fast as technology," Margrethe Vestager, Europe's digital commissioner, told reporters as she arrived at the EU-US Trade and Technology Council (TTC) summit in Luleå, a small industrial town 150 kilometers south of the Arctic Circle.

The TTC has become a biannual meeting where senior transatlantic leaders like US Secretary of State Antony Blinken and European Union trade chief Valdis Dombrovskis brainstorm common approaches to everything from semiconductors to green technology investments. This week's fourth edition is dominated by how to push back against China, an area where the two sides are still struggling to agree.

But when it comes to the rise of artificial intelligence, the US and the EU are increasingly eager to move forward together.

"It's coming at a pace like no other technology," said Gina Raimondo, US Commerce Secretary, referring to generative AI. "It will take some time for the US Congress or national parliaments or our other regulatory agencies to catch up."

But the joint plan is still in draft form, at best. Vestager told POLITICO that the voluntary code of conduct was currently a two-page information note produced by the European Commission that she personally delivered to Raimondo on Wednesday.

The goal, according to the Danish politician, is to develop non-binding standards on transparency, risk audits and other technical details for companies developing the technology. These would then be presented to G7 leaders as a joint transatlantic proposal in the autumn.

With mandatory AI regulations years away, a voluntary code is, at best, a fallback until binding legislation is in place.

"Democracy must show we are as fast as technology," said European Digital Commissioner Margrethe Vestager as she joined the EU-US Trade and Technology Council | Jonas Ekstromer/TT News Agency/AFP via Getty Images

"We agree that we will work on this, bringing colleagues on board, in order to insert ourselves into the G7 process," Vestager told reporters.

If that effort fails, it could potentially leave an opening for China to promote its own authoritarian version of the technology around the world.

Where Europe and the United States diverge

Yet a huge AI-shaped gap remains between Washington and Brussels on rules.

The EU, buoyed by a track record of writing much of the digital regulation that now dominates the Western world, is moving ahead with mandatory AI rules that would bar companies from using the technology in predefined harmful ways. European officials hope to finalize the EU's AI law by the end of December, after difficult political negotiations that have dragged on for more than two years.

But European countries and Members of the European Parliament, both of whom must agree on a final text, are at loggerheads over some key aspects, particularly facial recognition in public places. The tech industry, meanwhile, has resisted what it sees as overly onerous oversight of generative AI.

The effort in Brussels has US industry, which is investing billions of dollars in artificial intelligence, looking to the EU for concrete legislation, just as it did when the bloc began legislating on privacy and online content.

The US, on the other hand, prefers a more hands-off approach, relying on industry to devise its own safeguards. Continued political divisions within Congress make it unlikely that any specific AI legislation will pass before next year's US election.

The Biden administration has made international collaboration on AI a policy priority, especially since most of the major AI companies, like Google, Microsoft and OpenAI, are based in the United States. For Washington, helping these companies compete against Chinese rivals is also a national security priority.

In recent weeks, the White House has thrown its doors wide open to the industry, hosting the CEOs of four major AI companies in early May for a private discussion. It has initiated efforts to get tech companies to commit to voluntary rules on responsible behavior. And when it comes to setting international standards, it has pushed the risk management framework developed by the US National Institute of Standards and Technology.

Building the West's approach

On Wednesday, senior US and EU officials sought to bridge these gaps with an approach based on existing global principles proposed by the Organisation for Economic Co-operation and Development. They aim to go beyond the OECD by specifically addressing the potential pitfalls of generative AI.

A framework agreement would give companies more certainty about how this emerging technology will be governed by the two largest Western economic blocs. The aim is to fast-track a voluntary code, though it will likely build on existing European standards for AI, and it is unclear whether US officials and companies will support such an approach.

"Regulatory clarity will be a good thing," Sam Altman, chief executive officer of OpenAI, the technology company behind ChatGPT, said at an event in Paris last week, during a European tour that also took in Warsaw, Madrid, Munich and London. The OpenAI chief met virtually with Vestager on Wednesday to discuss the proposed voluntary code of conduct.

However, there are doubts about whether the EU is speaking with one voice.

Some officials in Brussels hope to pre-empt some of the bloc's rules with a so-called AI Pact, a separate voluntary commitment that companies can sign up to ahead of the forthcoming European AI law, likely to come into force in 2026.

Thierry Breton, the EU's internal market commissioner, said any regulatory coordination with like-minded partners such as the US would build on the existing European approach. "If others want to draw inspiration, of course, they are welcome," he said.
