Expanding AI Hall of Shame

Despite the breakneck pace that AI development has already set, this spring has been a lot.

There’s the headline news, of course, like OpenAI CEO Sam Altman warning Congress of the existential risk AI could pose, or yesterday’s open letter saying AI should be treated with the same risk profile as pandemics and nuclear war.

But there’s also the almost constant drumbeat of strange, embarrassing and disorienting AI news: not the stuff of tech-thriller plots, but just as important to keep an eye on as the technology rapidly seeps into society.

“There are equally dangerous problems that are much less speculative, because they are already here,” said Louis Rosenberg, a computer scientist who published an academic paper earlier this year on “Conversational AI as a Threat to Epistemic Agency.” You don’t need a sentient AI to wreak havoc; you just need some sentient humans controlling the current AI technologies.

You could call it the (early-days) AI Hall of Shame. Even AI optimists need to think carefully about what these incidents mean, and what tools we might actually need to tackle this disruptive technology.


Jared Mumm’s students had a rough end to their spring semester. A few weeks ago, the animal science professor at Texas A&M University-Commerce emailed his class to inform them that he had run their essays through ChatGPT to see whether they had been composed by ChatGPT.

Which, the bot dutifully reported, they had. Every student in the class would therefore receive an incomplete grade, potentially endangering their diplomas.

Only they hadn’t. After one student proved via Google Docs timestamps that she had composed her essay herself, Mumm gave his students the opportunity to submit an alternative assignment, and a university spokesperson told the Washington Post that several students had been cleared and their grades issued, while one student came forward admitting his use of [ChatGPT] in the course.

Whatever the end result for those harried students, this example is perhaps the simplest illustration of how blind human faith in AI-generated output can spell disaster. AI gets a lot of things wrong. (Plagiarism detection is a particularly difficult task.) For Mumm’s students, that meant a tense end to the semester, to say the least; in scenarios with less margin for error, it could have far more serious repercussions.

An AI flight of fancy

Take, for example, a lawsuit in federal court. As the New York Times reported over the weekend, a Manhattan federal judge is threatening to sanction a lawyer who submitted a 10-page brief filled with references to fictional decisions and precedents, all fabricated by ChatGPT.

The attorney, Steven A. Schwartz of Levidow, Levidow & Oberman, insisted he had no intention of defrauding the court by citing entirely made-up cases such as Varghese v. China Southern Airlines as part of his client’s personal injury suit against the airline Avianca. As the Times notes, he also said he asked the program to verify that the cases were real, which it dutifully confirmed. Good enough, right?

Not for the judge, who has scheduled a June 8 hearing to discuss potential sanctions against Schwartz. We’re moving up the risk ladder: the law has far less room for error than the classroom, and being overly credulous about AI could threaten not only the credibility of a given case, but that of lawyers (and the legal system) itself.

An unexpected blast radius

And then there are the actual disasters. Well, fake actual disasters that have real consequences, despite the absence of real damage or danger. Washington was rocked last week by a fake video circulating on social media that claimed to show an explosion near the Pentagon, shared most notably by a popular national-security Twitter account with more than 300,000 followers.

There was no explosion. But the video sent very real shockwaves across the country: The S&P 500 briefly fell a quarter of a percentage point. The White House press office went into full crisis-preparedness mode, as West Wing Playbook reported this morning. And Twitter announced it would expand its crowdsourced fact-checking feature, Community Notes, to include images.

This is already pretty bad, and it doesn’t include any of the further scenarios of mass blackmail, propaganda and targeted financial fraud helpfully outlined in a 2022 note from the Department of Homeland Security. How are regulators supposed to know where to start when it comes to AI-proofing our most vulnerable systems?

Seen through the lens of the human error and gullibility driving most of AI’s current harms, the European Union’s risk-based framework, outlined in the draft text of its AI Act, starts to make a lot of sense: the more sensitive the system, the more legal restrictions are placed on the use of AI within it.

“The EU’s AI Act is a good step toward controlling many of the risks of AI,” Rosenberg said, noting that it could be very helpful in regulating the potential harms of institutional uses of AI, such as making decisions about parole, lending or loans.

But outside those institutions there’s still a Wild West of human error, laziness and opportunism, and defending against that will require far more than federal regulatory restrictions.

“Regulators need to focus on the problems ahead, because AI capabilities are moving so quickly,” Rosenberg said. “The EU proposal is very good, but it needs to look a little further ahead. By this time next year, we will all be talking to AI systems on a regular basis, engaging interactively, and we’re not ready for those dangers.”

Should Congress play a bigger role in regulating AI than the executive branch?

Michigan State University professor Anjana Susarla argues that it should, in an editorial recently published in The Conversation. That’s because centralizing AI regulation in a single federal agency, as some have proposed, would come with significant risks.

“Rather than creating a new agency that runs the risk of being compromised by the tech industry it’s meant to regulate, Congress can support private and public adoption of the NIST risk management framework and pass bills like the Algorithmic Accountability Act,” Susarla writes. “That would have the effect of imposing accountability, much as the Sarbanes-Oxley Act and other regulations transformed reporting requirements for companies. Congress can also adopt comprehensive data privacy laws.”

Susarla also argues for the importance of a licensing regime for AI development, building on OpenAI CEO Sam Altman’s call for the same. She says the government must also ensure that people retain rights to their data when it is used to train AI models, echoing the rhetoric of the “data dignity” movement.

The fourth meeting of the EU-US Trade and Technology Council wrapped up today, and POLITICO’s transatlantic reporting team has the details on how the two geopolitical powers are tackling this historic moment in AI.

POLITICO’s Mohar Chatterjee, Mark Scott and Gian Volpicelli reported on the action, which culminated in what they describe as, at best, a rough draft of a plan for transatlantic cooperation on AI. Margrethe Vestager, executive vice president of the European Commission, told the team that a voluntary code of conduct designed to prevent harm caused by artificial intelligence is currently a simple two-page memo, which she personally handed to US Commerce Secretary Gina Raimondo.

In today’s Morning Tech, Mohar added context to the negotiations from Aaron Cooper, vice president of global policy at BSA | The Software Alliance, who suggested that EU and US leaders would be better served by legislating AI according to the relative risk of a given use, much as the EU’s AI Act does.

It’s still an open question how much cooperation is truly possible between the EU’s explicitly rules-based approach and the more hands-off, guidelines-oriented ethos of the Biden administration and Congress thus far. It may stay that way for a while: as Mohar, Mark and Gian write, ongoing political divisions within Congress make it unlikely that any substantive AI legislation will pass before next year’s US election.