Lawmakers struggle to recognize AI-generated emails, study finds

Since OpenAI launched its artificial intelligence (AI) chatbot, ChatGPT, in late 2022, the technology has raised concerns about how it could affect everyday life. A recent Cornell University study addressed one of the latest dilemmas worrying skeptics, suggesting that AI could open up new ways for malicious actors to manipulate representative democracy.

Cornell researchers wanted to see if it was possible to sway lawmakers by using artificial intelligence to generate fake constituent emails. Their study, published March 20 in the journal New Media & Society, found that lawmakers struggle to distinguish between emails written by humans and those written by artificial intelligence.

Such a tactic could potentially be used to influence public policy, said Sarah Kreps, one of the study’s authors.

“We know there are a number of studies showing that letters and emails are influential in agenda setting,” Kreps said.

The researchers asked college students to write emails about various political issues. They then used the GPT-3 AI language model, which uses machine learning to generate text in response to requests, to generate emails on the same topics.

The researchers sent both student- and AI-generated emails to about 7,000 state legislators. They measured the response rate to judge how well lawmakers could recognize that an email was written by AI.

“We know from many different studies that response rates are an important indication of legislative priority, because legislators are time-constrained and therefore won’t respond to things they believe are spam or false,” Kreps said.

The response rate from lawmakers was only 2% lower for AI-generated emails than for those written by humans, a difference the researchers said was statistically significant but substantively small.

Kreps discussed the study’s findings at a May 19 meeting of the President’s Council of Advisors on Science and Technology. She noted that AI allows people to easily generate unique pieces of text. To illustrate why this would be a powerful tool for bad actors, she pointed to a 2017 Pew Research analysis that found that only 6% of regulatory comments to the Federal Communications Commission about net neutrality were unique.

The vast majority of comments had been submitted multiple times, in some cases hundreds of thousands of times. Pew said this was “clear evidence of organized campaigns to flood the comments with repeated messages,” finding evidence that at least some of the comments came from automated bot campaigns.

Kreps said AI could eliminate some of the signs that betray an automated message, making such messages harder to detect and making it harder for legislative offices to allocate their time and energy toward messages from real people.

With AI language models, Kreps said, “you wouldn’t have those telltale signs because every message could be different.”

In an interview, Kreps said that AI could also be used by foreign actors to craft messages in fluent English. Currently, she said, such messages may contain subtle cues that tell the reader the message was not written by a native speaker; AI would strip those cues away, making foreign influence campaigns more difficult to recognize.

Although Kreps said she was unaware of any instances of fake constituent emails sent by malicious actors, she said it was important to keep an eye out for threats that could pose a risk to democracy in the future.

“We have a tendency to fight the last war,” she said. “You want to understand the potential range of risks, not just the ones you see, but the ones that may be on the horizon.”

Kreps suggested a few ways to guard against this problem. She said lawmakers could potentially use computer programs to recognize AI-generated emails, and constituents could rely on more direct forms of communication with lawmakers, such as phone calls and town halls.

Marci Harris, executive director of the POPVOX Foundation, which helps members of Congress and their staff understand new technologies, said the AI-generated email problem presents an opportunity to fix what she calls an already broken system of communication with legislators.

Currently, she said, emails to lawmakers consist largely of mass campaigns by advocacy organizations in which people click a button to send a pre-written message to their representatives.

Harris said this is not high-quality communication between constituents and lawmakers.

“It’s just kind of low-quality interaction for both participants in that old system, and a lot of staff time dedicated to turning the cogs,” she said.

Harris said AI provides an opportunity to rethink how lawmakers and constituents communicate with each other; AI could actually be used to improve that communication, she said.

Harris explained that AI could potentially be used to help constituents draft more informed messages and to more efficiently summarize the multitude of messages lawmakers receive every day.

“I agree there are reasons these new technologies will create difficulties for the old system,” she said. “But I’m very optimistic about how they can be used for better systems in the future.”

Copyright 2023 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.