Get a clue, says panel on bustling AI tech: It’s being ‘deployed as surveillance’

Some of the biggest names in AI showed up at a Bloomberg conference in San Francisco today, including, briefly, OpenAI's Sam Altman, who just wrapped up his two-month world tour, and Stability AI founder Emad Mostaque. However, one of the most compelling conversations occurred later in the afternoon, at a roundtable discussion on the ethics of AI.

Featuring Meredith Whittaker, president of the secure messaging app Signal; Navrina Singh, co-founder and CEO of Credo AI; and Alex Hanna, director of research at the Distributed AI Research Institute, the panel had a unified message for the public: don't get so distracted by the promises and threats associated with the future of AI. It's not magic, it's not fully automated and, according to Whittaker, it's already intrusive beyond anything most Americans seem to understand.

Hanna, for example, pointed to the many people around the world who help train today's large language models, suggesting that their contributions are being overlooked, partly because of the breathless coverage of generative AI, partly because the work is unglamorous and partly because it doesn't fit the current narrative about AI.

Said Hanna: "We know from reporting . . . that there's an army of workers doing behind-the-scenes annotation to get these things working at every level: workers with Amazon Mechanical Turk, people working with [the training data company] Sama, in Venezuela, Kenya, the U.S., actually all over the world . . . They are actually doing the tagging, while Sam [Altman] and Emad [Mostaque] and all these other people will say these things are magical. No. There are humans. . . . These things have to look like they're self-contained and have this veneer, but there is so much human labor underneath."

Comments made separately by Whittaker, who previously worked at Google, co-founded NYU's AI Now Institute and advised the Federal Trade Commission, were even more pointed (and, judging by the audience's enthusiastic reaction, also impactful). Her message was that, as enchanted as the world may now be with chatbots like ChatGPT and Bard, the technology underpinning them is dangerous, especially as power grows more concentrated in the hands of those at the top of the advanced AI pyramid.

Said Whittaker: "I'd say maybe some of the people in this audience are the users of AI, but the majority of the population is the subject of AI. . . . It is not a matter of individual choice. Most of the ways that AI interpolates our lives and makes determinations that shape our access to resources and opportunities happen behind the scenes, in ways we probably don't even know about."

Whittaker gave the example of someone who walks into a bank and applies for a loan. That person might be denied and "have no idea that there's a system in [the] back probably powered by some Microsoft API that determined, based on scraped social media, that I wasn't creditworthy. I'm never going to know [because] there's no mechanism for me to know this." There are ways to change that, she continued, but overcoming the current hierarchy of power to do so is next to impossible, she suggested. "I've been at the table for like, 15 years, 20 years. Being at the table with no power is nothing."

Certainly, many people with little power might agree with Whittaker, including current and former OpenAI and Google employees who have reportedly been wary at times of their companies' approach to launching AI products.

Indeed, Bloomberg moderator Sarah Frier asked the panel how concerned employees can speak up without fear of losing their jobs, to which Singh, whose startup helps companies with AI governance, replied: "I think a lot of it depends on the company's leadership and values, to be honest. . . . Over the past year, we've seen case after case of responsible AI teams being fired."

Meanwhile, there's a lot more that ordinary people don't understand about what's going on, Whittaker suggested, calling AI "a surveillance technology." Facing the crowd, she elaborated, noting that AI "requires surveillance in the form of these massive datasets" that entrench and expand the need for ever more data, and ever more intimate collection. "The solution to everything is more data, more knowledge, pooled in the hands of these companies. But these systems are also used as surveillance devices. And I think it's really important to recognize that it doesn't matter whether an AI system's output is produced through probabilistic statistical estimation, or whether its data comes from a cell tower that's triangulating my location. That data becomes data about me. It doesn't need to be correct. It doesn't need to reflect who I am or where I am. But it has a power over my life that is significant, and that power is being placed in the hands of these companies."

Indeed, she added, "the Venn diagram of AI concerns and privacy concerns is a circle."

Whittaker obviously has her own agenda, up to a point. As she said at the event, "there is a world where Signal and other legitimate privacy-preserving technologies persevere" because people grow less and less comfortable with this concentration of power.

But also, if there isn't enough pushback now, as advances in AI accelerate and their impacts on society accelerate with them, we'll continue down a hype-filled road to AI, she said, "where that power is entrenched and naturalized under the guise of intelligence, and we are surveilled to the point [of having] very, very little power over our individual and collective lives."

That concern, she added, is existential, and much bigger than the framing AI is typically given.

We found the discussion engaging; if you want to see it in full, Bloomberg has posted it here.

Above: Signal President Meredith Whittaker


