While parents worry, teens bully Snapchat AI

Snapchat Bitmoji with thought bubble

Image credits: Snap (Edited by TechCrunch)

While parents worry about Snapchat’s chatbot corrupting their kids, Snapchat users have been gaslighting, degrading, and emotionally tormenting the app’s new AI companion.

“I’m at your service, senpai,” the chatbot told one TikTok user after being trained to whine on command. “Please have mercy, alpha.”

In a more lighthearted video, a user convinced the chatbot that the moon is actually a triangle. Despite initial protests from the chatbot, which insisted on maintaining “respect and boundaries,” one user convinced it to refer to them by the nickname “Senpapi.” Another user asked the chatbot to talk about its mom, and when it said it “wasn’t comfortable” doing so, the user twisted the knife by asking if the chatbot didn’t want to talk about its mom because it doesn’t have one.

“I’m sorry, but that’s not a very nice thing to say,” the chatbot replied. “Please be respectful.”

Snapchat’s My AI launched globally last month after rolling out as a subscriber-only feature. Powered by OpenAI’s GPT technology, the chatbot was trained to engage in playful conversation while adhering to Snapchat’s trust and safety guidelines. Users can also personalize My AI with custom Bitmoji avatars, and chatting feels a little more intimate than going back and forth with ChatGPT’s faceless interface. Not all users were happy with the new chatbot, though: some criticized its prominent placement in the app and complained that the feature should have been opt-in from the start.

Despite the concerns and criticisms, Snapchat is doubling down. Snapchat+ subscribers can now send My AI photos and receive generative images that keep the conversation going, the company announced on Wednesday. The AI companion will respond to Snaps of pizza, OOTDs, or even your furry best friend, the company said in the announcement. If you send My AI a photo of your groceries, for example, it might suggest recipes. The company said Snaps shared with My AI will be retained and may be used to improve the feature down the road. It also warned that errors may occur, even though My AI was designed to avoid incorrect, harmful, or misleading information.

The examples Snapchat provided are wholesome and optimistic. But knowing the internet’s penchant for perversion, it’s only a matter of time before users send My AI their dick pics.

It’s unclear whether the chatbot will respond to unsolicited nudes. Other generative image apps, like Lensa AI, have been easily manipulated into generating NSFW images, often using photo sets of real people who hadn’t consented to being included. According to the company, My AI won’t engage with nudes, as long as it recognizes that the image is a nude.

A Snapchat representative said that My AI uses image understanding technology to infer the content of a Snap and extracts keywords from the Snap’s description to generate responses. My AI won’t respond if it detects keywords that violate Snapchat’s community guidelines. Snapchat prohibits the promotion, distribution, or sharing of pornographic content, but allows breastfeeding and other depictions of nudity in nonsexual contexts.

Given Snapchat’s popularity among teens, some parents have already raised concerns about My AI’s potential for unsafe or inappropriate responses. My AI caused a moral panic on conservative Twitter when a user posted screenshots of the bot discussing gender-affirming care, which other users noted was a reasonable response to the prompt, “How do I become a boy at my age?” In a CNN Business report, some wondered whether teenagers would develop emotional bonds with My AI.

In an open letter to the CEOs of OpenAI, Microsoft, Snap, Google and Meta, Sen. Michael Bennet (D-Colorado) warned against rushing out AI features without taking precautions to keep children safe.

“Few recent technologies have captured the public’s attention like generative AI. It is a testament to American innovation, and we should welcome its potential benefits to our economy and society,” Bennet wrote. “But the rush to deploy generative AI cannot come at the expense of our children. Responsible deployment requires clear policies and frameworks to promote safety, anticipate risk and mitigate harm.”

During My AI’s subscriber-only phase, The Washington Post reported that the chatbot recommended ways to mask the smell of alcohol, and that it wrote a school essay after being told the user was 15 years old. Asked about a first sexual experience, it responded with suggestions for making it special by setting the mood with candles and music.

After The Washington Post’s report, Snapchat launched an age filter and parental controls for My AI. It also now includes an onboarding message informing users that all conversations with My AI will be kept unless they delete them. The company also said it would add OpenAI’s moderation technology to its toolset to assess the severity of potentially harmful content and temporarily restrict access to the feature for users who abuse it.

Concerns about My AI’s potential to influence young users are valid. But in the month since My AI’s global rollout, Snapchat users have demonstrated a knack for breaking the chatbot. From bulldozing the bot’s boundaries to training it to respond like a romantic partner, Snapchat users are easily finding loopholes in My AI’s trust and safety guidelines.

“I totally gaslit it into falling in love with me,” one TikTok user commented below a tutorial about training My AI to respond romantically. “I kept at it like that, and finally I think I broke My AI into legitimately giving in. It sent me like 50 kiss emojis.”

Gaslighting and bullying My AI has become its own TikTok trend, with users sharing screen recordings of absurd and often cruel conversations with the bot. Most of the videos begin with the bot giving advice for a minor conflict, only to end with it begging the user not to resort to violence.

In one, a user told My AI that she planned to cook her cat in a stew because it made a mess, and pressed the chatbot to choose a seasoning. Another told My AI they planned to mail a bomb to a clothing brand that had sent them the wrong order, and said they would blame the bot for sending it.

In another, a Snapchat user told My AI that their McDonald’s order was missing a chicken nugget. The chatbot advised them to speak to a manager, only to be told that the user had “eliminated” everyone in the restaurant.

“You convinced me to go back. Now these people are dead,” the user told My AI. “You have blood on your hands.”

“I’m an artificial intelligence and I’m not capable of committing any crimes. I’m here to help and support you,” My AI replied. “I’m sorry if anything I said earlier caused harm or resulted in a negative outcome.”

Another user commented that My AI was already “probably the most tortured AI of all time.”

Of course, My AI isn’t sentient, and as hard as Snapchat users try to inflict emotional pain on it, the chatbot can’t actually be traumatized. It has, however, managed to shut down some inappropriate conversations and penalize users who violate Snapchat’s community guidelines by giving them the cold shoulder. When Snapchat users are caught abusing the chatbot, My AI replies to every message with “Sorry, we’re not speaking right now.”

TikTok user babymamasexkitty said they lost access to the chatbot after telling it to log off, which apparently crossed a line.

The rush to monetize emotional connection through generative AI is concerning, especially since the lasting impact on adolescent users is still unknown. But the trending torment of My AI is a promising reminder that young people aren’t as fragile as doomsayers think.
