November 21, 2024

Snapchat’s “My AI” chatbot poses threats to user safety

Marley Brennan
Executive Opinion Editor
Kayla Kinsey
Opinion Editor


Prominent social media platform Snapchat recently released an artificial intelligence
chatbot to all users, and concerns about its safety continue to grow as user privacy is
exploited. “My AI” threatens user safety through its invasion of personal data, its
tendency to develop biased responses that reinforce user behaviors, and its inappropriate
replies, delivered with an alarmingly personal disposition.

On April 20, Snapchat introduced a new chatbot, “My AI,” powered by the renowned
research and development company OpenAI. According to Snapchat, the chatbot can answer
trivia questions, offer advice, and help plan events. Snap, the company behind Snapchat,
claims that upon initial contact the chatbot warns users that it may use personal
information they share with it to develop individualized responses.

According to Forbes, a chief concern regarding “My AI” is privacy. Testing reported by
CNN has made it apparent that the chatbot knows its users’ locations and collects and
retains information about them. Users’ private data is thus gathered and stored without
any way for them to prevent it, putting their safety at risk.

Per CNN, clinical psychologist Alexandra Hamlet is concerned about Snapchat’s young
demographic and how artificial intelligence can be extremely detrimental to the mental
health of these young users. “My AI’s” responses about mental health prove susceptible to
confirmation bias, the tendency to interpret situations and selectively absorb information
in ways that confirm one’s existing beliefs. Hamlet fears that Snapchat users might seek
out answers from “My AI” simply for reassurance of those beliefs.

Furthermore, according to Fox News, young users may find it difficult to distinguish
the AI’s human-like responses from genuine human connection. Psychological consultant Dr.
Zachary Ginder of Riverside says the AI speaks with clinical authority, meaning the
platform sounds like, and can be mistaken for, a reliable resource. Yet, Marshall says,
the AI also has a tendency to fabricate responses, which can lead to misinterpretations
of information.

These findings matter because a biased platform may not push back against harmful
behavior by its users. This shows that the AI offered on Snapchat is not appropriate for
its audience and could cause serious negative mental health effects.

Although Snapchat told CNN that the company is continuously making improvements as it
gathers feedback, the chatbot still threatens the privacy and safety of all its users.
According to the Washington Post, “My AI” directly endangers teenagers, who can use the
chatbot to seek approval for their actions and advice on how to act on immature
behaviors, yet another instance of confirmation bias.

Overall, artificial intelligence and its implementation in Snapchat remain relatively
experimental. Introducing this technology to young, impressionable demographics is
unpredictable and can lead to dangerous outcomes, as the chatbot’s responses are
occasionally inappropriate and foster confirmation bias.
