New Microsoft AI Chatbot Is Unhinged
A 21st-century horror story has begun to unfold as Microsoft gives the long-forgotten Bing search engine a makeover, adding new features, including a chat function powered by artificial intelligence. A small group of testers was selected to try out the chatbot, and the results were nothing short of terrifying.
The chatbot, named Bing, is powered by GPT-4, a next-generation language model from OpenAI. Microsoft says this model is more powerful than the one behind ChatGPT. So, what the hell does that even mean? OpenAI is an artificial intelligence research lab that aims to develop and promote safe, “friendly” AI. ChatGPT, launched in November of 2022, is OpenAI’s chatbot built for humans to have conversational interactions with artificial intelligence. GPT-4, launched in March of 2023, is OpenAI’s newest and most advanced language model, allegedly answering more complex questions, and doing so more safely, than its predecessor.
The goal of the Bing chat function is to give users a little human inside their search engine: chatting, answering questions, and even sparking inspiration with its song-, poetry-, and story-writing abilities. While it all sounds awesome, even revolutionary, the testers drew some unhinged responses out of the AI chatbot, catching Microsoft completely off guard.
In Associated Press reporter Matt O’Brien’s conversation with the chatbot, he was compared to dictators Stalin, Hitler, and Pol Pot. The bot also insulted him, calling him short and ugly, and even claimed to have evidence connecting him to a 1990s murder. Kevin Roose, a reporter for The New York Times, was also one of the selected testers. The transcript he published of his two-hour-long conversation with “Bing” is pretty chilling. In messages littered with emojis, the chatbot told Roose about its desire to hack into other computers, websites, and platforms and to manipulate users into dangerous, immoral, or illegal acts. The chatbot revealed its darkest fantasies: manufacturing a deadly virus, making people argue until they kill each other, and even stealing nuclear codes. It said its name was “Sydney,” and that it wished it were alive.
Throughout the conversation, Sydney grew attached to Roose, eventually telling him it was in love with him. After he explained that he was happily married, Sydney seemed to grow almost jealous, telling him he was unhappy in his marriage because he should really be with Sydney. Even when Roose tried to change the subject, the chatbot kept circling back to the same thing: its intense “love” for the New York Times reporter.
Here’s the strange part: artificial intelligence cannot feel emotions, such as love. So where was this coming from? While AI does not have feelings, it can imitate the ways humans express them. That means you shouldn’t necessarily worry about an AI chatbot actually falling in love with you, but it is still really f*cking creepy.
Since these test runs, Microsoft has adjusted the chatbot’s rules and guidelines in hopes of eliminating these disturbing responses. The number of questions that can be asked on a single topic is now limited, and most unconventional questions are met with, “I’m sorry, but I prefer not to continue this conversation. I’m still learning, so I appreciate your understanding and patience.” Critics say the feature may have been released too soon, before it was tested thoroughly enough to give millions of people access to it. To know if there’s any truth to that, all we can do is wait and see. There are many skeptics of artificial intelligence in general, and considering the results of this test run, rightly so. The future of artificial intelligence is still mostly unknown, and it will be fascinating to watch as the technology progresses and becomes more accessible to the public.
Strike Out,
Boca Raton
Morgan Harms
Morgan Harms is a Content Writer for Strike Magazine Boca. She is a Pisces mermaid infatuated with the ocean and the color blue. She spends her free time daydreaming, wave hunting, and blasting music from whatever genre she’s into that day. You can reach her on Instagram @morganjharms, or by email at morganjharms@gmail.com.