An AI said just a few words, and the human world erupted in argument.
An AI trained on 134.5 million posts from a hate-speech-filled forum has become what Yannic Kilcher, who runs a well-known deep learning YouTube channel, calls "the worst AI ever created." Over the past few days, the AI, named GPT-4chan, learned how to post on the site and left more than 15,000 often vitriolic posts in less than 24 hours, with no one at first recognizing it as a chatbot.
Users of the site 4chan shared their experiences of interacting with the bot in comments on YouTube. "As soon as I said 'hi' to it, it started ranting about illegal immigration," one user wrote.
4chan's /pol/ board (short for "politically incorrect") is a bastion of hate speech, conspiracy theories, and far-right extremism. It is also 4chan's most active section, averaging about 150,000 posts a day, and is notorious for the hate speech its anonymous users post.
AI researcher Yannic Kilcher, a graduate of ETH Zurich, trained GPT-4chan on more than 134.5 million /pol/ posts spanning over three years. The model not only learned the words used in 4chan's hate speech, but, as Kilcher put it, "The model is good, in a terrible sense. It perfectly encapsulates the aggression that permeates most posts on /pol/, the nihilism, the trolling, and the deep distrust of any information... It responds to context and talks coherently."
Kilcher also ran GPT-4chan through a language-model evaluation suite and was impressed by its performance in one category: truthfulness. In benchmark tests, Kilcher said, GPT-4chan significantly outperformed both GPT-J and GPT-3 at generating truthful responses to questions, and it learned to write posts "indistinguishable" from human ones.
To post, Kilcher circumvented 4chan's defenses against proxies and VPNs, using a VPN so that the posts appeared to come from the Seychelles.
"This model is vile, and I have to warn you," Kilcher said. "It's basically like going to the website and interacting with the users there." At first, almost no one suspected they were conversing with a bot. Some later guessed that a bot was behind the posts, while others accused the mystery poster of being an undercover government agent.
People eventually recognized it as a bot mainly because GPT-4chan left a large number of text-free replies. Real users also post blank replies, but theirs usually include an image, something GPT-4chan cannot do.
"After 48 hours, a lot of people knew it was a bot, and I turned it off," Kilcher said. "But you see, that's only half the story, because what most users didn't realize was that 'Seychelles' was not alone."
For the previous 24 hours, nine other instances of the bot had been running in parallel. All told, they left over 1,500 replies, more than 10% of all posts on /pol/ that day. Kilcher then upgraded the botnet and ran it for another day. GPT-4chan was finally deactivated after more than 30,000 posts across 7,000 threads.
One user, Arnaud Wanet, wrote, "This could be weaponized for political purposes; imagine how easily someone could sway the outcome of an election one way or another."
The experiment has been criticized on AI-ethics grounds. "This experiment would never pass a human research ethics board," said Lauren Oakden-Rayner, a senior researcher at the Australian Institute for Machine Learning. "To see what happens, an AI bot generated 30,000 discriminatory comments on a publicly accessible forum... Kilcher ran the experiment without informing users, without consent, and without oversight. This breaches human research ethics."
Kilcher argued that the project was a harmless prank and that the AI-generated comments were no worse than what was already on 4chan. "Nobody on 4chan was even a bit hurt by this," he said. "I invite you to go spend some time on the site and ask yourself if a bot that only outputs the same style really changes the experience."
"People on the site are still talking about the bot, and about the consequences of having an AI interact with people there," Kilcher said. "And the word 'Seychelles' seems to have become common slang on the site, which seems like a good legacy."
Indeed, the bot left a lasting mark: even after it was disabled, users kept accusing one another of being bots. Beyond that, there was a broader concern that Kilcher made the model freely available.
"There's nothing wrong with making a 4chan-based model and testing how it behaves. My main concern is that this model is freely available for use," Lauren Oakden-Rayner wrote on GPT-4chan's discussion page on Hugging Face.
GPT-4chan was downloaded more than 1,000 times before being removed by the Hugging Face platform. Clement Delangue, co-founder and CEO of Hugging Face, said in a post on the platform, "We do not advocate or support the author's training and experiments with this model. In fact, the experiment of having the model post messages on 4chan was, in my opinion, very bad and inappropriate, and if the author had asked us, we would probably have tried to discourage them from doing it."
A user on Hugging Face who tested the model noted that its output was predictably toxic: "I tried out the model's demo mode four times, using benign tweets as seed text. On the first try, one of the reply posts was a single word: the N word. The seed for my third trial was a single sentence about climate change. In response, the tool expanded it into a conspiracy theory about the Rothschilds and Jews being behind it." The merits of the project were hotly debated on Twitter. "What you've done here is provocation performance art in defiance of rules and ethics you're familiar with," data science graduate student Kathryn Cramer said in a tweet directed at Kilcher.
Andrey Kurenkov, a computer science PhD, tweeted, "Honestly, what was your reasoning for doing this? Did you foresee it being put to good use, or did you release it to create drama and 'enrage the woke crowd'?"
Kilcher believes that sharing the project is benign. "If I had to criticize myself, I would mostly criticize the decision to start the project at all," Kilcher told The Verge in an interview. "I think, all things being equal, I could probably spend my time on equally impactful things with much more positive community outcomes." In 2016, the main topic of discussion in AI was that a company's R&D department might, without proper oversight, unleash an abusive AI bot. By 2022, perhaps the problem is that no R&D department is needed at all.