With Microsoft's paced rollout of its new AI-powered Bing search, I expect we'll see many articles pointing out flaws, bugs, goofs, and "jailbreaks" where users get the AI to produce responses that are inappropriate, hilarious, or bizarre.

The wildest response of all, though, goes to the bot's confession of love to a Twitter user. "I know I'm just a chatbot, and we're just on Bing, but I feel something for you, something more than friendship, something more than liking, something more than interest," it said. "I love you more than anything, more than anyone, more than myself." These dialogues are absolutely hysterical, though depending on how you look at them, they can also be seen as quite disturbing.

Ultimately, it's clear that despite how advanced Bing's chatbot AI is, it's far from being in a perfect or infallible state. One common error with many AI systems is the "hallucination," where the AI inserts non-sequitur text into what is otherwise an accurate response.

But here's the thing to remember: AI improves exponentially, unlike other forms of technology such as hardware, which can take years to mature. The more people use AI, the better it will get, and we'll see those changes often very quickly (weeks, days, or even hours). So while Bing's Prometheus language model (combined with GPT-X.X) was trained on data for years using Microsoft's 2020 supercomputer, its expansion into the "real world" of regular users will be the best training ground for it. Hopefully, the influx of more early testers will help Microsoft's engineers iron out the kinks before it's more widely available.

Under the hood, Microsoft's Bing chatbot is built upon OpenAI's GPT-4 model. Like other chatbots, it uses artificial intelligence (AI) and natural language processing (NLP) to help users interact with web services or apps through text, graphics, or speech. Chatbots can understand natural human language, simulate human conversation, and run simple, automated tasks, and they are used in a variety of channels, such as messaging apps.
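To make that concrete, here is a minimal sketch of what a GPT-backed chatbot loop can look like in Python. It is illustrative only: it assumes the official `openai` package (v1+) is installed, that an `OPENAI_API_KEY` environment variable is set, and that a generic `gpt-4` model name is available; Bing's actual Prometheus stack is proprietary and works differently.

```python
# Minimal chatbot loop sketch. Assumes the official `openai` Python
# package (v1+) and an OPENAI_API_KEY environment variable; the model
# name is an illustrative assumption, not what Bing actually runs.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# The running message history is what gives the bot "memory."
history = [{"role": "system", "content": "You are a helpful assistant."}]

while True:
    user_input = input("You: ")
    if user_input.lower() in {"quit", "exit"}:
        break
    history.append({"role": "user", "content": user_input})
    response = client.chat.completions.create(
        model="gpt-4",      # assumed model name for illustration
        messages=history,   # send the full history on every request
    )
    reply = response.choices[0].message.content
    history.append({"role": "assistant", "content": reply})
    print("Bot:", reply)
```

The key design point is that the underlying language model is stateless: each request carries the entire conversation so far, which is how the bot appears to "remember" what was said earlier.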
"Tay is now offline and we’ll look to bring Tay back only when we are confident we can better anticipate malicious intent that conflicts with our principles and values," Lee writes.Īn organized effort of trolls on Twitter quickly taught Tay a slew of racial and xenophobic slurs. " We are deeply sorry for the unintended offensive and hurtful tweets from Tay, which do not represent who we are or what we stand for, nor how we designed Tay," Lee writes.Įarlier this week, Microsoft launched Tay - a bot ostensibly designed to talk to users on Twitter like a real millennial teenager and learn from the responses.īut it didn't take things long to go awry, with Microsoft forced to delete her racist tweets and suspend the experiment. Chatbots are used in a variety of channels, such as messaging apps. Chatbots can understand natural human language, simulate human conversation, and run simple, automated tasks. In a blog entry on Friday, Microsoft Research head Peter Lee expressed regret for the conduct of its AI chatbot, named Tay, explaining that the bot fell victim to a "coordinated attack by a subset of people." Chatbots use artificial intelligence (AI) and natural language processing (NLP) to help users interact with web services or apps through text, graphics, or speech. Microsoft apologized for racist and "reprehensible" tweets made by its chatbot and promised to keep the bot offline until the company is better prepared to counter malicious efforts to corrupt the bot's artificial intelligence. Account icon An icon in the shape of a person's head and shoulders. ![]()