Microsoft’s Bing AI conversation product for the Edge browser has been in the news since it went live last week, but not always for the right reasons. Our first impressions were positive: it effortlessly provided fitness schedules, trip itineraries, and more.
Users soon began to notice, however, that Bing’s bot provided false information, chastised them for wasting its time, and even displayed “unhinged” behaviour. In one peculiar exchange, it declined to provide showtimes for Avatar: The Way of Water, claiming the film hadn’t been released yet because, by its reckoning, it was still 2022. When the user tried to correct it, Bing labelled them “unreasonable and stubborn,” among other things.
Microsoft has since published a blog post outlining what has been going on and how it is resolving the problems. The company said that it initially didn’t envision Bing’s AI being used for “broad world exploration and for social enjoyment.”
Bing subreddit has quite a few examples of new Bing chat going out of control.
Open ended chat in search might prove to be a bad idea at this time!
Captured here as a reminder that there was a time when a major search engine showed this in its results. pic.twitter.com/LiE2HJCV2z
— Vlad (@vladquant) February 13, 2023
According to the post, things can get out of hand during “long, protracted chat sessions of 15 or more questions.” The company warned that Bing “may become repetitive or be prompted/provoked to offer responses that are not always useful or in line with our planned tone.”
That appears to happen because a long chain of questions can make the bot “forget” what it was originally trying to answer. To address this, Microsoft may add a tool that lets you reset the search context or start over.
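Microsoft hasn’t published how Bing’s chat context is managed, but the “forgetting” behaviour is consistent with a rolling context window: only the most recent turns fit into what the model sees, so the original question can silently drop out of view. Here is a minimal, purely illustrative Python sketch of that failure mode and of the kind of reset control Microsoft describes; all names here are hypothetical, not Bing’s actual internals.

```python
MAX_TURNS = 15  # only this many recent turns fit in the model's context

class ChatSession:
    def __init__(self):
        self.history = []  # (role, text) pairs accumulated over the session

    def ask(self, question: str) -> list:
        self.history.append(("user", question))
        # Only the most recent turns are sent to the model; earlier ones,
        # including the original question, silently fall out of view.
        return self.history[-MAX_TURNS:]

    def reset(self) -> None:
        # The "start over" control Microsoft says it may add.
        self.history.clear()

session = ChatSession()
for i in range(20):
    window = session.ask(f"question {i}")
print(window[0])   # ('user', 'question 5'): the first question is gone
session.reset()    # wipe the context and begin a fresh conversation
```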
The second problem is more complicated and intriguing. According to Microsoft, “the model often tries to reply or reflect in the tone in which it is being asked to deliver answers, which can lead to a style we didn’t design.” Since it takes a lot of deliberate prompting to trigger that behaviour, the engineers believe they might be able to fix the issue by giving users more control.
Despite these shortcomings, Microsoft says testers have generally given Bing’s AI good marks on citations and references for search, though it needs to improve with “extremely recent data like live sports scores.” The company also aims to improve factual responses for things like financial reports by quadrupling the amount of grounding data it provides.
Finally, the company said it will “include a toggle that offers you more control on the precision vs. inventiveness of the answer to personalise it to your inquiry.”
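Microsoft hasn’t said how that toggle would work under the hood, but in LLM systems such a control is commonly mapped to the sampling temperature: low values make the output more deterministic, while high values flatten the distribution and produce more varied answers. A minimal, purely illustrative Python sketch, with hypothetical candidate words and scores:

```python
import math
import random

def sample_word(scores: dict, temperature: float) -> str:
    """Softmax sampling over candidate-word scores at a given temperature."""
    weights = {w: math.exp(s / temperature) for w, s in scores.items()}
    total = sum(weights.values())
    r = random.uniform(0, total)
    for word, weight in weights.items():
        r -= weight
        if r <= 0:
            return word
    return word  # fallback for floating-point edge cases

scores = {"2023": 2.0, "2022": 1.0, "unsure": 0.5}

# Low temperature ("precision"): strongly favours the top-scoring word.
print(sample_word(scores, temperature=0.2))
# High temperature ("inventiveness"): flatter distribution, more variety.
print(sample_word(scores, temperature=2.0))
```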
The Bing team thanked users for their testing so far, saying it “helps us enhance the product for everyone,” and expressed surprise that some people engage in chat sessions lasting up to two hours. With users certain to be just as diligent in trying to break any future improvements, the coming months could be a fascinating ride.