The technology behind OpenAI’s ChatGPT, Microsoft’s (MSFT) Bing, and Google’s (GOOG, GOOGL) Bard is known as generative artificial intelligence (AI). Generative AI, so named because it creates “new” material from data gathered across the web, is coming under growing scrutiny from academics and everyday users alike.
Concerns that the software could help students cheat on exams, and that it sometimes gives users strange or erroneous answers, are raising questions about the platforms’ accuracy and capabilities. Some observers are also asking whether the products were released too soon.
“The genie is out of the bottle,” Rajesh Kandaswamy, distinguished VP and fellow at research firm Gartner, told Yahoo Finance. “How are you going to keep them under control now?”
In an effort to rein in the chatbot’s unusual responses, Microsoft has added new restrictions to Bing, which is still in limited preview, capping the number of queries users can make per session and per day. The idea is that the bot will behave less erratically if users ask it fewer questions. Google, meanwhile, continues to test its Bard software with a small group of users it considers trustworthy.
“It varies quite a bit what it means for people to adopt [these technologies] or not,” Kandaswamy said. “[The backlash] will reinforce the idea that AI is unreliable and scary.”
Don’t expect the criticism to end the hype surrounding generative AI any time soon, though. It’s actually just getting started.
Users Are Increasingly Embracing Generative AI
What makes generative AI platforms like ChatGPT and Bing so intriguing is that they respond to user questions in a human-like way. Ask Bing whether the Mets will win the 2023 World Series, and it will give you the odds of that happening, the other teams contending for the championship, and the key players the Mets signed during the offseason.
That kind of feedback is what makes chatbots useful. But it’s all the more startling when they’re wrong or respond oddly, as when Bing falsely told one user that it could watch Microsoft employees’ webcams.
“This was inevitable insofar as we’re still very far away…from having a truly…human-like A.I. system,” said Yoon Kim, a professor of electrical engineering and computer science at the Massachusetts Institute of Technology. These systems, in other words, clearly have limitations, and those limitations were bound to show.
The fact that these drawbacks are becoming more widely known, however, does not imply that OpenAI, Microsoft, or Google will scale back their efforts.
“There will be those who will examine this untamed power and attempt to harness it. It’s always the case,” Kandaswamy said. “Even before the general public fully adopts it, the technology will continue to advance. Then it will reach a stage where it is secure enough for widespread adoption.”
Millions of people already use Bing and ChatGPT. ChatGPT reached 100 million users after its public launch in late 2022, growing faster than the short-form video app TikTok did. Bing? Microsoft’s preview has already attracted about 1 million users from 169 countries. The company also announced on Wednesday that its chatbot would soon be available on mobile devices running both iOS and Android.
The Criticism Will Continue
As more consumers sign up for these services and pepper them with queries, we’ll probably continue to see clumsy, incorrect answers. And that will bring more criticism.
“This is what happens when you employ an experimental tool for something serious,” said Douglas Rushkoff, a professor of media studies at Queens College of the City University of New York.
“Most artificial intelligences are essentially probability engines that attempt to reproduce the past using what has already occurred,” he said. “They don’t take into account a lot of other factors, such as facts or copyright. So the issue here is how the AI is being used, not the AI itself. Not every technology is appropriate for every use.”
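Rushkoff’s “probability engine” framing can be illustrated with a toy sketch: a bigram model that, by construction, can only emit word-to-word transitions it has already seen in its training text. (This is a deliberately simplified stand-in; ChatGPT and its peers use large neural networks, and the corpus and function names below are invented for illustration, but the underlying idea of sampling likely next words from past data is the same.)

```python
import random
from collections import defaultdict, Counter

# Tiny "training" corpus standing in for the web-scale text real chatbots learn from.
corpus = "the cat sat on the mat and the dog sat on the rug".split()

# Count, for each word, which words followed it in the past.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def generate(start, length=6, seed=0):
    """Sample a sequence by repeatedly picking a probable next word.

    The model can only recombine transitions it has seen before; it has no
    notion of facts or copyright, only frequencies from its training data.
    """
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        followers = transitions.get(words[-1])
        if not followers:  # dead end: this word never had a successor
            break
        choices, weights = zip(*followers.items())
        words.append(rng.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("the"))
```

Every pair of adjacent words in the output is guaranteed to have occurred in the training text, which is the sense in which such a system “reproduces the past” rather than reasoning about the present.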
The concerns aren’t only about weird or snarky responses from generative AI, however. There are also questions about the content that Bing, ChatGPT, and Bard have been trained on. After all, these generative systems draw information from the web, some of which includes posts from social media users and articles written by news organizations.
If AI platforms take content from news organizations and summarize it for users, readers won’t have to visit those organizations’ websites, which will cut into their advertising revenue. And if you’re an independent artist whose work a chatbot was trained on, do you have any recourse? That’s still not entirely clear.
Microsoft, its subsidiary GitHub, and OpenAI are already being sued over their use of third-party computer code. Media outlets such as Bloomberg and The Wall Street Journal have also criticized chatbot developers for using their work to train models without paying for it.
“I don’t necessarily see these difficulties getting fixed, perhaps, in the next month or two,” Kandaswamy said.