A controversial claim made in a leaked Google employee document is gaining traction in Silicon Valley and beyond: Big Tech’s lead in AI is rapidly eroding.
The tech research firm SemiAnalysis published the memo on its website last Thursday. It quickly became a top story on AI forums like HackerNews and Reddit’s /r/MachineLearning community, with over 2.6 million members.
A Google representative acknowledged the memo’s authenticity but clarified that it reflected the views of a single high-ranking employee and not the company at large.
The memo said:
“We’ve done a lot of looking over our shoulders at OpenAI. But the uncomfortable truth is, we aren’t positioned to win this arms race, and neither is OpenAI. I’m talking, of course, about open source. Plainly put, they are lapping us. While our models still hold a slight edge in quality, the gap is closing astonishingly quickly.”
After years of AI development by tech corporations locked in a secretive arms race, OpenAI broke through with ChatGPT. Since then, the demand for, and quality of, generative AI systems that produce material in response to a human prompt have increased dramatically.
In January, OpenAI announced a massive partnership with Google’s rival Microsoft. Google quickly responded with its own chatbot, Bard. But even as these firms race to incorporate AI into many aspects of their operations, there is no guarantee the technology will strengthen their hold on the market.
The Google employee claimed that both Google and Microsoft had ignored the “open-source” programming community and businesses that use freely available AI code and models to develop smaller, more practical projects.
Experts and analysts in the field generally agreed with the memo’s cautionary tone. OpenAI co-founder Andrej Karpathy, who returned to the firm in February, said on Twitter on Saturday that an industry shakeup is beginning.
He said the AI ecosystem is “experiencing early signs of a Cambrian explosion,” referring to a period in Earth’s history, more than 500 million years ago, when life evolved rapidly and in great variety.
The term “open source” describes programs whose source code is freely available online. Whereas most major software companies guard their code as a trade secret, members of open-source communities are free to inspect, modify, and collaborate on projects. The Firefox web browser and the VLC media player are two examples of widely used open-source software.
Earlier this year, an anonymous user on the online forum 4chan leaked LLaMA, Facebook owner Meta’s equivalent of ChatGPT, before it had been formally released to the public. It was a vast and unexpected gift to the open-source AI community.
Thanks to the leak, open-source AI developers now have a starting point for their own, more tailored projects.
Pedro Domingos, a professor emeritus of computer science at the University of Washington, tweeted on May 5:
“TL;DR: AI can’t be stopped because anyone can play with it, and the whole discussion of ‘guardrails’ and ‘moratoria’ is academic.”
The quantity of data required to train an AI system was once considered out of reach for independent programmers.
Even though GPT-4, OpenAI’s flagship model, remains at the forefront of the industry, not every AI product needs to be built on the vast troves of data it was trained on, according to Simon Willison, a programmer, tech analyst, and blogger who spoke with NBC News.
Willison said:
“I don’t think I need something as powerful as GPT-4 for a lot of things that I want to do. I want models that can do the thing that Bing and Bard does, where if it doesn’t know something, it can run a search.”
He added:
“The open question I have right now is, how small can the model be while still being useful? That’s something which the open source community is figuring out really, really quickly.”
Mark Riedl, a professor at the Georgia Institute of Technology, said the public would likely benefit from large tech corporations ceding their AI edge to individual programmers and small companies, though the shift also carries the risk of harmful exploitation.
He said:
“Largely, I think people are trying to do good with these things, make people more productive or make experiences better. You don’t want a monopoly or even a small set of companies controlling everything. And I think you’ll see greater creativity by putting these tools into the hands of more people.”
Riedl added:
“It now becomes the question of what people will use these things for. There’s no restrictions on making specialized models designed specifically to create toxic material, misinformation, or spread hate on the internet.”