Research shows a sharp drop in AI excitement this year and a rise in concerns about its social impact.
The U.S. is losing trust in AI.
More than half (52%) of Americans are now more “concerned” than “excited” about the effects of AI, according to a new Pew Research Center survey. Another 36% of respondents are equally excited and concerned, while 10% are more excited than concerned. The Pew data show a rapid shift in public opinion about artificial intelligence: in December 2022, only 38% of those surveyed were more concerned than excited about the technology. That’s a 14-point increase in just eight months (Pew conducted the most recent survey in early August).
Context may be crucial here. The first Pew survey was conducted shortly after the launch of OpenAI’s ChatGPT chatbot, and it’s possible that some respondents were aware of the bot’s “surprising” conversational skills, or had even used it themselves. In the months since, however, the mainstream media has given AI’s near- and far-term dangers a great deal of coverage. Quite possibly, some respondents were also aware of prominent AI researchers like Geoffrey Hinton sounding the alarm about AI’s risks, or had read the industry-wide open letter urging a pause on development of the most powerful AI systems.
In fact, the latest results suggest that the more people learn about new AI systems, the more skeptical of them they become. Respondents who had “heard a lot about AI” are now 16 percentage points more likely than they were in December 2022 to express more concern than excitement about the technology. Pew reports that among this group, concern now outweighs excitement by 47% to 15%.
All of this points to the need for sensible regulation, so that we can reap AI’s benefits while avoiding its worst harms. The government could, for instance, require tech companies to apply for a permit before training AI models above a certain parameter count. It could also mandate that image generators embed an irremovable watermark in every image they produce.
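To make the watermark idea concrete, here is a minimal toy sketch of stamping a visible label onto a generated image with the Pillow library. This is my own illustration, not any proposed standard: the file names and label are hypothetical, and real provenance schemes rely on invisible, far more tamper-resistant watermarks.

```python
# Toy sketch: overlay a visible "AI-generated" label on an image with Pillow.
# A mandated "irremovable" watermark would use robust, invisible techniques;
# this only illustrates the basic idea of marking generated output.
from PIL import Image, ImageDraw

def stamp_watermark(path_in: str, path_out: str, label: str = "AI-generated") -> None:
    img = Image.open(path_in).convert("RGBA")
    overlay = Image.new("RGBA", img.size, (0, 0, 0, 0))  # transparent layer
    draw = ImageDraw.Draw(overlay)
    width, height = img.size
    # Semi-transparent white text in the lower-right corner.
    draw.text((width - 130, height - 25), label, fill=(255, 255, 255, 160))
    Image.alpha_composite(img, overlay).convert("RGB").save(path_out)

# Hypothetical file names, for illustration only.
stamp_watermark("generated.png", "generated_watermarked.png")
```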
In a separate survey, Pew researchers found that 67% of people who had “heard of ChatGPT” were concerned that the government would not go far enough in regulating chatbot use. On the other hand, 31% were concerned that the government would go too far, possibly stifling innovation in the burgeoning industry.
THE AI INDUSTRY IS A FINANCIAL PIPELINE FOR NVIDIA
During the mobile computing revolution, it was Apple’s hardware (iPhones) that generated the most revenue. Similarly, in the AI revolution, the company supplying most of the hardware, Nvidia, is reaping the biggest profits. Nvidia, which makes the $10,000 A100 graphics processing units used to train 95% of the largest AI models, anticipated the AI boom years ago, well before ChatGPT, and invested accordingly in R&D. The company also built a software layer that moves data quickly within a GPU and between GPUs on different servers, so that the chips share compute tasks evenly and stay busy continuously.
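The newsletter doesn’t go into that software layer, but a rough sense of the traffic it handles can be sketched in code. The example below is an assumption-laden illustration, not Nvidia’s actual stack: it uses PyTorch with Nvidia’s NCCL communication backend to have each GPU compute on its own shard of data, then merge the partial results in a single collective call.

```python
# Minimal multi-GPU sketch: each process drives one GPU, computes a partial
# result, then all GPUs combine results via NCCL (Nvidia's collective
# communication library), which moves the data over NVLink and the network.
import os
import torch
import torch.distributed as dist

def main() -> None:
    # torchrun sets RANK, WORLD_SIZE, and LOCAL_RANK for each GPU process.
    dist.init_process_group(backend="nccl")
    torch.cuda.set_device(int(os.environ["LOCAL_RANK"]))
    rank = dist.get_rank()

    # Each GPU works on its own shard of the problem...
    partial = torch.ones(1024, device="cuda") * (rank + 1)

    # ...then one collective call sums the shards across every GPU, so all
    # chips end up holding the same combined result.
    dist.all_reduce(partial, op=dist.ReduceOp.SUM)

    if rank == 0:
        print("combined value per element:", partial[0].item())
    dist.destroy_process_group()

if __name__ == "__main__":
    main()  # launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
```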
To scale up their models and develop new, lucrative uses for AI, this year’s AI startups have been rushing to acquire Nvidia servers. According to an article by Dylan Patel in the SemiAnalysis newsletter, the AI industry has split into the “GPU-poor” and the “GPU-rich,” with only the largest and wealthiest companies able to afford enough Nvidia servers to do meaningful, boundary-pushing research. That is essentially true.
Nvidia’s stock, boosted this week by the announcement of a partnership with Google, is up 234% this year, making it the best-performing S&P 500 stock of 2023. The share price has swung wildly, though, as investors struggle to predict how fast and how far Nvidia’s rocket ride can travel. Ben Bajarin, chief analyst at Creative Strategies, tells me that a large number of new funds are researching or taking positions in Nvidia; the company’s advantageous position is expanding its investor pool, but that can sometimes add to the volatility.
CHATGPT: COINCIDENCE OR NOT?
It’s strange how things work in technology. It’s tempting to think of the release of OpenAI’s ChatGPT in November 2022 as the technological equivalent of a “big bang” that ushered in the AI era. However, both OpenAI CEO Sam Altman and chief scientist Ilya Sutskever have publicly acknowledged that the chatbot’s meteoric rise to fame seems to have been accidental, or at least unexpected. Judging by the ChatGPT website in November 2022, OpenAI considered the chatbot little more than a toy for software engineers to play around with. “Well, certainly we didn’t expect the success that we had,” OpenAI COO Brad Lightcap says.
Large language model (LLM) technology wasn’t exactly unheard of before this. At Google I/O 2021, a year and a half before the advent of ChatGPT, Google CEO Sundar Pichai showcased the conversational and task-completing abilities of the company’s LaMDA models. So why hadn’t Google released a chatbot built on LaMDA? Reportedly, Google was deeply worried about the legal and safety implications of doing so.
By adding a user-friendly interface to a large language model, OpenAI may have sparked widespread interest in artificial intelligence. Lightcap says it’s “maybe a little more obvious in retrospect that you make these systems more personal.” In other words: give these systems an interface that feels intuitive to people, and people will use them.