In November 2022, when OpenAI unleashed the “beast” that is ChatGPT, competition between AI-focused technology companies accelerated sharply.
Market competition shapes the price and quality of products and services, as well as the pace of innovation, and in the AI industry that competition has been remarkable. However, some experts believe that we are deploying the world’s most advanced technology far too rapidly.
This could hinder our ability to detect significant problems before they cause damage, which would have profound societal repercussions, especially since we cannot predict the capabilities of something that may one day be able to train itself.
But AI is nothing new, and while ChatGPT may have surprised many, the seeds of the current uproar over this technology were sown years ago.
Is AI new?
Modern AI can be traced back to the 1950s, when Alan Turing explored whether machines could think and how their intelligence might be evaluated.
Limited resources and computing capacity at the time held back expansion and adoption. Advances in machine learning, neural networks and data accessibility then fueled a revival of AI in the early 2000s, prompting numerous industries to adopt it. The finance and telecommunications sectors, for example, used it for fraud detection and data analytics.
Later, the proliferation of data, the advent of cloud computing and the availability of vast computing resources all contributed to the development of more capable AI algorithms, greatly expanding what was possible, from image and video recognition to targeted advertising.
Why is AI currently receiving so much attention? AI has been utilized in social media for years to suggest pertinent posts, articles, videos, and advertisements. According to technology ethicist Tristan Harris, social media is humanity’s “first contact” with artificial intelligence.
And humanity has learned that AI-powered algorithms on social media platforms can propagate disinformation and falsehoods, polarizing public opinion and nurturing online echo chambers. In both the 2016 US presidential election and the UK Brexit vote, campaigns spent money to target voters online.
Both events increased public awareness of artificial intelligence and of how technology can be used to manipulate political outcomes, and these high-profile incidents sparked concerns about the technology’s evolving capabilities.
However, a novel type of AI emerged in 2017: the transformer, a machine learning model that processes language in order to generate its own text and carry on conversations.
This innovation enabled the development of large language models, such as ChatGPT, that can comprehend and generate text resembling human writing. Transformer-based models, such as OpenAI’s GPT (Generative Pre-trained Transformer), have demonstrated remarkable text generation capabilities.
Transformers are distinct in that they learn from the information they ingest, which enables them to potentially develop capabilities that engineers did not program into them.
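To give a sense of how accessible this technology has become, the short Python sketch below uses the open-source Hugging Face transformers library and the small, publicly available GPT-2 model (both are assumptions made purely for illustration, not tools discussed in this article) to continue a text prompt:

```python
# A minimal sketch, assuming the Hugging Face `transformers` library and the
# small, publicly available GPT-2 model; neither is referenced in the article.
from transformers import pipeline

# Build a text-generation pipeline around a transformer-based language model.
generator = pipeline("text-generation", model="gpt2")

# The model continues the prompt one predicted token at a time, which is how
# transformer-based language models produce fluent, human-like text.
result = generator("Artificial intelligence is", max_new_tokens=30)
print(result[0]["generated_text"])
```

Larger systems such as ChatGPT work on the same underlying principle, scaled up to far more parameters and training data.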
A bigger issue
The processing power now available and the capabilities of the most recent AI models suggest that unresolved concerns about the impact of social media on society, particularly on younger generations, will only intensify.
Lucy Batley, the CEO of Traction Industries, a private company that helps businesses integrate AI into their operations, predicts that the type of analysis that social media companies can perform on our personal data – and the level of detail they can extract – is “going to be automated and accelerated to a point where big tech moguls will potentially know more about us than we consciously do about ourselves”.
Meanwhile, quantum computing, which has advanced significantly in recent years, may come to outperform conventional computers at specific tasks. This, according to Batley, would “allow the development of much more capable AI systems to probe multiple aspects of our lives”.
The situation of “big tech” companies and AI-leading nations can be compared to the “prisoner’s dilemma” described by game theorists: a situation in which two parties must each choose whether to cooperate or to undermine one another. Each faces a difficult choice between an outcome that benefits only one side (and betrayal typically yields the greater individual reward) and one that has the potential for mutual gain.
Consider a scenario in which two technology corporations are competing. They must choose whether to collaborate by sharing their research on cutting-edge technology or to keep that research secret. If both businesses share, they could make significant strides forward together. However, if Company A shares while Company B does not, Company A is likely to lose its competitive advantage, as the sketch below illustrates.
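As a rough sketch with hypothetical payoff numbers (chosen purely for illustration, not taken from any real analysis), the dilemma can be written out as a simple payoff table:

```python
# A minimal sketch of the dilemma with hypothetical payoff values
# (higher is better for the company receiving the payoff).
payoffs = {
    # (Company A's choice, Company B's choice): (A's payoff, B's payoff)
    ("share", "share"): (3, 3),  # pooled research: big strides for both
    ("share", "keep"):  (0, 5),  # A gives away its edge, B exploits it
    ("keep", "share"):  (5, 0),  # the reverse
    ("keep", "keep"):   (1, 1),  # both stay secretive and progress slowly
}

# Whatever Company B does, Company A scores higher by keeping its research
# secret, even though mutual sharing (3, 3) beats mutual secrecy (1, 1).
for b_choice in ("share", "keep"):
    a_if_share = payoffs[("share", b_choice)][0]
    a_if_keep = payoffs[("keep", b_choice)][0]
    print(f"If B chooses to {b_choice}: A gets {a_if_share} by sharing "
          f"and {a_if_keep} by keeping its research secret")
```

That tension between the best collective outcome and the best individual one is what pushes each player towards secrecy.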
This is comparable to the situation the United States currently finds itself in. It is attempting to accelerate AI development to outpace foreign competition, and as a result policymakers have been slow to discuss regulation of artificial intelligence, which would help safeguard society from harms caused by the technology’s use.
Uncharted territory
We must avoid a situation in which AI contributes to societal problems in this way. We have a responsibility to understand these risks, and we need a concerted effort to avoid repeating the mistakes made with social media. We missed the window for regulating social media: by the time the discussion entered the public sphere, social platforms had already become intertwined with the media, elections, businesses and users’ lives.
Later this year, the United Kingdom will host the first major global summit on AI safety. This is an opportunity for policymakers and world leaders to consider the immediate and future risks posed by artificial intelligence, and how those risks can be mitigated through a globally coordinated strategy. It is also a chance to invite a broader range of society’s voices into the discussion, bringing more diverse perspectives to a complex issue that affects everyone.
AI has enormous potential to improve the quality of life on Earth, but we all have a responsibility to promote the development of responsible AI systems and to urge companies to adhere to ethical principles within regulatory frameworks. The most effective time to shape a medium is at the beginning of its journey.