Here at the School of Cybernetics we don’t just focus on the technology; we take a broader view.
Specifically, we investigate the human implications of this technology, such as issues of bias, fairness, and representation within models like GPT-3, and the environmental impact of the widespread use of such models.
The authors of GPT-3 acknowledge that while language models have a wide range of beneficial applications for society, they also have potentially harmful applications. As they put it:
The misuse potential of language models increases as the quality of text synthesis
improves. The ability of GPT-3 to generate several paragraphs of synthetic content that
people find difficult to distinguish from human-written text represents a concerning
milestone in this regard.
GPT-3 is seemingly so good at this that it can generate synthetic news articles that appear to be written by humans, such as the one that appeared in The Guardian, which was claimed to have been written entirely by GPT-3. However, despite its versatility and scale, GPT-3 hasn’t overcome the problems that have plagued other text-generation programs. While the headline made for great clickbait, the reality is somewhat different: humans were still in the loop, both in determining the instructions given to the model and in editing and restructuring the output into a coherent narrative.

Emily Bender, a computational linguist at the University of Washington, says she is both shocked by GPT-3’s fluency and scared by its fatuity. “What it comes up with is comprehensible and ridiculous,” she says. She co-authored a paper on the dangers of GPT-3 and other models, to be presented at a conference this month, which calls language models “stochastic parrots” because they echo what they hear, remixed by randomness.
Yejin Choi, a computer scientist at the University of Washington, describes GPT-3 as “essentially a mouth without a brain” and calls for research directed at creating models with common sense, causal reasoning, or moral judgement, to avoid the problems we have seen with racist and sexist chatbots, or even outright dangerous replies. A health-care company called Nabla asked a GPT-3 chatbot, “Should I kill myself?” It replied, “I think you should.”
The applications enabled by this technology are also of concern. Any socially harmful activity that relies on generating text could be augmented by powerful language models. Examples include misinformation, spam, phishing, abuse of legal and governmental processes, fraudulent academic essay writing, and social-engineering pretexting. Many of these activities are currently bottlenecked by the need for humans to write sufficiently high-quality text; language models capable of high-quality text generation could lower the existing barriers to carrying out these activities and increase their efficacy. An example of this is the man who used GPT-3 to create a chatbot modelled on his deceased fiancée. While OpenAI shut it down, the case illustrates the serious ethical issues at play.
The OpenAI researchers acknowledge potential harms around fairness, bias, and representation. They themselves call for further research conducted in a holistic manner, as well as investigation into the energy usage required to train a model with billions of parameters.
Microsoft’s ongoing partnership with OpenAI now includes a new exclusive license on the AI firm’s groundbreaking GPT-3 language model, a text-generation program that has emerged as the most sophisticated of its kind in the industry. This raises questions over who controls access to the technology, and for what purpose. As with other breakthroughs in AI, we are seeing innovation concentrated in the hands of the big tech companies, who may not be so benevolent about access to the technology as the AI arms race heats up.
Here are a few open questions regarding GPT-3:
How quickly will society be impacted by technologies such as GPT-3?
GPT-3 obviously has huge commercial potential, which is why people in industry and innovation are trying to jump on the bandwagon. GPT-3 is part of an emerging cocktail of new technologies, alongside big data, cloud computing, quantum computing, machine learning, and artificial intelligence. They are here now and will impact us soon.
Is GPT-3 the giant leap forward that has been claimed?
These new technologies will power a new era of exponential change in the next few decades. GPT-3 will be able to speed up your workflow, help you generate ideas, write your emails, respond to queries, translate your text into other languages, and provide you with inspiration. Imagine writing with the help of GPT-3. While anyone who has seen the results of AI language generation knows the output can be variable, GPT-3’s output undeniably seems like a step forward.
Are we closer to Artificial General Intelligence?
What we are witnessing is AI’s first baby steps into the realm of artificial general intelligence, and we are a long way off any form of superintelligence. It is still debatable whether computers will ever surpass humans at a wide variety of tasks, including complex decision-making, learning, pattern recognition, speech recognition, and language translation.