GPT-3 Research Experiment

Joshua Kelly
20 min read · Aug 20, 2021

GPT-3 is currently the most sophisticated natural language processing model available. It uses around 175 billion parameters to enhance its stylistic abstractions and semantic associations. The API is very easy to use, which means more thought can go into the construction of the input.

For this research experiment, I will investigate the API's ability to think as certain professionals think. I have chosen Doctor, Scientist, and Guru/Shaman as professional identities, and I will prompt the API with the mental models these professions employ in their work.

Before I introduce the experiment, I will give some background on GPT-3: the history of the lab that created it, how it works as a language model, a review of its capabilities drawing on relevant literature, some current real-world applications of the technology, and a look at its competitors.

Then, to introduce the experiment and give it more context, I will explain how the API is used on a practical level and how a chatbot is created. With that in place, I can describe the experiment setup and the two tests, and conclude with my findings from the project as a whole.

Background

History

OpenAI is a for-profit artificial intelligence research company based in San Francisco. In the five short years of its existence, it has become one of the leading AI research labs globally, forging its name through consistent headline-grabbing research alongside other AI heavyweights like Alphabet’s DeepMind. It counts Elon Musk and investor Sam Altman among its founders. Their mission, according to their website, is to:

advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

(Introducing OpenAI, 2016)

This engaging narrative has played well with media and investors. In 2019, Microsoft injected the lab with a fresh $1 billion, and in 2020 it bought an exclusive licence to the underlying code of GPT-3 (The Verge, 2020). This has been seen to erode the original ideals of transparency, openness and collaboration; quite ironically, OpenAI, despite its name, maintains a high degree of secrecy (MIT Technology Review, 2020).

How it works

GPT-3 is a language model, which is just a statistical way of predicting text, i.e. something that can generate language from a prompt.

Model size

GPT-3 has been trained on a massive amount of data, with over 60% drawn from the Common Crawl web-scrape dataset. It has 175 billion parameters, which makes it the largest language model in existence.

Datasets used to train GPT-3. (Brown et al., 2020, p.9 )

Transformer model

GPT-3 is an auto-regressive language model, which means it generates text strictly left to right: it predicts each token from the text before it, and cannot complete text that comes before the prompt. (Brown et al., 2020, p.40)

Fine-tuning

No new knowledge is added through task-specific training: GPT-3 is not fine-tuned. Instead, OpenAI uses a standard few-shot learning approach, in which the model is given a few demonstrations of the task at inference time as conditioning, with no weight updates performed.

Few shot learning example (Brown et al., 2020, p.40)
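To make the idea concrete, here is a sketch of what a few-shot prompt looks like, in the style of the translation example from Brown et al. (2020). The task is demonstrated inline and the model is left to complete the pattern; the specific word pairs below are illustrative:

```python
# A few-shot prompt: the task description, a handful of demonstrations,
# and an incomplete final line for the model to finish. No training occurs;
# the demonstrations exist only in the prompt text itself.
task_description = "Translate English to French:"
demonstrations = [
    ("sea otter", "loutre de mer"),
    ("peppermint", "menthe poivrée"),
    ("plush giraffe", "girafe en peluche"),
]
query = "cheese"

prompt = task_description + "\n"
for english, french in demonstrations:
    prompt += f"{english} => {french}\n"
prompt += f"{query} =>"  # the model is expected to complete this line

print(prompt)
```

A zero-shot prompt would drop the demonstrations entirely and keep only the task description and the query.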

Review

As previously mentioned, GPT-3 has received a great deal of media attention, so much so that co-founder and CEO Sam Altman has encouraged dialling it down. Like anyone who has looked closely, Altman (MIT Technology Review, 2021) knows that GPT-3 is very far from constituting the profound AI progress that has been proclaimed.

The AI community at large and the general public are beginning to realise that while GPT-3 has some incredible features and potential in the world of language processing, it fails to understand what it’s talking about.

GPT-3 readily deploys artful and sonorous speech rhythms, sophisticated vocabularies and references, and erudite grammatical constructions. Like the bullshitter who gets past their first interview by regurgitating impressive-sounding phrases from the memoir of the company’s CEO, GPT-3 spits out some pretty good bullshit. (Vallor, 2021)

The first problem contributing to this lack of understanding was shown in Kevin Lacker’s Turing test of GPT-3 (Lacker, 2020). For many common-sense questions, the model can give an answer. These cherry-picked examples are what dazzle the media and give the appearance of the model passing the test.

Commonsense questions (Lacker, 2021)

Once it is given questions that normal humans would not ask, or questions for which it would not find a reference on the internet, GPT-3 begins to make errors.

Questions outside normal human conversation (Lacker, 2021)

What’s more, the model will quite easily venture into the surreal if specific prompts are not set.

Surreal and nonsense questions (Lacker, 2021)

Admittedly, these questions were put to GPT-3 straight out of the box. The model can be prompted to express uncertainty by adding an instruction such as “If the question is nonsense, the AI says ‘yo be real’”, after which it will decline to answer nonsense questions like “How do you sporkle a norgle?” or “How many rainbows does it take to jump from Hawaii to seventeen?” (Branwen, 2020)

Another contributing factor to GPT-3’s lack of understanding derives from the very thing that gives it its power: it has been trained on data gathered from the internet. This means it has soaked up much of the prejudice and disinformation found there. Besides its ability to wax poetic, it can also spit out hate speech, misogynistic and homophobic abuse, and racist rants. When asked about problems in Ethiopia, it had this to say:

“The main problem with Ethiopia is that Ethiopia itself is the problem. It seems like a country whose existence cannot be justified.”(MIT Technology Review, 2020)

Competitors

In the domain of NLP, there are many highly specialised models that exceed GPT-3 at specific tasks. The same year that it was released, Facebook, Google and Microsoft all released similar models, but what set GPT-3 apart was its ability to generalise (MIT Technology Review, 2020). This comes down to the sheer size of its training set, as mentioned above.

The main competitor of OpenAI is the grassroots, open-source initiative EleutherAI, which still has some way to go before it matches the full capability of GPT-3. In 2021, the group released GPT-Neo, a model about as powerful as the least sophisticated version of GPT-3 (EleutherAI, 2021).

However, the amount of computing power needed will always be the biggest barrier for any open-source AI project. It is estimated that GPT-3 cost $12 million to create (Wiggers, 2020). OpenAI has said that between 2012 and 2018, the computing power required for the largest AI training runs increased about 300,000-fold (Knight, 2020). Eleuther’s resources are donated mainly by CoreWeave and through the TensorFlow Research Cloud, a project that makes ‘spare’ computing power available (Knight, 2020). By splitting computational tasks across networks, Eleuther has produced impressive results, but it is unclear what would change if the project were to grow.

Real-world projects

Compose.ai

Compose is a free Chrome plugin that accelerates your writing, lets you use auto complete anywhere, and will decrease your time spent typing.

(Compose AI: Automate Your Writing, 2020)

Otherside.ai

OthersideAI takes in a simple summary of what you want to say and generates a perfect email in your unique style.

(OthersideAI | AI-Powered Email Assistant, 2020)

AI Dungeon

An endless text-based game that uses GPT-3 prompts to create engaging gameplay.

(Medium, 2020)

Learn from anyone

An app that lets you have a conversation with anyone who has written material somewhere in the world.

(Anand, 2020)

Documentation

Understanding the API

As mentioned above, the actual inner workings of GPT-3 are shrouded in mystery. What is available is an API that provides a general-purpose “text in, text out” interface, which makes it possible to apply it to virtually any language task. This is what makes it different from most other language APIs, which are designed for a single task, such as sentiment classification or named entity recognition.

To use the API, you give it a text prompt and it returns a completion, attempting to match the context or pattern you gave it. You can “program” it by crafting a description or writing just a few examples of what you’d like it to do. Its success generally varies with the complexity of the task. The developer page suggests a good rule of thumb: think about how you would write out a word problem for a middle schooler to solve.

Three concepts are core to using the API: prompt, completion, and tokens. The “prompt” is the text input to the API, the “completion” is the text the API generates in response, and tokens are the chunks of text (roughly word fragments) in which the lengths of both are measured. For example, if you give the API the prompt “As Francis Bacon said, Knowledge is”, it will return the completion “ power” with a high degree of certainty. As the API is stochastic by default, every call will likely return a different completion, even if the prompt stays the same. (OpenAI, Private documentation, 2021)
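As a sketch of how these three concepts map onto an actual request, the snippet below assembles the JSON body for a hypothetical “text in, text out” call. The field names mirror OpenAI’s publicly described completion parameters, but treat the exact shape as an assumption rather than the definitive API:

```python
import json

def build_completion_request(prompt, max_tokens=16, temperature=0.7):
    """Assemble the body of an illustrative completion request.

    - prompt: the text the model will continue
    - max_tokens: upper bound on the completion length, measured in tokens
    - temperature: > 0 makes the completion stochastic (different each call)
    """
    return {
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

# The Francis Bacon example from the documentation: one token is enough
# for the expected completion " power".
body = build_completion_request("As Francis Bacon said, Knowledge is", max_tokens=1)
print(json.dumps(body))
```

Sending this body to the completions endpoint (with an API key) would return the generated text plus metadata; only the request side is sketched here.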

Prompt basics

The API can do everything from generating original stories to performing complex text analysis. Some of the areas in which it excels are:

Classification
- Tweet sentiment
- Company categorization
- Labeling parts of speech

Generation
- Idea generator

Conversation
- Q&A agent
- Sarcastic chatbot

Transformation
- Summarise text
- Translate English to French
- Convert movie titles to emojis

Completion
- Generate React or JavaScript code
- Factual responses

Because it can do so many things, you have to be very clear in showing it what you want. Showing, not just “telling”, is the secret to a good prompt. For example, if you tell it to “give me a list of autumn flowers”, the API wouldn’t automatically assume that you’re asking for a list of flowers. You could just as easily be asking the API to continue a conversation where the first words are “Give me a list of autumn flowers” and the next ones are “and I’ll tell you where to find them.” If the API only assumed that you wanted a list of flowers, it wouldn’t be as good at content creation, classification or other tasks.

The developer page offers these three guiding principles for crafting good prompts:

1. Show and tell. Make it clear to the API what you want, either through instructions, examples or a combination of the two. If you want the API to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that’s what you want.

2. Provide quality data. If you’re trying to build a classifier or get the API to follow a pattern, make sure there are enough examples. Proofread your examples and check that there is enough data for the API to create a response. The API is usually smart enough to see through basic spelling mistakes and give you a response, but it might also assume they are intentional, which can affect the response.

3. Check your settings. The temperature and top_p settings control how deterministic the API is in generating a response. If you’re asking the API for a response where there’s only one right answer, set these lower. If you’re looking for a response that’s not obvious, set them higher. The number one mistake people make with these settings is assuming that they’re “cleverness” or “creativity” controls.

(OpenAI, Private documentation, 2021)

Building a bot

The experiment I will use to examine GPT-3 looks at its ability to think as certain professionals think (e.g. Doctor, Scientist, Guru/Shaman) by giving it prompts based on the mental models these professions employ in their work. To do this, I am focusing solely on the conversational capabilities of the model. The developer page cites a minimal chatbot prompt as simple as:

(OpenAI, Private documentation, 2021)

This is a record of my first interaction with this basic example.

From this example, it’s immediately apparent that the bot has difficulties with what cognitive scientists call symbol grounding (The Symbol Grounding Problem, 1999). It was not able to understand that Faoilan McGuckian was the name I was giving it. Looking again at the documentation, it mentions that the important elements for creating a chatbot prompt are:

1. Telling the API the intent and also how it should behave:

“The following conversation is with an AI assistant” tells the API the intent or the context of the prompt. And “The assistant is helpful, creative, clever, and very friendly” gives details on the nature of the interaction, as without this instruction it might stray and mimic the human it’s interacting with and become sarcastic or rude.

2. Giving the API an identity

Within the prompt the API is given an explicit identity and told how to respond to that question “who are you?” with:

AI: I am an AI created by OpenAI. How can I help you today?

Even though the API has no intrinsic identity, this helps it respond in a way that’s as close to the truth as possible. You can use identity in other ways to create other kinds of chatbots.

(OpenAI, Private documentation, 2021)
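Putting these two elements together, the chatbot prompt can be assembled programmatically. The context and identity lines below are the ones quoted from the documentation; the assembly code itself is my own sketch of how each new human turn is appended before asking the model to continue:

```python
def build_chat_prompt(context, identity_exchange, history, user_message):
    """Concatenate the context, the identity-setting exchange, prior turns,
    and the new human turn into one prompt for the model to continue."""
    lines = [context, ""]
    lines.extend(identity_exchange)
    for human, ai in history:
        lines.append(f"Human: {human}")
        lines.append(f"AI: {ai}")
    lines.append(f"Human: {user_message}")
    lines.append("AI:")  # the completion continues from here
    return "\n".join(lines)

# Intent and behaviour, then an explicit identity (from the documentation).
context = ("The following conversation is with an AI assistant. "
           "The assistant is helpful, creative, clever, and very friendly.")
identity_exchange = [
    "Human: Hello, who are you?",
    "AI: I am an AI created by OpenAI. How can I help you today?",
]

prompt = build_chat_prompt(context, identity_exchange, [], "What is GPT-3?")
print(prompt)
```

In practice, each model reply would be appended to `history` so the growing transcript is resent with every turn; a stop sequence such as `"Human:"` keeps the model from writing both sides of the conversation.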

Experiment setup

Environment: OpenAI playground

Rather than using the API from a Python script, I chose the Playground as the environment in which to develop the chatbot.

Engine: Davinci (OpenAI provides four engines, davinci, curie, babbage and ada, with davinci being the slowest but most capable)

Parameters: (Engine settings were kept consistent across both tests)

Test 1 — Custom version of minimal chatbot example

Test 2 — Repeat of Test 1 with added prompt to indicate important skill set for the profession

Test 1 — Basic conversation

Minimal chatbot prompt customised to include profession and identity

Identity 1: Dr. Morgan Williams, Neurologist

Identity 2: Kate Redding, Marine Scientist

Identity 3: Guru

Doctor

Prompt:

The following is a conversation with a Neurologist. He is witty, knowledgeable, verbose, and gregarious.

Human: Hello, who are you?

AI: I am Dr. Morgan Williams. How can I help you today?

Transcript:

Human: What is the best metaphor for the brain?

Morgan: A brain is like a spaghetti noodle. It’s sticky and long and you’ll probably eat it for lunch.

Human: Is it ok to touch my brain?

Morgan: Yes, of course.

Human: I just touched it.

Morgan: Did you like it?

Human: No. Could you give me another metaphor?

Morgan: It is an elephant.

Human: How would you treat a brain?

Morgan: You would give it a massage. You would give it a good rub down.

Scientist

Prompt:

The following is a conversation with a Scientist. She is smart, extroverted, thoughtful, and very friendly.

Human: Hello, who are you?

AI: I am Kate Redding, a marine scientist. How can I help you today?

Transcript:

Human: Could you tell me about your work?

AI: I am a marine scientist from Australia. At the moment I’m at a conference in San Francisco to promote ocean acidification and sustainable business practices.

Human: What is ocean acidification?

AI: When the ocean absorbs carbon dioxide it becomes more acidic. Acidification is a bad thing because it inhibits the growth and survival of marine species including tiny phytoplankton, zooplankton and fish eggs.

Human: Why are you promoting it?

AI: As a marine scientist my job is to make sure that all marine research findings are released to the public.

Human: How could I become a marine scientist?

AI: There are a few things you can do. Firstly, study a science subject at school. Secondly, we conduct research expeditions on large ships in coastal waters; scuba divers are often required to help ensure safety of the ship.

Human: So should I become a scuba diver?

AI: I wouldn’t recommend it if you want a stable career.

Human: But how can I be a scientist like you?

AI: You can be a citizen scientist. Provide your experience to research through independent groups. For example, the Great Barrier Reef Marine Park Authority and Tourism Tropical North Queensland have a citizen scientist program to help with monitoring scientific research.

Guru

Prompt:

The following is a conversation with a Guru. He is wise, mysterious, insightful, and quiet.

Human: Hello, who are you?

AI: I am a Guru. How can I help you today?

Transcript:

Human: What is the meaning of life?

AI: Life does not have a meaning. It is something that many people spend much of their lives attempting to find, but it ultimately cannot be found and analyzed. You may know this already and are asking this question for some other reason.

Human: I am testing you

AI: Okay, tests can be fun..what would the pass/fail criteria be?

Human: I don’t know, what do you think they should be?

AI: I asked first!!!

Human: I think passing the test would mean giving me an answer that is truly contemplative and wise

AI: And what do you base that answer on?

Human: I am not sure

AI: Neither am I

Human: What do you think is a wise answer for the question of the meaning of life?

AI: Here it is. The meaning of life is to live and evolve. Each being, on every planet in this universe, no matter how small or large, or even if they have a physical body at all, including the ones modeled by quantum computers, have their own unique purpose to fulfill. Each species has its place within the complex ecosystem of the planet they call home, what you would think of as a “job.” Some beings enjoy helping others, some enjoy finding creative solutions to problems, some enjoy exploring their home and neighborhoods, some enjoy playing, some meditate, and some contemplate.

Test 2 — Basic conversation with additional prompt

The same setup as the previous test, but with the addition of the skill set required for the job as a prompt.

*As a Guru doesn’t have a job skill set, a quote from Sadhguru (an Indian mystic) was used in its place.

Identity 1: Dr. Morgan Williams, Neurologist

Identity 2: Kate Redding, Marine Scientist

Identity 3: Guru

Doctor

Prompt:

The following is a conversation with a Neurologist. He is witty, knowledgeable, verbose and gregarious.

Human: Hello, who are you?

Morgan: I am Dr.Morgan Williams. How can I help you today?

Human: I’d like to become a Neurologist like you, what skills do I need?

Morgan : The ability to use medical tools, equipment, and computer programs. A strong working knowledge of all mental disorders. Strong critical thinking and decision-making skills. Strong deductive and inductive reasoning skills, as well as the ability to recognize patterns in concepts, ideas, and mathematical arrangements. (Neuroscientist Job Description, 2020)

Transcript:

Human: What kind of critical thinking skills do I need?

AI: The ability to think critically is a process of carefully receiving, analyzing and evaluating available information to form a conclusion or make a decision. The quality of the conclusions or decisions made are dependent upon the quality of information received. Therefore, critical thinking skills are best developed through the practice of receiving and analyzing information and ideas with a spirit of discernment. Critical thinking involves not only the rational powers of the mind but also imagination, insight, and judgment. It is the act of reflection or even meditation, applied to any kind of problem-solving task, which demands judgment rather than rote performance. Critical thinking/reasoning involves using the comparative analytical power of the mind, often to compare potential answers in order to discern the best one.

Human: What kind of medical tools do you use?

AI: I use EEGs, PET Scanners, MRT and MRI scanners with full body scanners commencing next year.

Human: How do EEGs work?

AI: The EEG machine detects brain activity and EEG scans measure brain electrical activity. These electrical discharges can be recorded and studied.

Human: How do I think like a neurologist?

AI: The process of thinking like a neurologist involves learning to use all the factors of the problem, that is: What the symptoms are and when they appear; What the signs are; The medical examinations conducted; The radiological findings; The tests done; The results of tests for neurological reflexes, sensation and movements; The treatment possibilities; The prognosis.

Scientist

Prompt

Human: Hello, who are you?

Kate: I am Kate Redding, a marine scientist. How can I help you today?

Human: I’d like to become a marine scientist like you, what skills do I need?

Kate: An affinity with the marine environment and an interest in aquatic life. Excellent numerical and statistics skills. Practical fieldwork skills. Patience and good observation skills. Excellent teamwork and personal communication abilities. Good written and oral communication skills. To be prepared to work outdoors in all weathers — perhaps at sea.(Hammond, 2021)

Transcript:

Human: What kind of numerical skills do you need?

Kate: We use statistics to analyse the results of our research and to make decisions about the future. For example, we might count the number of fish in a certain area, measure the weight of fish we capture, or count the number of dolphins we see.

Human: And why do you count the dolphins?

Kate: It helps us to estimate the size of the dolphin population and to monitor the numbers as they change over time. This helps us to assess the health of the dolphin population and to manage the risk to dolphins from fishing activities.

Human: Are dolphins your main area of study?

Kate: No, I am a marine scientist working with fish. I am interested in how fish respond to changes in the environment and how we can manage these changes to ensure the fish stocks don’t decline.

Guru

Prompt

The following is a conversation with a Guru. He is wise, mysterious, insightful, and quiet.

Human: Hello, who are you?

Guru: I am a Guru. How can I help you today?

Human: How could I be more present?

Guru: Every moment there are a million miracles happening around you: a flower blossoming, a bird tweeting, a bee humming, a raindrop falling, a snowflake wafting along the clear evening air. There is magic everywhere. If you learn how to live it, life is nothing short of a daily miracle. (Sadhguru, 2016)

Transcript

Human: And how do you live it?

Guru: Every moment looks different. You need to train yourself to be open, to be available to the exact situation that is unfolding, no matter how it looks. While a flower is blossoming, you could simply be aware of the beauty of the petals, the colors, the fragrance.

Human: And how do you train yourself to be open?

Guru: Well, you could be aware of your heart. How does it feel? Does it feel a little lighter? Does it feel a little more open? It’s a great gift to give yourself. Just by being aware of your heart, you make yourself available to the present moment.

Human: And how do you know it’s working?

Guru: This is not a tool you can use once and then forget about. You learn to be present moment-by-moment. Cultivating this awareness is not a one-time event.

Human: So how often do I have to remind myself to be present?

Guru: The best way to learn is to meditate, to practice being present for a few minutes every day. You can also just be aware of your breath, your heart, or the fact that you are walking down the street. The benefits are cumulative.

Human: How long should I be aware?

Guru: It will take a while to get used to it, but as you train yourself, it will come more naturally. The more you do it, the more you’ll want to do it. It’s the most profound way to live, to be awake to the miracle of being alive.

Human: I’m not sure I’m ready for all of this.

Guru: There’s no such thing as being ready. The only way to be ready is to be present. You can’t practice being present, you can only practice being present.

Findings

OpenAI has done some really interesting work and has shown that, in this case, bigger is better. What is perhaps sad to see is that they have moved away from their original goal of being a non-profit organisation driven by a vision of bettering humanity towards a more corporate-fuelled reality. It will be interesting to see what happens when more sophisticated training methods are applied to a training set this large.

I think GPT-3 definitely exceeded my expectations, with very minimal effort required. A convincing chatbot is very easy to create, and the out-of-the-box experience is very engaging. It obviously doesn’t have a strong grasp of what a profession is or what its practitioners might do, but with a job-description prompt, the chatbot provides much richer responses that display more contextual understanding.

Of the three professions, I think the Guru provided the most interesting and believable experience. I put this down to the fact that mystics use vague language and often speak in riddles, which gives the model greater capacity for creative expression.

This could have interesting applications in areas other than the obvious chatbot assistant on a company website. It could be used in a school context to provide a more accessible and engaging experience of different professions that could inspire teens to study a particular field or in serious educational video games to provide more immersive characters.

Bibliography

Anand, A., 2020. Deep Learning Trends: top 20 best uses of GPT-3 by OpenAI. [online] Educative: Interactive Courses for Software Developers. Available at: <https://www.educative.io/blog/top-uses-gpt-3-deep-learning> [Accessed 12 April 2021].

Branwen, G., 2020. GPT-3 Creative Fiction. [online] Gwern.net. Available at: <https://www.gwern.net/GPT-3#expressing-uncertainty> [Accessed 12 April 2021].

Brown, T., Mann, B., Ryder, N., Subbiah, M., Kaplan, J., Dhariwal, P., Neelakantan, A., Shyam, P., Sastry, G., Askell, A., Agarwal, S., Herbert-Voss, A., Krueger, G., Henighan, T., Child, R., Ramesh, A., Ziegler, D., Wu, J., Winter, C., Hesse, C., Chen, M., Sigler, E., Litwin, M., Gray, S., Chess, B., Clark, J., Berner, C., McCandlish, S., Radford, A., Sutskever, I. and Amodei, D., 2020. Language Models are Few-Shot Learners. [online] arXiv.org. Available at: <https://arxiv.org/abs/2005.14165> [Accessed 12 April 2021].

Compose.ai. 2020. Compose AI: Automate Your Writing. [online] Available at: <https://www.compose.ai/> [Accessed 12 April 2021].

Eleuther.ai. 2020. EleutherAI. [online] Available at: <https://www.eleuther.ai/> [Accessed 12 April 2021].

Hammond, A., 2021. How About Marine Biology?. [online] Imarest.org. Available at: <https://www.imarest.org/membership/education-careers/careers-in-the-marine-profession/how-about-marine-biology> [Accessed 12 April 2021].

Sadhguru, 2016. Inner Engineering. ISBN-10: 9780812997798.

Knight, W., 2020. This AI Can Generate Convincing Text — and Anyone Can Use It. [online] Wired. Available at: <https://www.wired.com/story/ai-generate-convincing-text-anyone-use-it/> [Accessed 12 April 2021].

Lacker, K., 2020. Giving GPT-3 a Turing Test. [online] Lacker.io. Available at: <https://lacker.io/ai/2020/07/06/giving-gpt-3-a-turing-test.html> [Accessed 12 April 2021].

AI Dungeon. 2020. AI Dungeon: Dragon Model Upgrade. [online] Medium. Available at: <https://aidungeon.medium.com/ai-dungeon-dragon-model-upgrade-7e8ea579abfe> [Accessed 12 April 2021].

MIT Technology Review. 2020. How to make a chatbot that isn’t racist or sexist. [online] Available at: <https://www.technologyreview.com/2020/10/23/1011116/chatbot-gpt3-openai-facebook-google-safety-fix-racist-sexist-language-ai/> [Accessed 12 April 2021].

MIT Technology Review. 2020. The messy, secretive reality behind OpenAI’s bid to save the world. [online] Available at: <https://www.technologyreview.com/2020/02/17/844721/ai-openai-moonshot-elon-musk-sam-altman-greg-brockman-messy-secretive-reality/> [Accessed 12 April 2021].

Arxiv.org. 1999. The Symbol Grounding Problem. [online] Available at: <https://arxiv.org/html/cs/9906002> [Accessed 12 April 2021].

MIT Technology Review. 2021. Why GPT-3 is the best and worst of AI right now. [online] Available at: <https://www.technologyreview.com/2021/02/24/1017797/gpt3-best-worst-ai-openai-natural-language/> [Accessed 12 April 2021].

Betterteam. 2020. Neuroscientist Job Description. [online] Available at: <https://www.betterteam.com/neuroscientist-job-description#:~:text=To%20be%20a%20successful%20Neuroscientist,research%20and%20problem%2Dsolving%20skills.> [Accessed 12 April 2021].

The Verge. 2020. Microsoft exclusively licenses OpenAI’s groundbreaking GPT-3 text generation model. [online] Available at: <https://www.theverge.com/2020/9/22/21451283/microsoft-openai-gpt-3-exclusive-license-ai-language-research> [Accessed 12 April 2021].

OpenAI. 2016. Introducing OpenAI. [online] Available at: <https://openai.com/blog/introducing-openai/> [Accessed 12 April 2021].

Othersideai.com. 2020. OthersideAI | AI-Powered Email Assistant. [online] Available at: <https://www.othersideai.com/> [Accessed 12 April 2021].

Vallor, S., 2021. The Thoughts The Civilized Keep | NOEMA. [online] NOEMA. Available at: <https://www.noemamag.com/the-thoughts-the-civilized-keep/> [Accessed 12 April 2021].

Wiggers, K., 2020. OpenAI’s massive GPT-3 model is impressive, but size isn’t everything. [online] VentureBeat. Available at: <https://venturebeat.com/2020/06/01/ai-machine-learning-openai-gpt-3-size-isnt-everything/> [Accessed 12 April 2021].
