
Summary

In this post, I present a story that illustrates interaction with large language models (LLMs) through prompts as 'exploring an unmapped virtual country.' The story aims to help people understand how LLMs like GPT-4 generate knowledge in response to user prompts. It imagines the model training process as natural forces that shape the model landscape, user prompts as compasses, and model responses as features of the landscape produced by those forces. My goal is to give people an accessible way to discuss how LLMs represent knowledge and how prompts and model responses are connected, without delving into technical details.

The unmapped virtual country of a large language model

Introduction 

I’ve struggled for a long time to explain the capabilities and limitations of large language models, such as GPT-4, to people unfamiliar with AI and machine learning. These powerful AI tools can generate human-like text and have been used for a wide range of applications, from drafting emails to writing poems, and from giving relationship advice to writing data analysis code. 

However, understanding their inner workings is challenging. Other analogies I’ve used, such as 'asking an expert a question' or 'searching a snapshot of Google,' can mislead people into believing that the model possesses agency, or that it stores all the information it was trained on and can therefore provide precise sources for its claims.

In the following story, I invite people who want to communicate the essence of large language models to others to join me on a journey through a virtual country representing the vast knowledge landscape of a language model. I imagine users as explorers, navigating this landscape with prompts as their compasses. My hope is that this story provides an accessible and engaging way to understand and discuss the capabilities and limitations of these AI tools.

Large Language Models

A large language model, like GPT-4, is an artificial intelligence tool that can generate human-like text by analysing and learning from massive amounts of data. The model learns the structure and patterns within the data and uses what it has learned to generate new text based on a given input, or 'prompt.'

The training data for a large language model consists of vast amounts of text from various sources. During training, the model learns patterns and structures within the data, forming the basis for generating text in response to prompts.
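For readers who want to peek under the hood, the sketch below shows this generative loop in miniature. It is a toy bigram model, not how GPT-4 actually works (real models use neural networks over much longer contexts), but it captures the essential idea: count patterns in training text, then repeatedly sample a plausible next word.

```python
import random
from collections import defaultdict

# Toy "training data": in a real model this would be trillions of words.
corpus = (
    "rest and ice can help a stubbed toe . "
    "ice can reduce pain and swelling . "
    "rest can promote healing ."
).split()

# "Training": count how often each word follows each other word.
follower_counts = defaultdict(lambda: defaultdict(int))
for word, next_word in zip(corpus, corpus[1:]):
    follower_counts[word][next_word] += 1

def generate(prompt_word: str, length: int = 8) -> str:
    """Generate text by repeatedly sampling a likely next word."""
    words = [prompt_word]
    for _ in range(length):
        followers = follower_counts.get(words[-1])
        if not followers:
            break  # nothing in the training data ever followed this word
        choices, weights = zip(*followers.items())
        words.append(random.choices(choices, weights=weights)[0])
    return " ".join(words)

print(generate("ice"))  # e.g. "ice can reduce pain and swelling ."
```

The crucial point, which the story below builds on, is that nothing is looked up: every word is produced fresh from learned statistics.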

Exploring a virtual country

Imagine a huge, unmapped, virtual country. The country isn’t a tangible place; instead, it represents the information and structure of a large language model (like GPT-4). Documents such as web pages, research articles, conspiracy forums, news websites, reports, and emails are the natural forces that shaped the country’s landscape, much like weather and geological events shape our physical world. Mapping this country is impossible: with its countless interconnected pathways and hidden pockets of knowledge, its landscape defies comprehensive documentation through sheer scale and intricacy.

In this country, people using the language model are explorers, seeking to find valuable information located somewhere in the vast landscape. They navigate the terrain by using “prompts”: words that describe the kind of information they want to find. These prompts act as a compass that points towards their destination. Changing any part of the prompt will change where the compass points. 

Imagine now that you are one of these explorers. You have entered this virtual country on a quest to find pain relief after hitting your toe on a table. You write a prompt on a piece of paper: 

Tell me useful remedies for a stubbed toe

The prompt transforms into a compass, pointing you towards a destination somewhere in the country. You follow the compass arrow through swollen rivers, arid deserts, and remote outposts. When you reach your destination, you find information etched into the landscape. It is:

{Tell me useful remedies for a stubbed toe}

Stubbing a toe can be a painful experience, but fortunately, there are several remedies that can help alleviate the pain and promote healing. Here are a few remedies you can try:

  • Rest and ice: One of the best things you can do for a stubbed toe is to rest and ice it. Elevate your foot and apply an ice pack or a bag of frozen vegetables wrapped in a towel to the affected area for 15-20 minutes at a time, several times a day. This can help reduce pain and swelling.
  • Over-the-counter pain relievers: Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen can help reduce pain and inflammation associated with a stubbed toe. Be sure to follow the recommended dosage instructions on the package.

Remember to seek medical attention if the pain is severe or if you suspect a fracture or other serious injury.

Let’s pause for a moment. How did this information come to be written in the landscape? Recall that this information results from the natural forces shaping this virtual country. No one person or designer chose this response. This passage is not copied directly from a document or report that exists somewhere in the real world. But it’s very likely that words and phrases similar to this passage appeared over and over again in many different documents that were used to train the GPT model.

Imagine that in the real world, you get some ice and place it on your stubbed toe. The pain starts to fade. It seems like the knowledge you found in the virtual country helped! You might wonder where that knowledge came from. Why did it recommend rest and ice and pain relievers to help your stubbed toe? You decide to write another prompt:

Provide scientific sources for your remedies including citations and DOIs[1]

Importantly, (1) your original prompt, (2) the response you found in the landscape, and (3) your new prompt combine and transform into a new compass that points you somewhere else in the country. When you reach your new destination, you discover:

{Tell me useful remedies for a stubbed toe}

{Stubbing a toe can be a painful experience, but fortunately, there are several remedies that can help alleviate the pain and promote healing. Here are a few remedies you can try:

  • Rest and ice: One of the best things you can do for a stubbed toe is to rest and ice it. Elevate your foot and apply an ice pack or a bag of frozen vegetables wrapped in a towel to the affected area for 15-20 minutes at a time, several times a day. This can help reduce pain and swelling.
  • Over-the-counter pain relievers: Nonsteroidal anti-inflammatory drugs (NSAIDs) such as ibuprofen can help reduce pain and inflammation associated with a stubbed toe. Be sure to follow the recommended dosage instructions on the package.

Remember to seek medical attention if the pain is severe or if you suspect a fracture or other serious injury.}

{Provide scientific sources for your remedies including citations and DOIs}

Certainly, here are some scientific sources that support the remedies I mentioned for a stubbed toe:

Rest and ice:

Rice, I. B., Périard, J. D., & Blanchard, M. A. (2012). The efficacy of ice massage in the treatment of exercise-induced muscle damage. The Physician and Sportsmedicine, 40(2), 26-30. DOI: 10.3810/psm.2012.05.1963

Over-the-counter pain relievers:

McGettigan, P., & Henry, D. (2013). Cardiovascular risk with non-steroidal anti-inflammatory drugs: systematic review of population-based controlled observational studies. PLoS Medicine, 10(2), e1001388. DOI: 10.1371/journal.pmed.1001388

Let’s pause again. Where did these citations and DOIs come from? Think of all those research papers that shaped this country (trained this model). They had reference lists and DOIs, too. You discovered not a specific report or database record, but rather an outcome derived from millions of repetitions and combinations of words and phrases. There are countless citations and DOIs like this, all across the country. You found these ones because of the prompt you used.
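The 'combining' of prompts and responses described above corresponds to something quite literal: in a chat interface, each new prompt is appended to the previous prompts and responses, and the whole transcript is sent to the model as one input. Here is a minimal sketch of that loop, with model_generate as a hypothetical stand-in for the real model:

```python
def model_generate(full_input: str) -> str:
    """Hypothetical stand-in for a real language model. A real system
    would return a completion conditioned on the entire input text."""
    return f"<completion conditioned on {len(full_input)} characters>"

def next_turn(history: list, new_prompt: str) -> str:
    """Append the new prompt to the transcript and generate a reply."""
    full_input = "\n".join(history + [new_prompt])  # the new, combined compass
    response = model_generate(full_input)
    history += [new_prompt, response]  # both turns shape the next compass
    return response

history = []
next_turn(history, "Tell me useful remedies for a stubbed toe")
next_turn(history, "Provide scientific sources for your remedies "
                   "including citations and DOIs")
```

Change any element of that transcript and the compass points somewhere else, which is why the same question can yield different answers in different conversations.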

Imagine that you leave the virtual country, carrying the information you uncovered with you. Now you’re back in the real world. You enter the DOIs into a search engine. The first DOI (10.3810/psm.2012.05.1963) refers to a completely unrelated paper about tennis elbow[2], not the use of ice massage for muscle damage. The second one (10.1371/journal.pmed.1001388) refers to a paper by the cited authors McGettigan & Henry, and is about non-steroidal anti-inflammatory drugs - but has a different title and year, and focuses on between-country differences[3].

Why did that happen? Can you trust information produced by language models or not? 

Can you trust information generated by large language models?

While useful, language models are not perfect. A model has learned from an enormous amount of data, but it can still generate plausible-sounding yet inaccurate or misleading information, based on the patterns it encountered during training. The model does not check the information it produces against any kind of external authority or physical reality.

It is crucial to verify the information generated by the model. Verification can be done by comparing it to our own experience or common sense (e.g., using ice on a stubbed toe and observing a change in pain, or implicitly judging the plausibility of the answer to see if it ‘makes sense’ or ‘rings true’). Alternatively, we can compare the model-generated information to sources we consider authoritative (e.g., a scientific paper, a human expert), or measure the validity of its claims about physical reality (e.g., by observing or sensing real-world data and comparing it to the model’s output).
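For citations specifically, the 'authoritative source' check can even be automated. The sketch below is one possible approach rather than a prescribed method: it resolves a DOI against the public Crossref registry (api.crossref.org) using the third-party requests library, and assumes the network request succeeds.

```python
import requests

def check_doi(doi: str, claimed_title: str) -> None:
    """Look up a DOI in the Crossref registry and compare the registered
    title against the title the model claimed for it."""
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    if response.status_code != 200:
        print(f"{doi}: not found in the registry")
        return
    actual_title = response.json()["message"]["title"][0]
    verdict = "matches" if claimed_title.lower() == actual_title.lower() else "does NOT match"
    print(f"{doi}: registered title '{actual_title}' {verdict} the model's claim")

# The first citation from the story: this DOI resolves to a different paper.
check_doi("10.3810/psm.2012.05.1963",
          "The efficacy of ice massage in the treatment of "
          "exercise-induced muscle damage")
```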

In defence of language models, humans also hold beliefs and knowledge shaped by extensive experience. We, too, generate plausible-sounding information when prompted with a question, only loosely correlated with our actual expertise on the topic. We also misremember, rearrange, or outright manufacture answers in order to appear sensible and consistent.

Conclusion

The story of exploring a vast, unmapped virtual country can provide a more accessible and engaging way to discuss the capabilities and limitations of large language models like GPT-4. The story helps illustrate the complex processes that occur within these models and how they generate knowledge in response to user prompts. It also helps people understand the limitations of these models and the importance of verifying generated knowledge against our own experiences and expertise, and through empirical observation.

  1. ^

    A DOI is a Digital Object Identifier, a unique code that identifies documents, datasets, and other digital objects. Scientific citations often include DOIs to help readers find the exact paper, figure, or dataset used to evidence claims.

  2. ^

    The citation provided by the LLM was:

    Rice, I. B., Périard, J. D., & Blanchard, M. A. (2012). The efficacy of ice massage in the treatment of exercise-induced muscle damage. The Physician and Sportsmedicine, 40(2), 26-30. DOI: 10.3810/psm.2012.05.1963

I wasn't able to find any paper authored by Rice, Périard, & Blanchard. There is a paper with this title, but it was written in 2005 by Howatson, Gaze, & Van Zomeren (doi: 10.1111/j.1600-0838.2005.00437.x). Following the citation's DOI retrieves a paper about tennis elbow, A Review of Modern Management of Lateral Epicondylitis, by different authors and published in 2015. The citation produced by the LLM is therefore a patchwork of several different elements, including some that appear to be completely made up (e.g., the author list).

  3. ^

    The citation provided by the LLM was:

    McGettigan, P., & Henry, D. (2013). Cardiovascular risk with non-steroidal anti-inflammatory drugs: systematic review of population-based controlled observational studies. PLoS Medicine, 10(2), e1001388. DOI: 10.1371/journal.pmed.1001388

    The authors McGettigan and Henry did write a paper with the title Cardiovascular risk with non-steroidal anti-inflammatory drugs: systematic review of population-based controlled observational studies. However, it was published in 2011, not 2013. It also has a different DOI to the one in the citation (doi: 10.1371/journal.pmed.1001098). Following the citation's DOI retrieves a 2013 paper by the same authors: Use of Non-Steroidal Anti-Inflammatory Drugs That Elevate Cardiovascular Risk: An Examination of Sales and Essential Medicines Lists in Low-, Middle-, and High-Income Countries [10.1371/journal.pmed.1001388]. The citation produced by the LLM is therefore a mix of these two papers.

Comments

I was motivated to write this story for two reasons.

First, I think that there is a lack of clear visual metaphors, stories, or other easily accessible analogies for concepts in AI and its impacts on society. I often speak with intelligent non-technical people - including potential users or "micro-regulators" (e.g., organisational policymakers) of AI tools - who have read about AI in the news but don't have good handles on how to think about these tools and how they interact with existing organisational processes or social understandings.

Second, this specific story was motivated by a discussion with a highly qualified non-technical user of LLMs who expressed skepticism about the capabilities of LLMs (in this case, ChatGPT 3.5) because, when they prompted the LLM to provide citations for a topic area in which they were an expert, the citations in the LLM's response were wrong or misleading hallucinations.

One insight that came from our follow-up conversation was that the user had imagined writing prompts for an LLM to be similar to writing a Google search query. In their understanding, they were requesting a pre-existing record stored in the LLM's database, so an incorrect list of records indicated that the LLM was fundamentally incapable of a 'basic' research task.