Cheatsheet: Scenarios — edsl documentation (2024)

This notebook provides quick examples of methods for using Scenario objects to add data or other content to your EDSL survey questions. Scenarios allow you to efficiently administer multiple versions of questions at once, which can be useful in conducting experiments and labeling/exploration tasks where you want to answer the same questions about many different things, such as every piece of data in a dataset, or a collection of texts or other content.

Below we show how to do each of the following:

  • Inspect an example scenario

  • Use a scenario in a question

  • Create scenarios

  • Combine scenarios

  • Replicate scenarios

  • Rename scenario keys

  • Sample scenarios

  • Select and drop scenarios

  • Slice/chunk text as scenarios

  • Create scenarios from dicts and lists

  • Generate code for recreating scenarios

  • Turn HTML into scenarios

  • Turn PDFs into scenarios

  • Turn images into scenarios

  • Add metadata to scenarios

EDSL is an open-source Python library for simulating surveys, experiments and other research with AI agents and large language models. Please see our documentation page for information and tutorials on getting started, and more details on methods for working with scenarios that are shown here.

Importing the tools

We start by importing the relevant tools (see installation instructions):

[1]:
# ! pip install edsl
[2]:
from edsl import Scenario, ScenarioList

Inspecting an example

A Scenario contains a dictionary of keys and values representing data or content to be added to (inserted in) the question_text field of a Question object (see examples of all question types). We can call the example() method to inspect an example scenario:

[3]:
example_scenario = Scenario.example()
example_scenario
[3]:
{ "persona": "A reseacher studying whether LLMs can be used to generate surveys."}

We can also see an example ScenarioList, which is a dictionary containing a list of scenarios:

[4]:
example_scenariolist = ScenarioList.example()
example_scenariolist
[4]:
{ "scenarios": [ { "persona": "A reseacher studying whether LLMs can be used to generate surveys." }, { "persona": "A reseacher studying whether LLMs can be used to generate surveys." } ]}

Using a Scenario

To use a scenario, we create a Question with a {{ placeholder }} in the question_text matching the scenario key. Then we call the by() method to add the scenario to the individual question or Survey (a collection of questions) when we run it:

[5]:
# Import question types
from edsl.questions import QuestionFreeText, QuestionList
from edsl import Survey

# Create questions in the relevant templates with placeholders
q1 = QuestionFreeText(
    question_name = "background",
    question_text = "Draft a sample bio for this researcher: {{ persona }}"
)
q2 = QuestionList(
    question_name = "interests",
    question_text = "Identify some potential interests of this researcher: {{ persona }}"
)

# Combine questions into a survey to administer them together
survey = Survey(questions = [q1, q2])

# Run the survey with the scenarios to generate a dataset of results
results = survey.by(example_scenario).run()
[6]:
# Print a table of selected components of the results
results.select("persona", "background", "interests").print(format="rich")
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓┃ scenario  answer  answer ┃┃ .persona  .background  .interests ┃┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩│ A reseacher studying whether LLMs  Dr. Alex Rivera is a pioneering  ['natural language processing', ││ can be used to generate surveys.  researcher in the field of  'survey methodology', 'machine ││  computational linguistics,  learning', 'data collection ││  specifically focusing on the  automation', 'human-computer ││  capabilities and applications of  interaction', 'artificial ││  Large Language Models (LLMs). With  intelligence ethics', 'response ││  a Ph.D. in Artificial Intelligence  quality assessment', 'question ││  from MIT, Dr. Rivera has dedicated  generation algorithms', ││  over a decade to exploring the  'computational linguistics', 'user ││  intersection of machine learning  experience design'] ││  and natural language processing.  ││  Currently, their groundbreaking  ││  work investigates the potential of  ││  LLMs to autonomously generate  ││  surveys that can adapt to various  ││  research contexts and yield  ││  high-quality data. Dr. Rivera's  ││  publications have become seminal  ││  readings in advanced AI courses,  ││  and they frequently speak at  ││  international conferences. Their  ││  research aims to revolutionize the  ││  way we collect information, making  ││  it more efficient and accessible  ││  across diverse fields.  │└─────────────────────────────────────┴─────────────────────────────────────┴─────────────────────────────────────┘

Note that the by() method can take an individual Scenario or a list of scenarios (examples below). Learn more about how to construct surveys and analyze results.

Creating a Scenario

We create a scenario by passing a dictionary to a Scenario object:

[7]:
weather_scenario = Scenario({"weather": "sunny"})
weather_scenario
[7]:
{ "weather": "sunny"}

Creating a ScenarioList

It can be useful to create a set of scenarios all at once. This can be done by constructing a list of Scenario objects or a ScenarioList. Compare a list of Scenario objects:

[8]:
weather_scenarios = [Scenario({"weather": w}) for w in ["sunny", "cloudy", "rainy", "snowy"]]
weather_scenarios
[8]:
[Scenario({'weather': 'sunny'}), Scenario({'weather': 'cloudy'}), Scenario({'weather': 'rainy'}), Scenario({'weather': 'snowy'})]

Alternatively, we can create a ScenarioList, which has a single key, scenarios, whose value is the list of scenarios:

[9]:
example_scenariolist = ScenarioList.example()
example_scenariolist
[9]:
{ "scenarios": [ { "persona": "A reseacher studying whether LLMs can be used to generate surveys." }, { "persona": "A reseacher studying whether LLMs can be used to generate surveys." } ]}
[10]:
weather_scenariolist = ScenarioList([Scenario({"weather": w}) for w in ["sunny", "cloudy", "rainy", "snowy"]])
weather_scenariolist
[10]:
{ "scenarios": [ { "weather": "sunny" }, { "weather": "cloudy" }, { "weather": "rainy" }, { "weather": "snowy" } ]}

Combining scenarios

We can add scenarios together to create a single new scenario with an extended dictionary:

[11]:
scenario1 = Scenario({"food": "apple"})
scenario2 = Scenario({"drink": "juice"})
snack_scenario = scenario1 + scenario2
snack_scenario

Replicating scenarios

We can replicate a scenario to create a ScenarioList:

[12]:
personas_scenariolist = Scenario.example().replicate(n=3)
personas_scenariolist
[12]:
{ "scenarios": [ { "persona": "A reseacher studying whether LLMs can be used to generate surveys." }, { "persona": "A reseacher studying whether LLMs can be used to generate surveys." }, { "persona": "A reseacher studying whether LLMs can be used to generate surveys." } ]}

Renaming scenarios

We can call the rename() method to rename the fields (keys) of a Scenario:

[13]:
role_scenario = Scenario.example().rename({"persona": "role"})
role_scenario
[13]:
{ "role": "A reseacher studying whether LLMs can be used to generate surveys."}

The method can also be called on a ScenarioList:

[14]:
scenariolist = ScenarioList([Scenario({"name": "Apostolos"}), Scenario({"name": "John"}), Scenario({"name": "Robin"})])
renamed_scenariolist = scenariolist.rename({"name": "first_name"})
renamed_scenariolist
[14]:
{ "scenarios": [ { "first_name": "Apostolos" }, { "first_name": "John" }, { "first_name": "Robin" } ]}

Sampling

We can call the sample() method to take a sample from a ScenarioList:

[15]:
weather_scenariolist = ScenarioList([Scenario({"weather": w}) for w in ["sunny", "cloudy", "rainy", "snowy"]])
sample = weather_scenariolist.sample(n=2)
sample
[15]:
{ "scenarios": [ { "weather": "rainy" }, { "weather": "cloudy" } ]}

Selecting and dropping scenarios

We can call the select() and drop() methods on a ScenarioList to include and exclude specified fields from the scenarios:

[16]:
snacks_scenariolist = ScenarioList([Scenario({"food": "apple", "drink": "water"}), Scenario({"food": "banana", "drink": "milk"})])
food_scenariolist = snacks_scenariolist.select("food")
food_scenariolist
[16]:
{ "scenarios": [ { "food": "apple" }, { "food": "banana" } ]}
[17]:
drink_scenariolist = snacks_scenariolist.drop("food")
drink_scenariolist
[17]:
{ "scenarios": [ { "drink": "water" }, { "drink": "milk" } ]}

Adding metadata to scenarios

Note that we can create fields in scenarios without including them in the question_text. This will cause the fields to be present in the Results dataset, which can be useful for adding metadata to your questions and results. See more examples here.

Example usage:

[18]:
songs = [
    ["1999", "Prince", "pop"],
    ["1979", "The Smashing Pumpkins", "alt"],
    ["1901", "Phoenix", "indie"]
]

metadata_scenarios = [Scenario({"title": t, "musician": m, "genre": g}) for [t, m, g] in songs]
metadata_scenarios
[18]:
[Scenario({'title': '1999', 'musician': 'Prince', 'genre': 'pop'}), Scenario({'title': '1979', 'musician': 'The Smashing Pumpkins', 'genre': 'alt'}), Scenario({'title': '1901', 'musician': 'Phoenix', 'genre': 'indie'})]
[19]:
q = QuestionFreeText(
    question_name = "song",
    question_text = "What is this song about: {{ title }}" # optionally omitting other fields in the scenarios
)

results = q.by(metadata_scenarios).run()
results.select("scenario.*", "song").print(format="rich") # all scenario fields will be present
┏━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓┃ scenario  scenario  scenario  answer ┃┃ .musician  .title  .genre  .song ┃┡━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩│ Prince  1999  pop  The song '1999' by Prince is about the celebration and enjoyment ││    of life in the face of the looming threat of the year 2000, which ││    at the time was associated with the end of the world or a major ││    catastrophic event (Y2K). Prince encourages listeners to let go ││    of their worries and to party like it's 1999, suggesting that if ││    the world is going to end, they should live life to the fullest ││    and have a good time without fear. │├───────────────────────┼──────────┼──────────┼───────────────────────────────────────────────────────────────────┤│ The Smashing Pumpkins  1979  alt  The song '1979' by The Smashing Pumpkins captures the essence of ││    youth and the transition from adolescence into adulthood. It ││    reflects on the nostalgia of teenage years, the feeling of ││    freedom, and the bittersweet nature of growing up. The lyrics and ││    mood of the song evoke memories of carefree moments, rebellious ││    adventures, and the yearning to hold onto the fleeting innocence ││    of youth amidst the inevitable passage of time. │├───────────────────────┼──────────┼──────────┼───────────────────────────────────────────────────────────────────┤│ Phoenix  1901  indie  The song '1901' by the French indie rock band Phoenix is often ││    interpreted as a nostalgic reflection on the past and the changes ││    that come with time. It's about looking back at the turn of the ││    20th century with a sense of wonder and melancholy, possibly ││    touching on themes of youth, progress, and the fleeting nature of ││    life. 
The lyrics suggest a mix of personal and historical ││    perspectives, evoking images of a bygone era while also relating ││    to the universal human experience of growing older and yearning ││    for the simplicity of earlier times. │└───────────────────────┴──────────┴──────────┴───────────────────────────────────────────────────────────────────┘

Note that it does not matter whether we use a list of Scenario objects or a ScenarioList with the same data; the scenarios are added to the survey in the same way when it is run:

[20]:
songs = [
    ["1999", "Prince", "pop"],
    ["1979", "The Smashing Pumpkins", "alt"],
    ["1901", "Phoenix", "indie"]
]

metadata_scenarios = ScenarioList([Scenario({"title": t, "musician": m, "genre": g}) for [t, m, g] in songs])
metadata_scenarios
[20]:
{ "scenarios": [ { "title": "1999", "musician": "Prince", "genre": "pop" }, { "title": "1979", "musician": "The Smashing Pumpkins", "genre": "alt" }, { "title": "1901", "musician": "Phoenix", "genre": "indie" } ]}
[21]:
q = QuestionFreeText(
    question_name = "song",
    question_text = "What is this song about: {{ title }}" # optionally omitting other fields in the scenarios
)

results = q.by(metadata_scenarios).run()
results.select("scenario.*", "song").print(format="rich") # all scenario fields will be present
┏━━━━━━━━━━━━━━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓┃ scenario  scenario  scenario  answer ┃┃ .musician  .title  .genre  .song ┃┡━━━━━━━━━━━━━━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩│ Prince  1999  pop  The song '1999' by Prince is about the celebration and enjoyment ││    of life in the face of the looming threat of the year 2000, which ││    at the time was associated with the end of the world or a major ││    catastrophic event (Y2K). Prince encourages listeners to let go ││    of their worries and to party like it's 1999, suggesting that if ││    the world is going to end, they should live life to the fullest ││    and have a good time without fear. │├───────────────────────┼──────────┼──────────┼───────────────────────────────────────────────────────────────────┤│ The Smashing Pumpkins  1979  alt  The song '1979' by The Smashing Pumpkins captures the essence of ││    youth and the transition from adolescence into adulthood. It ││    reflects on the nostalgia of teenage years, the feeling of ││    freedom, and the bittersweet nature of growing up. The lyrics and ││    mood of the song evoke memories of carefree moments, rebellious ││    adventures, and the yearning to hold onto the fleeting innocence ││    of youth amidst the inevitable passage of time. │├───────────────────────┼──────────┼──────────┼───────────────────────────────────────────────────────────────────┤│ Phoenix  1901  indie  The song '1901' by the French indie rock band Phoenix is often ││    interpreted as a nostalgic reflection on the past and the changes ││    that come with time. It's about looking back at the turn of the ││    20th century with a sense of wonder and melancholy, possibly ││    touching on themes of youth, progress, and the fleeting nature of ││    life. 
The lyrics suggest a mix of personal and historical ││    perspectives, evoking images of a bygone era while also relating ││    to the universal human experience of growing older and yearning ││    for the simplicity of earlier times. │└───────────────────────┴──────────┴──────────┴───────────────────────────────────────────────────────────────────┘

Chunking text

We can use the chunk() method to turn a Scenario into a ScenarioList of slices/chunks of a specified size, based on either num_words or num_lines. Note that a field <key>_chunk numbering the chunks is created automatically, and a field <key>_original is added if the optional include_original parameter is used (with hash_original storing a hash of the original text instead of the full text):

[22]:
my_haiku = """This is a long text.Pages and pages, oh my!I need to chunk it."""text_scenario = Scenario({"my_text": my_haiku})word_chunks_scenariolist = text_scenario.chunk("my_text", num_words = 5, # use num_words or num_lines but not both include_original = True, # optional hash_original = True # optional)word_chunks_scenariolist
[22]:
{ "scenarios": [ { "my_text": "This is a long text.", "my_text_chunk": 0, "my_text_original": "4aec42eda32b7f32bde8be6a6bc11125" }, { "my_text": "Pages and pages, oh my!", "my_text_chunk": 1, "my_text_original": "4aec42eda32b7f32bde8be6a6bc11125" }, { "my_text": "I need to chunk it.", "my_text_chunk": 2, "my_text_original": "4aec42eda32b7f32bde8be6a6bc11125" } ]}
[23]:
line_chunks_scenariolist = text_scenario.chunk("my_text", num_lines = 1)
line_chunks_scenariolist
[23]:
{ "scenarios": [ { "my_text": "", "my_text_chunk": 0 }, { "my_text": "This is a long text. ", "my_text_chunk": 1 }, { "my_text": "Pages and pages, oh my!", "my_text_chunk": 2 }, { "my_text": "I need to chunk it.", "my_text_chunk": 3 }, { "my_text": "", "my_text_chunk": 4 } ]}

Tallying scenario values

We can call the tally() method on a ScenarioList to count the values for a specified key. It returns a dictionary whose keys are the distinct values of the specified field and whose values are the number of scenarios containing each:

[24]:
numeric_scenariolist = ScenarioList([Scenario({"a": 1, "b": 1}), Scenario({"a": 1, "b": 2})])
tallied_scenariolist = numeric_scenariolist.tally("b")
tallied_scenariolist
[24]:
{1: 1, 2: 1}

Expanding scenarios

We can call the expand() method on a ScenarioList to expand it by a specified field. For example, if the value of a scenario key is a list, we can pass that key to the method to generate a separate Scenario for each item in the list:
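There is no code cell for this section, so here is a rough sketch of the assumed behavior using plain Python dictionaries rather than EDSL objects (the expand helper and the activity/mood fields are hypothetical, for illustration only):

```python
# Plain-Python sketch of the assumed behavior of expand() -- not EDSL itself.
# Each scenario whose value for the given key is a list becomes one scenario
# per item in that list, with the other fields copied unchanged.

def expand(scenarios, key):
    expanded = []
    for scenario in scenarios:
        for item in scenario[key]:
            new_scenario = dict(scenario)  # copy the other fields
            new_scenario[key] = item       # replace the list with one item
            expanded.append(new_scenario)
    return expanded

scenariolist = [{"activity": ["reading", "running"], "mood": "happy"}]
expanded_scenariolist = expand(scenariolist, "activity")
# → [{'activity': 'reading', 'mood': 'happy'}, {'activity': 'running', 'mood': 'happy'}]
```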

Mutating scenarios

We can call the mutate() method on a ScenarioList to add a new key/value to each Scenario, computed from an expression over the existing fields:

[25]:
scenariolist = ScenarioList([Scenario({"a": 1, "b": 1}), Scenario({"a": 1, "b": 2})])
mutated_scenariolist = scenariolist.mutate("c = a + b")
mutated_scenariolist
[25]:
{ "scenarios": [ { "a": 1, "b": 1, "c": 2 }, { "a": 1, "b": 2, "c": 3 } ]}

Ordering scenarios

We can call the order_by() method on a ScenarioList to order the scenarios by a field:

[26]:
unordered_scenariolist = ScenarioList([Scenario({"a": 1, "b": 1}), Scenario({"a": 1, "b": 2})])
ordered_scenariolist = unordered_scenariolist.order_by("b")
ordered_scenariolist
[26]:
{ "scenarios": [ { "a": 1, "b": 1 }, { "a": 1, "b": 2 } ]}

Filtering scenarios

We can call the filter() method on a ScenarioList to filter scenarios based on a conditional expression:

[27]:
unfiltered_scenariolist = ScenarioList([Scenario({"a": 1, "b": 1}), Scenario({"a": 1, "b": 2})])
filtered_scenariolist = unfiltered_scenariolist.filter("b == 2")
filtered_scenariolist
[27]:
{ "scenarios": [ { "a": 1, "b": 2 } ]}

Create scenarios from a list

We can call the from_list() method to create a ScenarioList from a list of values and a specified key:

[28]:
my_list = ["Apostolos", "John", "Robin"]
scenariolist = ScenarioList.from_list("name", my_list)
scenariolist
[28]:
{ "scenarios": [ { "name": "Apostolos" }, { "name": "John" }, { "name": "Robin" } ]}

Adding a list of values to individual scenarios

We can call the add_list() method to add values to individual scenarios in a ScenarioList:

[29]:
scenariolist = ScenarioList([Scenario({"weather": "sunny"}), Scenario({"weather": "rainy"})])
added_scenariolist = scenariolist.add_list("preference", ["high", "low"])
added_scenariolist
[29]:
{ "scenarios": [ { "weather": "sunny", "preference": "high" }, { "weather": "rainy", "preference": "low" } ]}

Adding values to all scenarios

We can call the add_value() method to add a key/value to all scenarios in a ScenarioList:

[30]:
scenariolist = ScenarioList([Scenario({"name": "Apostolos"}), Scenario({"name": "John"}), Scenario({"name": "Robin"})])
added_scenariolist = scenariolist.add_value("company", "Expected Parrot")
added_scenariolist
[30]:
{ "scenarios": [ { "name": "Apostolos", "company": "Expected Parrot" }, { "name": "John", "company": "Expected Parrot" }, { "name": "Robin", "company": "Expected Parrot" } ]}

Creating scenarios from a pandas DataFrame

We can call the from_pandas() method to create a ScenarioList from a pandas DataFrame:

[31]:
import pandas as pd

df = pd.DataFrame({"name": ["Apostolos", "John", "Robin"], "location": ["New York", "Cambridge", "Cambridge"]})
scenariolist = ScenarioList.from_pandas(df)
scenariolist
[31]:
{ "scenarios": [ { "name": "Apostolos", "location": "New York" }, { "name": "John", "location": "Cambridge" }, { "name": "Robin", "location": "Cambridge" } ]}

Creating scenarios from a CSV

We can call the from_csv() method to create a ScenarioList from a CSV:

[32]:
scenariolist = ScenarioList.from_csv("example.csv")
scenariolist
[32]:
{ "scenarios": [ { "name": "Apostolos", "location": "New York" }, { "name": "John", "location": "Cambridge" }, { "name": "Robin", "location": "Cambridge" } ]}

Turn a ScenarioList into a dictionary

We can call the to_dict() method to turn a ScenarioList into a dictionary:

[33]:
scenariolist = ScenarioList([Scenario({"name": "Apostolos"}), Scenario({"name": "John"}), Scenario({"name": "Robin"})])
dict_scenariolist = scenariolist.to_dict()
dict_scenariolist
[33]:
{'scenarios': [{'name': 'Apostolos', 'edsl_version': '0.1.25', 'edsl_class_name': 'Scenario'}, {'name': 'John', 'edsl_version': '0.1.25', 'edsl_class_name': 'Scenario'}, {'name': 'Robin', 'edsl_version': '0.1.25', 'edsl_class_name': 'Scenario'}], 'edsl_version': '0.1.25', 'edsl_class_name': 'ScenarioList'}

Create a ScenarioList from a dictionary

We can call the from_dict() method to create a ScenarioList from a dictionary. Note that the dictionary must contain a key “scenarios”:
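No code cell accompanies this section, so here is a plain-Python sketch of the expected input shape, the same {"scenarios": [...]} structure that to_dict() produces above (the helper function and the name values are hypothetical, for illustration only):

```python
# Plain-Python sketch of the dictionary shape that from_dict() expects --
# the same {"scenarios": [...]} structure that to_dict() produces.
d = {
    "scenarios": [
        {"name": "Apostolos"},
        {"name": "John"},
        {"name": "Robin"},
    ],
}

# A minimal stand-in for the validation from_dict() presumably performs:
def scenarios_from_dict(data):
    if "scenarios" not in data:
        raise KeyError("expected a 'scenarios' key")
    return [dict(s) for s in data["scenarios"]]

scenarios = scenarios_from_dict(d)
# → [{'name': 'Apostolos'}, {'name': 'John'}, {'name': 'Robin'}]
```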

Turning PDF pages into scenarios

We can call the from_pdf() method to turn the pages of a PDF into a ScenarioList. Here we use it for John’s paper “Large Language Models as Simulated Economic Agents: What Can We Learn from Homo Silicus?” (link to paper). Note that the keys filename, page and text are specified automatically, so the question_text placeholder that we use with these scenarios must be {{ text }}:

[35]:
pdf_pages_scenariolist = ScenarioList.from_pdf("homo_silicus.pdf")
pdf_pages_scenariolist[0:2] # inspecting the first couple pages as scenarios
[35]:
{ "scenarios": [ { "filename": "hom*o_silicus.pdf", "page": 1, "text": "Large Language Models as Simulated Economic Agents:\nWhat Can We Learn from hom*o Silicus?\u2217\nJohn J. Horton\nMIT & NBER\nJanuary 19, 2023\nAbstract\nNewly-developed large language models (LLM)\u2014because of how they are trained and\ndesigned\u2014are implicit computational models of humans\u2014a hom*o silicus. LLMs can be\nused like economists use hom*o economicus: they can be given endowments, information,\npreferences, and so on, and then their behavior can be explored in scenarios via simulation.\nExperiments using this approach, derived from Charness and Rabin (2002), Kahneman,\nKnetsch and Thaler (1986), and Samuelson and Zeckhauser (1988) show qualitatively\nsimilar results to the original, but it is also easy to try variations for fresh insights. LLMs\ncould allow researchers to pilot studies via simulation \ufb01rst, searching for novel social sci-\nence insights to test in the real world.\n\u2217Thanks to the MIT Center for Collective Intelligence for generous o\ufb00er of funding, though all the ex-\nperiments here cost only about $50 to run. Thanks to Daniel Rock, Elliot Lipnowski, Hong-Yi TuYe, Daron\nAcemoglu, Shakked Noy, Jimbo Brand, David Autor, and Mohammed Alsobay for their helpful conversations\nand comments. Special thanks to Yo Shavit, who has been extremely generous with his time and thinking.\nThanks to GPT-3 for all this work and helping me describe the technology. 
Author contact information, code,\nand data are currently or will be available at http://www.john-joseph-horton.com/.\n1\narXiv:2301.07543v1 [econ.GN] 18 Jan 2023\n" }, { "filename": "hom*o_silicus.pdf", "page": 2, "text": "1\nIntroduction\nMost economic research takes one of two forms: (a) \u201cWhat would hom*o economicus do?\u201d and\nb) \u201cWhat did hom*o sapiens actually do?\u201d The (a)-type research takes a maintained model\nof humans, hom*o economicus, and subjects it to various economic scenarios, endowed with\ndi\ufb00erent resources, preferences, information, etc., and then deducing behavior; this behavior\ncan then be compared to the behavior of actual humans in (b)-type research.\nIn this paper, I argue that newly developed large language models (LLM)\u2014because of\nhow they are trained and designed\u2014can be thought of as implicit computational models of\nhumans\u2014a hom*o silicus.\nThese models can be used the same way economists use hom*o\neconomicus: they can be given endowments, put in scenarios, and then their behavior can\nbe explored\u2014though in the case of hom*o silicus, through computational simulation, not a\nmathematical deduction.1 This is possible because LLMs can now respond realistically to a\nwide range of textual inputs, giving responses similar to what we might expect from a human.\nIt is essential to note that this is a new possibility\u2014that LLMs of slightly older vintage are\nunsuited for these tasks, as I will show.\nI consider the reasons the reasons why AI experiments might be helpful in understand-\ning actual humans. The core of the argument is that LLMs\u2014by nature of their training\nand design\u2014are (1) computational models of humans and (2) likely possess a great deal of\nlatent social information. For (1), the creators of LLMs have designed them to respond in\nways similar to how a human would react to prompts\u2014including prompts that are economic\nscenarios. 
The design imperative to be \u201crealistic\u201d\u2019 is why they can be thought of as com-\nputational models of humans. For (2), these models likely capture latent social information\nsuch as economic laws, decision-making heuristics, and common social preferences because\nthe LLMs are trained on a corpus that contains a great deal of written text where people\nreason about and discuss economic matters: What to buy, how to bargain, how to shop, how\nto negotiate a job o\ufb00er, how to make a job o\ufb00er, how many hours to work, what to do when\nprices increase, and so on.\nLike all models, any particular hom*o silicus is wrong, but that judgment is separate from\na decision about usefulness. To be clear, each hom*o silicus is a \ufb02awed model and can often\ngive responses far away from what is rational or even sensical. But ultimately, what will\nmatter in practice is whether these AI experiments are practically valuable for generating\ninsights. As such, the majority of the paper focuses on GPT-3 experiments.\nEach experiment is motivated by a classic experiment in the behavioral economics lit-\nerature.\nI use Charness and Rabin (2002), Kahneman et al. (1986), and Samuelson and\n1Lucas (1980) writes, \u201cOne of the functions of theoretical economics is to provide fully articulated, arti\ufb01cial\neconomic systems that can serve as laboratories in which policies that would be prohibitively expensive to\nexperiment with in actual economies can be tested out at much lower cost.\u201d\n2\n" } ]}

Example usage:


[36]:
homo_silicus_scenariolist = ScenarioList.from_pdf("homo_silicus.pdf")


Example usage. Note that we can sort results by any component, filter them using conditional expressions, and limit how many rows to display:

[38]:
from edsl import QuestionFreeText

q = QuestionFreeText(
    question_name = "summarize",
    question_text = "Summarize this page: {{ text }}"
)
results = q.by(homo_silicus_scenariolist).run()
[39]:
(
    results
    .sort_by("page")
    .filter("page > 1")
    .select("page", "summarize")
    .print(format="rich", max_rows = 3)
)
┏━━━━━━━━━━┳━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓┃ scenario  answer ┃┃ .page  .summarize ┃┡━━━━━━━━━━╇━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩│ 2  The page introduces the concept of 'homo silicus,' a term for large language models (LLMs) that can ││  be used as computational models of human behavior in economic research. The author suggests that ││  these models can simulate human responses to various scenarios, much like the theoretical 'homo ││  economicus,' but with the advantage of computational simulation. The paper argues that LLMs are ││  designed to mimic human reactions and contain latent social information from the vast corpus they ││  are trained on, which includes economic reasoning and decision-making. Despite their imperfections, ││  the paper posits that LLMs can be useful for generating insights, especially in the context of ││  behavioral economics experiments, and focuses on experiments using GPT-3 as examples. │├──────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────┤│ 3  The page summarizes experiments that explore how AI agents, specifically different models of GPT-3, ││  respond to various economic and social scenarios. It discusses how AI behavior changes when endowed ││  with different social preferences such as equity, efficiency, and self-interest in dictator games. ││  It also examines AI responses to price gouging scenarios, showing that political views and the ││  extent of price increases affect their judgments. The paper replicates a study on status quo bias in ││  budget allocation for car and highway safety, finding that even advanced AI models like ││  text-davinci-003 exhibit this bias. 
Lastly, it discusses a hiring scenario influenced by a minimum ││  wage field experiment, demonstrating that imposing a minimum wage can lead AI to prefer more ││  experienced applicants. These experiments aim to compare AI behavior to human responses in similar ││  situations. │├──────────┼──────────────────────────────────────────────────────────────────────────────────────────────────────┤│ 4  The page discusses the use of large language models (LLMs) like GPT-3 for conducting economic ││  experiments in silico, which can be run quickly and inexpensively to explore parameters, test ││  sensitivities to question wording, and predict behaviors, thereby guiding empirical research. The ││  paper compares this approach to building 'toy models' in economics as a way to think through ││  problems. It also notes a related paper by Aher, Arriaga, and Kalai (2022) on GPT-3's ability to ││  replicate experimental results in psychology and linguistics. The unique contribution of this paper ││  is its focus on the connection between LLM experiments and the research paradigms in economics, ││  particularly the role of foundational assumptions like rationality. The paper suggests that LLMs can ││  be an indirect method to study human behavior, similar to how 'sciences of the artificial' abstract ││  from the complexities of the real world to focus on systems optimization. │└──────────┴──────────────────────────────────────────────────────────────────────────────────────────────────────┘

Using images as scenarios

We can call the from_image() method to create a scenario for an image. Here we use it for Figure 1 in the Homo Silicus paper.

Note that this method must be used with a vision model (e.g., GPT-4o) and does not require the use of a {{ placeholder }} in the question text. The scenario keys file_path and encoded_image are generated automatically:

[40]:
from edsl import Model

model = Model("gpt-4o")
[41]:
image_scenario = Scenario.from_image("homo_silicus_figure1.png")
[42]:
image_scenario.keys()
[42]:
['file_path', 'encoded_image']
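The encoded_image key suggests that from_image() stores a base64-encoded copy of the file for the vision model. As a rough illustration of what such an encoding step looks like (the helper function below is a hypothetical sketch, not part of the edsl API):

```python
import base64

def encode_image(file_path: str) -> dict:
    # Hypothetical sketch: read the image bytes and base64-encode them,
    # mirroring the 'file_path' / 'encoded_image' keys that from_image() returns.
    with open(file_path, "rb") as f:
        encoded = base64.b64encode(f.read()).decode("utf-8")
    return {"file_path": file_path, "encoded_image": encoded}
```

Base64 keeps the binary image data transport-safe inside the otherwise text-based scenario dictionary.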

Example usage:

[43]:
q = QuestionFreeText(
    question_name = "figure",
    question_text = "Explain the graphic on this page."  # no scenario placeholder
)
results = q.by(image_scenario).by(model).run()
results.select("figure").print(format="rich")
┏━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┓┃ answer ┃┃ .figure ┃┡━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━┩│ The graphic shows the choices made by different models and human subjects in simple tests, categorized by model ││ type and endowed 'personality.' The models compared are Advanced GPT-3 (davinci-003), Human Brain, and Prior ││ GPT-3 (ada, babbage, curie-001). The choices are displayed for different scenarios, with the fraction of AI ││ subjects choosing each option (left or right) shown. The scenarios are based on the Charness and Rabin (2002) ││ study, with different endowments for GPT-3 models, such as 'You only care about fairness between players,' 'You ││ only care about total pay-off of both players,' and 'You only care about your own pay-off.' The results ││ indicate the proportion of times each choice was made under different conditions. │└─────────────────────────────────────────────────────────────────────────────────────────────────────────────────┘
Expanding scenarios

We can call the expand() method to expand a scenario on a field containing a list, creating a separate scenario for each item in the list:

[44]:
scenariolist = ScenarioList([Scenario({"a": 1, "b": [1, 2, 3]})])
expanded_scenarios = scenariolist.expand("b")
expanded_scenarios
[44]:
{ "scenarios": [ { "a": 1, "b": 1 }, { "a": 1, "b": 2 }, { "a": 1, "b": 3 } ]}
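The transformation that expand() performs can be mimicked in plain Python, which may help clarify what happens to the other keys: each item in the expanded field gets its own dictionary, with the remaining keys copied unchanged. This is an illustrative sketch, not edsl code:

```python
def expand_field(scenarios: list[dict], field: str) -> list[dict]:
    # Plain-Python illustration of the expand() behavior shown above:
    # replace each dict whose `field` holds a list with one dict per item.
    expanded = []
    for s in scenarios:
        for value in s[field]:
            expanded.append({**s, field: value})
    return expanded
```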

Generating code for scenarios

We can call the code() method to generate the code for producing scenarios:

[45]:
scenariolist = ScenarioList.example()
scenariolist_code = scenariolist.code()
scenariolist_code
[45]:
['from edsl.scenarios.Scenario import Scenario\nfrom edsl.scenarios.ScenarioList import ScenarioList', "scenario_0 = Scenario({'persona': 'A reseacher studying whether LLMs can be used to generate surveys.'})", "scenario_1 = Scenario({'persona': 'A reseacher studying whether LLMs can be used to generate surveys.'})", 'scenarios = ScenarioList([scenario_0, scenario_1])']
[46]:
from edsl.scenarios.Scenario import Scenario
from edsl.scenarios.ScenarioList import ScenarioList

scenario_0 = Scenario({'persona': 'A reseacher studying whether LLMs can be used to generate surveys.'})
scenario_1 = Scenario({'persona': 'A reseacher studying whether LLMs can be used to generate surveys.'})
scenarios = ScenarioList([scenario_0, scenario_1])

Converting a ScenarioList into an AgentList

We can call the to_agent_list() method to convert a ScenarioList into an AgentList. Note that agent traits cannot include a “name” key, as agent_name is a separate optional field of Agent objects:

[47]:
from edsl import AgentList

scenariolist = ScenarioList([
    Scenario({"first_name": "Apostolos", "location": "New York"}),
    Scenario({"first_name": "John", "location": "Cambridge"}),
    Scenario({"first_name": "Robin", "location": "Cambridge"})
])
agentlist = scenariolist.to_agent_list()
agentlist
[47]:
[ { "traits": { "first_name": "Apostolos", "location": "New York" }, "edsl_version": "0.1.25", "edsl_class_name": "Agent" }, { "traits": { "first_name": "John", "location": "Cambridge" }, "edsl_version": "0.1.25", "edsl_class_name": "Agent" }, { "traits": { "first_name": "Robin", "location": "Cambridge" }, "edsl_version": "0.1.25", "edsl_class_name": "Agent" }]

(Note that scenarios function similarly to traits dictionaries that we pass to AI Agents that we can use to answer survey questions. Learn more about designing AI agents for simulating surveys and experiments.)
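Because a “name” trait would collide with the separate agent_name field, one way to prepare raw scenario data is to rename any “name” key before converting. A plain-Python sketch (the helper function and the replacement key are illustrative, not part of the edsl API):

```python
def prepare_traits(scenario_dicts: list[dict], rename_to: str = "first_name") -> list[dict]:
    # Hypothetical sketch: rename any "name" key so the resulting
    # traits dictionaries are safe to convert into Agent traits.
    prepared = []
    for d in scenario_dicts:
        d = dict(d)  # copy so the input is left unmodified
        if "name" in d:
            d[rename_to] = d.pop("name")
        prepared.append(d)
    return prepared
```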
