Thanks for the kind words, @RachelM. Re: the ontology section, I don't know if I can get it down to a few sentences, but here's a conceptual outline:

  • We ran seven experiments in which we had GPT-4 simulate an agent that had to solve a problem.
  • In four of the experiments, GPT-4 was guided through the exercise in a conversation, like when you're talking to ChatGPT. The other participant in the conversation was a piece of software we wrote, which described the environment, told GPT-4 what actions were available to it, interpreted its responses, and described the consequences of any actions. (A minimal sketch of this loop appears after this list.)
    • In three of those experiments, GPT-4 was asked to write down any knowledge it gained at each step of the process. Our software would read those observations back to GPT-4 in future messages.
      • We call those observations an "external knowledge representation" because they are "written down" and exist outside of GPT-4 itself.
  • We hypothesized that asking for these observations and reading them back later would help GPT-4 solve the problem more effectively, which was indeed the case.
  • We also hypothesized that when the observations were written down in a more "structured" format, they would be even more helpful. This was also the case.
    • Re: what I mean by "structure": "unstructured" observations would be something like a few paragraphs of text describing GPT-4's observations and thoughts. Structured observations, on the other hand, might include things like a table (spreadsheet) of information observed about each object, or about each demonstration. (The second sketch below contrasts the two formats.)
      • This is where we begin to touch on the concept of "ontologies", which are formal, structured descriptions of knowledge. Ontologies are usually much more complex than a basic table, but this post only covered our initial experiments on the topic. (The third sketch below gives a toy example.)
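
To make the conversation loop concrete, here is a minimal sketch of what such a harness could look like, using the OpenAI Python client. The environment interface (`describe`, `available_actions`, `apply`), the `NOTE:` convention, and the toy parser are illustrative assumptions, not our actual code or prompts:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def parse_reply(text: str) -> tuple[str, list[str]]:
    """Toy parser: first non-NOTE line is the action; NOTE: lines are new knowledge."""
    lines = [l.strip() for l in text.strip().splitlines() if l.strip()]
    notes = [l[len("NOTE:"):].strip() for l in lines if l.startswith("NOTE:")]
    action = next((l for l in lines if not l.startswith("NOTE:")), "")
    return action, notes

def run_episode(env, max_steps: int = 20) -> None:
    """Drive GPT-4 through one simulated-agent episode.

    `env` is a hypothetical environment object exposing describe(),
    available_actions(), and apply(action) -> done.
    """
    notes: list[str] = []  # the "external knowledge representation"
    messages = [{"role": "system",
                 "content": "You are an agent trying to solve a problem."}]
    for _ in range(max_steps):
        # Describe the environment, list the legal actions, and read the
        # previously written-down observations back to the model.
        messages.append({"role": "user", "content": (
            f"Environment: {env.describe()}\n"
            f"Available actions: {env.available_actions()}\n"
            "Your notes so far:\n" + "\n".join(f"- {n}" for n in notes) + "\n"
            "Reply with one action, plus any new observations on lines "
            "starting with NOTE:."
        )})
        reply = client.chat.completions.create(model="gpt-4", messages=messages)
        text = reply.choices[0].message.content
        messages.append({"role": "assistant", "content": text})

        # Interpret the response; new notes persist outside the model itself.
        action, new_notes = parse_reply(text)
        notes.extend(new_notes)
        if env.apply(action):  # environment reports whether the task is done
            break
```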
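And here's an illustration of the same knowledge in the two formats we compared; the objects and column names are invented for the example, not taken from our experiments:

```python
# "Unstructured": free-form paragraphs that GPT-4 writes and later rereads.
unstructured = (
    "The red block is heavier than the blue one. Pressing the lever while "
    "holding the red block opened the door, so weight seems to matter."
)

# "Structured": the same observations as a table, one row per object.
structured = [
    {"object": "red block",  "weight": "heavy", "effect": "opens door via lever"},
    {"object": "blue block", "weight": "light", "effect": "none observed"},
]
```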
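Finally, a toy sketch (all names invented) of how an ontology goes beyond a flat table: it declares classes, typed properties, and relations between entities, so knowledge about new instances can be checked against a schema rather than just accumulated as rows:

```python
ontology = {
    "classes": ["Block", "Lever", "Door"],
    "properties": {
        # Each property has a domain (what it describes) and a range (its values).
        "weight": {"domain": "Block", "range": "Literal"},
        "opens":  {"domain": "Lever", "range": "Door"},
    },
    "instances": {
        "red_block": {"type": "Block", "weight": "heavy"},
        "lever_1":   {"type": "Lever", "opens": "door_1"},
        "door_1":    {"type": "Door"},
    },
}
```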