
Misha and I recently recorded a short discussion about large language models and their uses for effective altruists.

This was mostly a regular Zoom meeting, but we added some editing and text transcription. After we wrote up the transcript, both Misha and I edited our respective sections.

I think the final transcript is clearer and contains more information than the original discussion. I might even suggest using text-to-speech on the transcript rather than listening to the original audio. This back-and-forth might seem to defeat the point of presenting the video and audio, but I think it is straightforwardly more pragmatic.



  • Opening
  • Introduction
  • How do we use LLMs already?
  • Could EAs contributing to applied LLMs be harmful?
  • Application: Management and Emotional Assistance
  • Application: Communication, Broadly
  • Human-AI-Human Communication
  • Application: Decision Automation
  • Application: EA Forum Improvements
  • Application: Evaluations
  • LLM user interfaces
  • What should EAs do with LLMs?


Opening

Ozzie: Hello. I just did a recording with my friend Misha, an EA researcher at Arb Research. This was a pretty short meeting about large language models and their use by effective altruists. The two of us are pretty excited about the potential for large language models to provide different kinds of infrastructure for effective altruists.

This is an experiment in presenting our videos publicly. Normally, our videos are just unedited Zoom recordings, which I found to be quite a pain: they typically don't look that great on their own, and they don't sound terrific either. So we've been experimenting with some methods to make that a little better.

I am really curious about what people are going to think about this and am looking forward to what you say. Let's get right into it.


Introduction

Ozzie: For those watching, this is Misha and me having a meeting about large language models and their use for effective altruism.

Obviously, large language models have been a very big deal very recently, and now there's a big question about how we could best apply them to EA purposes and what EAs could do best about it. So this is going to be a very quick meeting. We only have about half an hour. 

Right now, we have about seven topics. The main topic, though, is just the LLM applications. 

How do we use LLMs already?

Ozzie: So, how do we use LLMs already? 

Misha: I think I use them for roughly 10 minutes on average per day.

Sometimes I just ask questions, like, "Hey, I have these ingredients. What cocktails can I make?" Sometimes I try to converse with them about stuff. Sometimes I just use a model (e.g., text-davinci-003) as a source of knowledge. I think it's more suitable for areas where verifiable expertise is rare.

Take non-critical medicine, like skincare. I had a chat with it and got some recommendations in this domain, and I think it turned out really well. I had previously tried searching for recommendations and asking a few people, but that didn't work.

I also use it as an amplification for journaling whenever I'm doing any emotional or self-coaching work. Writing is great. I personally find it much easier to write “as if” I'm writing a message to someone—having ChatGPT obviously helps with that.

Having a conversation partner activates some sort of social infrastructure in my brain. Humans solve math problems better when they are framed socially. And yeah, doing it with language models is straightforward and really good. Further, sometimes models give you hints or insights that you forgot to think about.

You can ask tons of questions and be really annoying, which you might not be comfortable doing with your friend or even with a professional. One trick I used previously to prompt a human medical professional to give me some sort of cost-benefit analysis was to ask them: "Well, if instead of me, you were giving advice to your son or daughter, what would you do?" This makes them actually think. With LLMs, you can probe straightforwardly and see where their limits are.

Ozzie, what about you? 

Ozzie: Yeah, I have used it in a few cases, particularly to rewrite Facebook posts, both to clean them up and to try to turn them into Tweets. This had some success. I think rewriting in different styles is quite nice. I've been experimenting with a lot of semi-EA purposes.

So some of it is like taking very aggressive writing and rewriting it for the EA Forum. And then, ChatGPT knows that writing for the EA Forum means making it super polite and long.

I'm having trouble getting it to come up with cool metaphors and witty historical examples. There are a few cases where I tried to get a list of 10 examples of some intellectual idea, like negative externalities, from a given historical era.

Like, from 680 BC to 400 BC or something; in some cases it works, and in some cases it doesn't. It feels like, hypothetically, you should be able to come up with Scott Alexander-style content: just come up with 10 good historical examples of an idea and write cool anecdotes.

Also, of course, trying to come up with whole Seinfeld episodes about any topic of your choice or something like that is starting to be possible.

I tried using it a bit for evaluating information. I'd ask it to come up with, say, 10 attributes for rating an intellectual, and then to judge the intellectual on each one. And it did.

It definitely is going to require some work, but it's able to make a start: it could at least differentiate that Donald Trump gets very low marks as an intellectual, while most of the intellectuals I could come up with got very high marks.
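To make the idea concrete, here is a minimal, hypothetical sketch of the attribute-rating workflow described above. The attribute list, prompt wording, and expected reply format are all illustrative assumptions, not what was actually used in the conversation; the model call itself is omitted, so this only shows the prompt construction and reply parsing.

```python
# Illustrative attributes; the conversation's actual list is not known.
ATTRIBUTES = [
    "originality", "rigor", "clarity", "intellectual honesty",
    "breadth of knowledge",
]

def build_rating_prompt(name: str, attributes: list[str]) -> str:
    """Assemble a prompt asking an LLM to rate a thinker on each attribute."""
    lines = [
        f"Rate {name} from 1 to 10 on each attribute below.",
        "Answer with one 'attribute: score' pair per line.",
        "",
    ]
    lines += [f"- {a}" for a in attributes]
    return "\n".join(lines)

def parse_ratings(reply: str) -> dict[str, int]:
    """Parse 'attribute: score' lines out of a model reply."""
    ratings = {}
    for line in reply.splitlines():
        if ":" in line:
            attr, _, score = line.partition(":")
            score = score.strip()
            if score.isdigit():
                ratings[attr.strip().lstrip("- ")] = int(score)
    return ratings
```

In use, you would send `build_rating_prompt(...)` to whatever chat model you prefer and feed the reply text to `parse_ratings`; the parser deliberately ignores any lines that don't match the requested format, since models don't always comply exactly.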

Could EAs contributing to applied LLMs be harmful?

Ozzie: Next topic. Can this do more harm than good? 

So, there's one camp, maybe an extreme one, that would say any use of AI advances AI capabilities in general, and that would be net bad.

But that said: A, this is a long argument and discussion, so we want to get to the applications and don't want to spend much time on it. And B, my guess is that a lot of people would agree that there are probably some pretty cheap wins we could take without extending the theory of language models or their total capabilities, while still getting some valuable things from them. Some decent risk-reward trade-offs.

Misha: I basically agree.

I think interfaces are one area that currently sucks. Making better interfaces can help everyone think a bit better, especially with later models. I'm not sure if this would substantially accelerate capabilities.

Ozzie: In terms of groups that we don't want to help develop better epistemics, there are definitely some authoritarian regimes; helping them would be bad. The next group that could be problematic is AI-development organizations.

If EA helps develop tools that help people do better epistemic or rational reasoning, maybe they'll be used to make AIs faster. Again, I don't know how big a deal this is, but I think it may be one of the main things to be thinking about.

Misha: I basically expect AI labs to start squeezing these models for productivity improvements. Though I think most of it, for now, will come via Codex-like tech, e.g., doing sysadmin work better. People have made ChatGPT simulate a terminal, which is suggestive.

Application: Management and Emotional Assistance

Ozzie: So how about we get into applications? That's the main topic. You have a few applications to discuss, and then I have a few applications. What are your favorite applications? 

Misha: Just to continue the discussion about differential progress: people in EA think that independent researchers and other loners struggle without proper management.

I think a bunch of helpful things in this direction can be achieved via GPT. So I wouldn't be surprised if someone figured out how to turn models into research managers to help with performance coaching and other things. This seems positive.

Another one: there is a lot of untapped knowledge on Reddit and in other amazing communities. They don't do proper science, but just by trying to figure out what actually works, they collect useful insight. And you can pull it out of LLMs by asking the model to act as a knowledgeable member of such a community.

Likewise, you can get perspective through the eyes of others. GPT is sort of a phenomenological museum—you know, a window into other worlds.

I really like Clearer Thinking. They have programs to improve people's mental tools. Taking one of their quizzes is very nice in isolation, but it'd be cooler if you could do it alongside an LLM, so you could chat about the ideas and analyze the results. A conversation is better than a linear progression. "Oh, I am inconsistent in how I value my time. Why don't I feel comfortable spending money to buy time?" is an excellent moment to reflect more, and GPT might be good at holding space for that.

Lastly, I think Scott Alexander wrote about a hypothetical client who was upset with his partner, claiming the partner didn't really love him. The problem was that he wanted a hot cooked dinner every night he came home from work. But his partner works too, so that wasn't possible every time. Scott just suggested calling the partner in advance and, if they wouldn't be able to come home early enough to cook dinner, just... ordering Uber Eats. It worked! I think a bunch of human problems are debuggable and often simple.

Ozzie: How would that work? For GPT to know what to recommend, it would need to understand a person's life well enough. Would it need some information about what you're doing?

Misha: Right.  

And in this case, an emotionally responsive model can simply ask, "What's up?" "Why are you thinking that?" "How would that make you feel?" Something that creates a safe space for people to think. Because unless I'm writing, I just can't really pull it all together and make connections. Google Docs is amazing for that. Making it more conversational, I believe, would make it easier for more people.

Ozzie: Hypothetically, it seems like people are going to be trying to import people's emails and social media posts and stuff like that to just begin with a lot of information about the person, and then from that they could hypothetically make a lot of recommendations. 

In the interest of time, let's go to the next topic. 

Application: Communication, Broadly

Ozzie: The first application I have to discuss is communication in general. I wrote one post about this. Arguably, translating English to Russian is similar to translating English spoken by a ninth grader in Detroit into a language spoken by someone very different, using different terminology and cultural markers.

People want to be communicated with very personally, so having something that understands them very well, knows what they know, and could explain things in terms that they understand just gives you a massive benefit, hypothetically.

I think in EA there's definitely a whole lot in the philosophical literature that we just don't know and that no one so far has figured out how to extract. Hypothetically, GPT could understand all of that and say, "oh, these 10 insights are the most valuable." Right now, instead, these fields are coded in continental philosophical language or other terminology that we're really not used to, or rest on foundational assumptions that we don't understand or agree with.

What a lot of people want is a personal tutor that understands the material very well and understands how they learn. So we kind of want chatbots to become personalized tutors with an individualized style of communication. Hypothetically, that's a whole lot of value on the table.

Misha: An example would be asking a model to summarize the key insights of postmodernism for someone who is libertarian-ish or reads LessWrong. This will probably work quite well.

Likewise, you can communicate in your own language. A doctor doesn't get my math metaphors or someone else's cooking metaphors, unlike GPT, which speaks both math and cooking fluently and can understand them.

Say you want to use "simulated annealing" as a metaphor for how you approach one area of your life. An LLM would understand you, while friends might not. Shorter inferential distances mean less need to explain yourself or build toward a thesis; you can just shoot.

Ozzie: I guess one analogy is TV Tropes. It has great lists of many super-specific things: for many very specific tropes, here's every single example of it in every type of media.

That's very different from Wikipedia. There's a lot of intellectual terminology that's not connected at all. So it's pretty common for me to find that some interesting term was actually used in 10 different manuscripts or so, in different ways.

But you should know that each way it's used is slightly different. So it's just a huge pain that, hypothetically, could be solved. Going down a level, finding mistakes and miscommunications would be great. I think people very often misunderstand stuff online. At least when I'm writing comments, I'm constantly misunderstanding people when I try to respond.

Hypothetically, there could be a browser extension that flags bad text in red and says "Oh, these few words you're probably going to misunderstand." And then if you hover over it, you can see what it actually means. Of course, that's not as intense as something that rewrites all the content for you in ways that it's pretty sure you're going to understand, which is probably more what you'd like. 
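As a toy illustration of the flagging idea just described, here is a hypothetical sketch of the highlighting step only. It assumes a model has already been asked to return likely-misunderstood phrases with plain-language glosses; the function names, marker syntax, and example flags are all invented for illustration.

```python
def annotate(text: str, flags: dict[str, str]) -> str:
    """Wrap each flagged phrase in [[phrase | gloss]] markers.

    `flags` maps a confusing phrase to the plainer gloss shown on hover;
    in a real extension these pairs would come from a model, not be
    hand-written.
    """
    for phrase, gloss in flags.items():
        text = text.replace(phrase, f"[[{phrase} | {gloss}]]")
    return text
```

A real browser extension would render the markers as red underlines with hover tooltips; the sketch just shows that, once the model has identified the risky spans, the rest is ordinary string manipulation.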

Misha: Right.

I think you've probably thought about having this conversation not between me and you, but between you, your AI, my AI, and me. It might smooth things a lot when interacting across different cultures.

Ozzie: Yeah.

Human-AI-Human Communication

Ozzie: I think that human-to-human conversation is just really mediocre.

Humans are simply not very good at communicating with one another. It's very complicated. You have to understand where the other person is coming from, and we have many different types of humans with different backgrounds. And then you also have to make yourself understood by them. Both of those challenges are dramatically more complicated than people give them credit for.

So I think we want to move to this human-AI-human model, where AI is figuring out how to extract valuable information from humans, which probably looks a lot different than them writing essays or anything like that. I have no idea what the ideal is going to be. Perhaps the AI just asks people very targeted questions.

And then, very separately, the AI develops an embedding of the important information that comes from humans and communicates information that could partially come from that to other humans as it's most needed.

The way to write an essay in the future, for a human, would be not to write an essay but instead to get interrogated by a language model.

In another world, we would actually just stop with the idea of essays. Instead, you get like four intellectual credits, because that's about how much interesting information you've contributed to the AI system. And then, when people want information, they just get whatever ideas they want. Content comes from a mess of different people and stuff, and that's totally fine.

Miscommunication is huge. If you could completely eliminate miscommunication online, I think that would be worth a lot of money. I think therapists, like marriage therapists, do a lot of trying to get communication to be decent. So if an AI could do that for you, that would be huge.

Misha: Right. 

Application: Decision Automation

Misha: Also, there are a lot of small businesses, and some are dysfunctional. The Profit, a TV show, shows how many of them are disasters. Sure, there are some selection effects, but a lot of it is due to poor emotional and conversational skills. Well, also poor basic business skills.

Ozzie: Now, there's a super interesting question, which is, "How much decision-making can we automate pretty easily with a combination of language models and a few other tools?" Hypothetically, a lot of business decisions are just not being made that well. A lot of bureaucratic decisions are probably worse than a decent guess by a properly trained agent or something.

It's not super clear exactly when we'll reach what thresholds. But yeah, there are a lot of bad decisions being made.

Misha: Right. I think it would be really hard to secure agreement from the involved actors. People don't like it when others override their decisions, and they wouldn't like LLMs doing it either. I think this is one of the problems with the adoption of prediction markets in corporations.

Ozzie: I think one good model is that of autonomous vehicles.

You go through level one before you get to level five. Levels one and two are just driver assist. And then, as the systems get better, people rely on them more.

Misha: Slower adoption is good. Initially, the system just asks good probing questions in a non-threatening manner, and you're like, "oh yeah, that's a good point. I will do that." And because no person told you to do that, just some AI from OpenAI or another company, you don't even lose social standing. You don't look weak or anything. So even a very bossy person can agree.

As a side note, I think the same dynamic might enable men to do more therapy-shaped things via LLMs.

Ozzie: I imagine, too, that these AI models would be very useful in the sense that they'd be deemed much less biased than a lot of the bosses and other people making the decisions.

If someone's in charge of a big government contract and wants to make a big decision about it, that might be suspect. There are a lot of opportunities for bias.

If they had an AI kind of say, "oh yeah, that's what I would do, too", that would give them a fair bit more credibility. So just having some assistance could be useful. 

There are also definitely some situations where I expect people to only trust AI because, like, they just can't trust any responsible person to make a decent decision.

Application: EA Forum Improvements

Ozzie: There are a bunch of potential uses of GPT for the EA Forum.

First, there are a lot of new users who really take time to get used to the community norms. They come in pretty grumpy, or they don't know what terminology we use. EA does have pretty unusual epistemic standards that are difficult to teach.

In theory, when you're writing a post, you should be able to see in real-time what the comments will be and what the karma for that post will be. So as you're editing, you write a bad sentence and immediately see an imagined angry comment. And then you say, "Okay, I guess I'll just delete that sentence." Obviously, this assistant service does have some harm as well as benefits, but hypothetically, some of the angry comments that we see could just be seen in advance and then prevented.

You could also have less intense steps, such as: "Oh, your writing style probably isn't up to the best standards and probably won't be properly appreciated, but here's our version of rewriting it, which you're free to take inspiration from."

In terms of people reading articles on the EA Forum, we might just have 10 different versions of an article or something. Readers would get a few sliders: "Do I want the very summarized version? Do I want the very long version? Do I want it to use stories from a particular historical period?" And the articles could be automatically adjusted accordingly.

Application: Evaluations

Ozzie: I see evaluation as a really major application for LLMs. In this case, it just means that for every EA Forum post or comment, we'd have an estimate of how well-written it is and how likely it is to hold up under scrutiny. How inflammatory does it seem? How good or bad does the post seem on a few different spectrums?

Hypothetically, we could have a lot of these auto-evaluations done. That's just on the EA Forum, of course. We'd really want that in all of Twitter and all of the media. Of course, it's debatable what those would look like.
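One minimal, hypothetical way to combine such auto-evaluations is a weighted score per post. Everything here is made up for illustration: the spectrum names, the weights, and the input scores (which, in the scenario above, would come from an LLM rather than being hard-coded).

```python
# Illustrative weights; a negative weight penalizes an undesirable spectrum.
WEIGHTS = {
    "writing_quality": 0.3,
    "robustness": 0.5,      # how likely the post is to hold up to scrutiny
    "inflammatory": -0.2,   # higher is worse, so it subtracts
}

def overall_score(scores: dict[str, float]) -> float:
    """Weighted sum of per-spectrum scores (each assumed to be 0-10)."""
    return sum(WEIGHTS[k] * v for k, v in scores.items() if k in WEIGHTS)
```

For example, a post scored 8 on writing quality, 6 on robustness, and 2 on inflammatoriness would get 0.3*8 + 0.5*6 - 0.2*2 = 5.0. The interesting design question is entirely in choosing the spectrums and weights, which this sketch leaves open.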

Misha: Yeah. It's unbelievable that today this is in the realm of the possible, while just a few years ago even sentiment prediction was mediocre.

Ozzie: A whole lot of things are possible all of a sudden. It's like a really freaking interesting time. 


LLM user interfaces

Ozzie: So, moving on, um, UX issues. What do you think about UX issues?

Misha: Oh yeah. You basically have two interfaces. One is ChatGPT, where the model is very polite and restricted, which is not ideal, but the chat interface is nice. The other is plain text completion, just continuing the line, where you can, of course, recreate the chat mode and other things.

But this all seems not that exciting, because the second one is just the LLM default, predicting the next token. Chat mode is nice and more humane, and hence appreciated, but it's also the first idea you would have. I think Janus builds what they call "multiverses," where you branch text in all sorts of directions, give only a few prompts, and rely mostly on curation. This is really nice; it's another interface that is available.

Today's appreciation and perception of capabilities are probably limited by the fact that you just get one thing as a response and can't easily customize how it's generated.

The next thing would be to have conversations with multiple LLM-simulated partners with different characters, different perspectives, and so on. You'd be able to have a wide array of shoulder advisors: one direct and straightforward; another more nuanced and careful; one who asks, "All these considerations are good, but what do you want?"; another appreciative and emotionally soothing.

I hope for more interfaces. But in practice, humans are kind of bad at developing new ones. It's like we still owe everything to Xerox PARC, right? Interfaces might unlock new creative ways to make models useful and helpful, but it took us a while to figure out even that you need to ask them to "think step by step." So probably exciting new crazy interfaces are not coming anytime soon.
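The shoulder-advisors idea above can be sketched as nothing more than the same question wrapped in different persona preambles. The persona names and wordings below are invented for illustration, and the actual model calls are left out; this only shows how little machinery the interface idea requires.

```python
# Hypothetical personas; each value is a system-style preamble.
PERSONAS = {
    "direct": "You are blunt and to the point. Give your honest take.",
    "careful": "You are nuanced and careful. Surface caveats and trade-offs.",
    "grounding": "You gently ask the user what they actually want.",
}

def advisor_prompts(question: str) -> dict[str, str]:
    """Build one combined (preamble + user question) prompt per persona."""
    return {
        name: f"{preamble}\n\nUser: {question}"
        for name, preamble in PERSONAS.items()
    }
```

Each resulting string would be sent to the model as a separate conversation, giving the user several differently-flavored replies to the same question; the hard part, as the discussion notes, is the interface for presenting them, not the prompting.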

Ozzie: I think my quick take is that, um, for the internet, it probably took thousands of person-years of UX and design work to figure out what websites should look like.

And then, once we had mobile devices, it took many hundreds or thousands more to figure out what those should look like. It's just a huge amount of work to really figure out the UX and UI for a new medium. And this seems like a much bigger shift to me than Web versus mobile, so I'd expect it to take a lot of time.

What should EAs do with LLMs?

Ozzie: So yeah, at the very end, um, what should EAs do now?

Misha: It's unclear if EAs are anything special here. I sincerely hope that people outside of the alignment community will pay more attention to this new technology, adopt these language models, and use them for their own benefit.

But yes, as previously mentioned, figuring out how to do research management well with LLMs might be very helpful. I would be pretty excited if someone spent a month or more intensely trying to integrate LLMs into their processes and workflows.

Oh, yeah, I am a bit involved in collecting all sorts of helpful base rates, both for forecasting purposes and just to inform people about them. I previously figured out I can just ask ChatGPT to give me more examples of events I am interested in, like "what are huge secret projects that have remained secret for a long time?" And sure enough, I got a list of 30+, and I'd never heard of most of them.

Ozzie: Yeah, that makes sense. It is also a big topic. But we need to finish up now. Thanks so much for your time. Any last comments? 

Misha: Nope. Thanks for hosting, Ozzie.

Ozzie: Yeah. Thank you.






Tangentially, this made me wonder whether the people running the EA Forum, LessWrong, etc. are thinking about, and "ready" for, the risk of mass-produced BS from LLMs flooding online spaces, including potentially forums like these.

I'm not sure if they are, but personally, I wouldn't be too concerned. If a bunch of new accounts joined that started to produce a bunch of maybe-fake-seeming content, that seems bad but not catastrophic.

If LLMs could genuinely do writing well enough to do well on the EA Forum, there are lots of positive things we could do with that.

> If LLMs could genuinely do writing well enough to do well on the EA Forum, there are lots of positive things we could do with that.

I agree there are those things, but I am overall probably more pessimistic than you; I think there is a significant asymmetry here toward pollution-y, not-truth-conducive content production.

(That said, I am not too concerned overall either; I think the solution is to make it harder to create an account, or to require some form of verification.)
