All of Venkatesh's Comments + Replies

How would you draw the Venn diagram of longtermism and neartermism?

I do not think it's about discount rates. I was recently corrected on this point here. It looks like conservatives and moderates who focus closer to the present have other, better reasons, like population axiologies or tractability concerns or something along those lines.

In that case, "how long into the future you're willing to look" doesn't seem to capture what's going on, since 'neartermists' are equally willing to look into the future.
How would you draw the Venn diagram of longtermism and neartermism?

There is some ambiguity in the terminology here, so here is how I visualize it with my own terminology. It's not a Venn diagram, but this is how I see it.


I interpret this as using different discount rates (specifically, pure time preference, to distinguish from discounting for marginal utility or exogenous extinction risk). Is that right? That is, temporal radicalists have pure time preference = 0, while the others have pure time preference > 0. Or do you mean something else by "how long into the future you're willing to look"?
The Many Faces of Effective Altruism

I thoroughly enjoyed this! The tone of the writing matched the idea being conveyed perfectly.

If I may add a category:

  1. Desi EA - Someone not from a developed country, kinda feeling out of place and totally inadequate to do anything about most mainstream EA cause areas. Mostly English-speaking educated elite from developing countries who possibly watch a lot more Hollywood than their local genres. (Also has some inability to parse slang. I honestly didn't understand what the monikers "IDW" and "A-aesthetic" meant, although I think I understood the explanation)
1Devin Kalish9d
Thank you! I'll admit my experience is a bit limited, and I haven't had much exposure to Desi EAs, but this sounds like a good extra category I, and maybe others in EA from the US or UK, often neglect in our analyses. "IDW" stands for "Intellectual Dark Web", which is sort of the name given to a group of centrist or right leaning, anti-woke public intellectuals like Steven Pinker and Sam Harris. "A-aesthetic" is just supposed to mean "non-aesthetic", as in avoiding attaching some particular cultural aesthetic to one's messaging.
EA Forum feature suggestion thread

Recently LessWrong created this feature. C'mon, EA Forum!

EA Forum feature suggestion thread

Please let me search within my bookmarks.

In general, I read something and bookmark it if I liked it. Later, that thing comes up in conversation, and I go into my bookmarks to find it so that I can share it with the other person mid-conversation quickly, but then I can't retrieve it from the bookmarks list as fast as I thought I could! This happens to me in almost every session as a facilitator of the EA Virtual Programs!

Thanks for the suggestion! I've added this to our backlog.
When did the EA Forum get so good?!

On the topic of saving posts - I personally use the bookmarks feature quite a bit. Just wanted to mention it in case someone wasn't aware. The one issue I have is that I can't search within my bookmarks.

One can bookmark posts by clicking on the 3 dots just below the title of the post and then clicking on Bookmark. Then the Bookmarks can be accessed from the dropdown menu that appears underneath the username.

Thanks! I wasn't aware of the bookmarks feature
4Cillian Crosson23d
Pocket [] might be another option to consider.
  * They have a Chrome plug-in which makes saving articles pretty easy (on the EA Forum & elsewhere).
  * You can search within your saved articles on Pocket.
  * The mobile app lets you listen to articles using TTS software.
EA is more than longtermism
  1. So EA isn’t “just longtermism,” but maybe it’s “a lot of longtermism”? And maybe it’s moving towards becoming “just longtermism”?

EA has definitely been moving towards "a lot of longtermism".

The OP has already provided some evidence of this with funding data. Another thing that signals to me that this is happening is the way 80,000 Hours has been changing their career guide. Their earlier career guide started by talking about Seligman's factors/Positive Psychology and made the very simple claim that if you want a satisfying career, positive psychology says... (read more)

Solving the replication crisis (FTX proposal)

I am really happy to see someone doing something about the replication crisis. Sorry that you didn't get funded. I know very little about FTX or grantmaking in general, so I can't comment on the nature of your proposal or how to make it better. But now that I see someone working on the replication crisis, I have updated on the tractability of this cause area and I am excited to learn more!

This excitement led to some small actions from my end:

  1. I visited the Institute for Replication website and found it to be very helpful. I really app
... (read more)
At this point, 'reproducibility analyst' = undergrad RAs; see this talk [] by AEA data editor Lars Vilhuber. Otherwise, the replications are currently done by academics volunteering in their spare time, which is why it would help to have full-time paid replicators.
What posts do you want someone to write?

Write about the replication crisis in the style of an 80,000 Hours problem profile. Basically, write about the problem, apply the SNT (scale, neglectedness, tractability) framework to it, mention orgs currently working on it, mention potential career options for someone who wants to address this problem, etc.

This suggestion came after reading this post.

Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"?

From reading this and other comments, I think we should rename longtermists to be "Temporal radicalists". The rest of the community can be "Temporal moderates" or even "Temporal conservatives" (aka "neartermists") if they are so inclined. I attempt to explain why below.

It looks like there is some agreement that longtermism is a fairly radical idea.

Many (but not all) of the so-called "neartermists" are simply not that radical, and that is the reason why they perceive their moniker to be problematic. One side is radical and many on the other side are jus... (read more)

Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"?

Is it possible to have a name related to discount rates? Please correct me if I am wrong, but I would guess all "neartermists" have a high discount rate, right?

I believe the majority of "neartermist" EAs don't have a high discount rate. They usually prioritise near-term effects because they don't think we can tractably influence the far future (i.e. cannot improve the far future in expectation). You might find the 80,000 Hours podcast episode with Alexander Berger interesting.

EDIT: neartermists may also be concerned by longtermist fanatical thinking or may be driven by a certain population axiology e.g. person-affecting view. In the EA movement though high discount rates are virtually unheard of.

Unsurprising things about the EA movement that surprised me

For me, the big revelation was that EA was not just about causes that are supported by RCTs/empirical evidence. It has this whole element of hits-based giving. In fact, the first time I realized this, I ended up creating a question on the forum about the misleading definition.

What complexity science and simulation have to offer effective altruism

Overall, this seems like a weak criticism worded strongly. It looks like the opposition here is more to the moniker of Complexity Science and its false claims of novelty, and not actually to the study of the phenomena that fall within the Complexity Science umbrella. This is analogous to a critique of Machine Learning that reads "ML is just a rebranding of Statistics". Although I agree that it is not novel and there is quite a bit of vagueness in the field, I disagree with the point that Complexity Science has not made progress.

I think the biggest utility of... (read more)

If the OP wants to discuss agent-based modeling, then I think they should discuss agent-based modeling. I don't think there is anything to be gained by calling agent-based models "complex systems", or that taking a complexity science viewpoint adds any value. Likewise, if you want to study networks, why not study networks? Again, adding the word "complex" doesn't buy you anything.

As I said in my original comment, part of complexity science is good: this is the idea we can use maths and physics to model other systems. But this is hardly a new insight. Economists, biophysicists, mathematical biologists, computer scientists, statisticians, and applied mathematicians have been doing this for centuries. While sometimes siloing can be a problem, for the most part ideas flow fairly freely between these disciplines and there is a lot of cross-pollination. When ideas don't flow it is usually because they aren't useful in the new field. (Maybe they rely on inappropriate assumptions, or are useful in the wrong regime, or answer the wrong questions, or are trivial and/or intractable in situations the new field cares about, or don't give empirically testable results, or are already used by the new field in a slightly different way.) The "problem" of "siloing" that complexity science claims to want to solve is largely a mirage.

But of course, complexity science makes greater claims than just this. It claims to be developing some general insights into the workings of complex systems. As I've noted in my previous comment, these claims are at best just false and at worst completely vacuous. I think it is dangerous to support the kind of sophistry spouted by complexity scientists, for the same reason it is dangerous to support sophistry anywhere. At best it draws attention away from scientists who are making progress on real problems, and at worst it leads to piles of misleading and overblown hype.

My criticism is not analogous to the claim that "ML is just a rebranding of st
What complexity science and simulation have to offer effective altruism

Thanks a lot for posting this! I also have the same feeling as finm in that I wanted to write something like this. But even if I had written it wouldn't have been as extensive as this one is. Wonderfully done!

To add to the pool of resources that the post has already linked to:

  1. You can meet other people interested in Complexity Science/Systems Thinking here: It is a wonderful community with a good mix of rookies and experts. So even if you are new to Complexity you should feel free to join in. I participated in their late
... (read more)
What is meta Effective Altruism?

The very vague definition of "cause area" is making it hard for me to think about meta EA. It feels like Global Priorities Research (GPR) is a cause area, and so working on it would be direct-impact work, but I am not sure. The same goes for EA movement building. Also, it starts getting trippy if we claim meta-EA is itself a cause area!

Maybe we can clarify the definition for cause area within this meta EA framework?

2Vaidehi Agarwalla1y
Thanks for your comment, I think it's a great point! I think it depends on your level of analysis. I outline my thinking in the comment above [] for how I'm thinking about it for this post. In most cases, it probably makes sense to separate out Global Priorities Research and EA Movement Building because practically the interventions can be very different (although there are some interesting areas they overlap - e.g. career advice research). Also, there was a discussion [] on this you might find interesting.
Exporting EA discussion norms

Specifics matter. There can be no one discussion norm to get people to be nice to each other.

I think things like discussion norms are highly contextual. The platform on which the discussion is happening, the point being discussed, and the people involved are some of the many factors that could end up mattering. Given these factors, transporting discussion norms from one virtual place to another might not be the right way to think about it.

I think the "EA-like" discussion norm is a function of several things. In addition to the factors... (read more)

Complexity and the Search for Leverage

Thanks for this wonderful article! I absolutely agree that it would be highly beneficial to have a community at the intersection of EA and Complexity. I recently participated in an event where I actually found several other EAs interested in Complexity, but unfortunately I couldn't spend enough time networking with them further (I got involved in another project there).

I have also been thinking about how we may use the tools of Complexity to make EA better although I haven't been able to concretely land on anything. Here are some vague thoughts I h... (read more)

Doesn't complexity have its "roots" in reality? as one aspect of the phenomenal world? of actuality and factual experience? rather than growing up out of a set of conceptualized abstractions? I refer, of course, to Varela, Maturana et al ... "self-organization" and such. Autopoiesis, nae? And, of course, Mandelbrot ... the fractal nature of reality ... #Lateral - Came across this in my bookmarks: [] /bdt
Is the current definition of EA not representative of hits-based giving?

Thanks for linking to the podcast! I hadn't listened to this one before and ended up listening to the whole thing and learnt quite a bit.

I just wonder if Ben actually had some other means in mind other than evidence and reasoning though. Do we happen to know what he might be referencing here? I recognize it could just be him being humble and feeling that future generations could come up with something better (like awesome crystal balls :-p). But just in case if something else is actually already there other than evidence and reason I find it really important to know.

1Prabhat Soni1y
Yeah, I agree. I don't have anything in mind as such. I think only Ben can answer this :P
Is the current definition of EA not representative of hits-based giving?

I both agree and disagree with you.


  • I agree that the ambiguity about whether giving in a hits-based way or an evidence-based way is better is an important aspect of current EA understanding. In fact, I think this could be a potential 4th point (I mentioned a third one earlier) to add to the definition desiderata: The definition should hint at the uncertainty that is in current EA understanding.
  • I also agree that my definition doesn't bring out this ambiguity. I am afraid it might even be doing the opposite! The general consensus is that both experim
... (read more)
Is the current definition of EA not representative of hits-based giving?

Thanks for bringing up Will's post! I have now updated the question's description to link to that.

I actually like Will's definition more. The reason is two-fold:

  1. Will's definition adds a bit more mystery, which makes me curious to actually work out what all the words mean. In fact, I would add this to the list of "principal desiderata for the definition" the post mentions: The definition should encourage people to think about EA a bit more deeply. It should be a good starting point for research.
  2. Will's definition is not radically different from what is already
... (read more)
I actually disagree with your definition. Will's definition allows for debate about what counts as evidence and careful reasoning, and whether hits-based giving or focusing on RCTs is a better path. That ambiguity seems critical for capturing what EA is, a project still somewhat in flux and one that allows for refinement, rather than claiming there are 2 specific different things.

A concrete example* of why we should be OK with leaving things ambiguous is considering ideas like the mathematical universe hypothesis (MUH). Someone can ask; "Should the MUH be considered as a potential path towards non-causal trade with other universes?" Is that question part of EA? I think there's a case to make that the answer is yes (in my view correctly,) because it is relevant to the question of revisiting the "tentatively understanding" part of Will's definition.

*In the strangest sense of "concrete" I think I've ever used.
Is the current definition of EA not representative of hits-based giving?
  1. The point about "working through what it really means" is very interesting. (more on this below) But when I read, "high-quality evidence and careful reasoning", it doesn't really engage the curious part of my brain to work out what that really means. All of those are words I have already heard and it feels like standard phrasing. When one isn't encouraged to actually work through that definition, it does feel like it is excluding high variance strategies. I am not sure if you feel this way but "high-quality evidence" to my brain just says empirical evide

... (read more)
Is the current definition of EA not representative of hits-based giving?

For evaluating the definition of EA, we would only want people who don't know much about EA. So we would need a focus group of EA newcomers and ask them what the definition means to them. Does that sound right?

Yeah or just ask people on Mechanical Turk or similar. (You could ask if people have already heard about EA and see if that makes a difference.)
Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened?

Consider this - say the EA figured out the number of people the problem could affect negatively (i.e., the scale). Then even if there is only a small probability that the EA could make a difference, shouldn't they have just taken it? Also, even if the EA couldn't avert the crisis despite their best attempts, they would still get career capital, right?

Another point to consider - IMHO, EA ideas have a certain dynamic of going against the grain. It challenged the established practices of charitable giving that existed for a long time. So an EA might be inspired by this and i... (read more)

Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened?

"... I believe personal features (like fit and comparative advantages) would likely trump other considerations..." That is a very interesting point. Sometimes I have a very similar feeling - the other 3 criteria are there mostly just so one doesn't base one's decision fully on personal fit but considers other things too. At the end of the day, I guess the personal fit consideration ends up weighing a lot more for a lot of people. Would love to hear from someone at 80,000 Hours if this is wrong...

Editing to add this: I wonder if there is a survey somewhere out there that asked people how much they weigh each of the 4 factors. That might validate this speculation...

Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened?

Thanks for linking to that OpenPhil page! It is really interesting. In fact, one of the pages that page links to talks about ABMs that rory_greig mentioned in his comment.

Would an EA have directed their career on fixing the subprime mortgage crisis of '07-'08 before it happened?

As someone interested in Complexity Science, I find the ABM point very appealing. For those of you with a further interest in this, I would highly recommend this paper by Richard Bookstaber as a place to start. He also wrote a book on this topic and was one of the people who foresaw the crisis.

Also if you are interested in Complexity Science but never got a chance to interact with people from the field/learn more about it, I would recommend signing up for this event.
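For readers wondering what an agent-based model of a financial crisis even looks like, here is a deliberately toy sketch of a fire-sale feedback loop — the kind of mechanism Bookstaber discusses. Everything in it (the thresholds, the price-impact constant, even the mechanism itself) is illustrative on my part and not taken from his paper:

```python
import random

def simulate(n_agents=100, n_steps=50, seed=0):
    """Toy agent-based model of fire-sale contagion.

    Each agent holds a leveraged position in one asset. If the price
    falls below an agent's personal threshold, the agent is forced to
    sell, and those forced sales push the price down further -- a
    feedback loop that aggregate (equation-based) models can miss.
    """
    rng = random.Random(seed)
    price = 1.0
    # Heterogeneous agents: each liquidates at its own price threshold.
    thresholds = [rng.uniform(0.5, 0.95) for _ in range(n_agents)]
    active = [True] * n_agents
    history = [price]
    for _ in range(n_steps):
        price += rng.gauss(0, 0.02)  # small exogenous price noise
        sellers = [i for i in range(n_agents)
                   if active[i] and price < thresholds[i]]
        for i in sellers:
            active[i] = False        # forced out of the market
        price -= 0.005 * len(sellers)  # price impact of forced selling
        history.append(price)
    return history, sum(active)

history, survivors = simulate()
print(f"final price: {history[-1]:.3f}, surviving agents: {survivors}")
```

Even in a toy like this, a small exogenous shock can trigger a cascade of liquidations once a few thresholds are crossed, which is the qualitative point the ABM literature on the '07-'08 crisis makes.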

2Rory Greig1y
Hey Venkatesh, I am also really interested in Complexity Science, in fact I am going to publish a blog post on here soon about Complexity Science and how it relates to EA. I've also read Bookstaber's book, in fact Doyne Farmer has a similar book coming out soon which looks great, you can read the intro here []. I hadn't heard of the Complexity Weekend event but it looks great, will check that out!
The case of the missing cause prioritisation research

Sorry for digging up this old post. But it was mentioned in the Jan 2021 EA forum Prize report published today and that is how I got here.

This comment assumes that Cause Prioritization (CP) is a cause area that requires people with breadth (having worked across different cause areas) rather than depth (having worked on a single cause area) of knowledge. That is, they need to know something about several cause areas instead of deeply understanding one of them. Would love to hear from CP researchers or others who would disagree.

  1. Maybe CP is an excellent path for some people

... (read more)
Announcing "Naming What We Can"!

May I suggest that you also name people who strongly identify with the ideas of some of these organizations? For instance, 64,620 hourists; Glomars; Dr.Phils; The InCredibles (CrediblyGood);

Also if FHI is Bostrom's squad then they should rename their currently boringly named "Research Areas" page to Squad Goals.

Happy April Fools! :-)

Strong +1 for squad goals :)
My preliminary research on the Adtech marketplace

Hi tamgent! Thanks for the suggestion. I have edited the post to add my thoughts on relevance to EA. I am no expert at cause prioritization, so I have tried my best to make an argument. Would love to hear your thoughts.

Open Thread #39

Nope. It's been a long time now and I had almost forgotten about it! I guess this means we should start one...

Open Thread #39

Right. I sent a message via the contact page in the EA Hub Website. Maybe I will get an update on what is going on.

Have you had any updates on this? This topic came up at a recent meetup I was at; I'd be interested in reading/contributing.
Open Thread #39

Is there an Effective Altruism wiki? I found this one: but the URL that it asks you to go to doesn't take you anywhere.

I am sorta new to the EA movement. I think contributing to a wiki will help me learn more. Plus, as a non-native English speaker trying to improve my English writing skills, I think contributing to a wiki could be useful to me. So where is the wiki? If there isn't one, shouldn't we start one or improve the aforementioned Wikia page?

According to this reddit thread []: