Thomas Kwa

Researcher @ METR
3764 karma · Working (0-5 years) · Berkeley, CA, USA

Bio

AI safety researcher

Comments (306)

Are you interested in betting on these beliefs? I couldn't find a bet with Vasco, but it seems more likely we could find one, since you seem more confident.

  • You're shooting the messenger. I'm not advocating for downvoting posts that smell of "the outgroup", just saying that this happens in most communities centered around an ideological or even methodological framework. It's a way you can be downvoted while still being correct, especially by the LEAST thoughtful 25% of EA Forum voters.
  • Please read the quote from Claude more carefully. MacAskill is not an "anti-utilitarian" who thinks consequentialism is "fundamentally misguided"; he's the moral uncertainty guy. In practice, the moral parliament usually recommends actions similar to consequentialism with side constraints.

I probably won't engage more with this conversation.

Claude thinks possible outgroups include the following, which is similar to what I had in mind:

Based on the EA Forum's general orientation, here are five individuals/groups whose characteristic opinions would likely face downvotes:

  1. Effective accelerationists (e/acc) - Advocates for rapid AI development with minimal safety precautions, viewing existential risk concerns as overblown or counterproductive
  2. TESCREAL critics (like Emile Torres, as you mentioned) - Scholars who frame longtermism/EA as ideologically dangerous, often linking it to eugenics, colonialism, or techno-utopianism
  3. Anti-utilitarian philosophers - Strong deontologists or virtue ethicists who reject consequentialist frameworks as fundamentally misguided, particularly on issues like population ethics or AI risk trade-offs
  4. Degrowth/anti-progress advocates - Those who argue economic/technological growth is net-negative and should be reduced, contrary to EA's generally pro-progress orientation
  5. Left-accelerationists and systemic change advocates - Critics who view EA as a "neoliberal" distraction from necessary revolutionary change, or who see philanthropic approaches as fundamentally illegitimate compared to state redistribution

  • My main concern is that the arrival of AGI completely changes the situation in some unexpected way.
    • e.g. in the recent 80k podcast on fertility, Rob Wiblin opines that the fertility crash would be a global priority if not for AI likely replacing human labor soon and obviating the need for countries to have large human populations. There could be other effects.
    • My guess is that due to advanced AI, both artificial wombs and immortality will be technically feasible in the next 40 years, as well as other crazy healthcare tech. This is not an uncommon view.
  • Before anything like a Delphi forecast, it seems better to informally interview a couple of experts and then write your own quick report on the technical barriers to artificial wombs. You can then build this into the structure of any forecasting exercise, e.g. by asking experts to forecast when each of hurdles X, Y, and Z will be solved, whereupon you can identify where the level of agreement is highest and lowest and run consistency checks against the overall forecast (see the sketch after this list).
  • Most infant mortality still happens in the developing world, due to much more basic factors like tropical diseases. So if the goal is reducing infant mortality globally, you won't be addressing most of the problem; and for maternal mortality, the tech will need to be mature enough to be affordable for the average person in low-income countries, as well as culturally accepted.
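
As a minimal sketch of the forecasting structure mentioned above, assuming each expert gives a median year for each hurdle and for the overall milestone (all hurdle names and numbers here are hypothetical):

```python
import statistics

# Hypothetical median-year forecasts from three experts for each technical
# hurdle and for the overall milestone (e.g. artificial wombs in routine use).
forecasts = {
    "hurdle_X": [2035, 2045, 2038],
    "hurdle_Y": [2032, 2034, 2033],
    "hurdle_Z": [2040, 2070, 2050],
    "overall":  [2042, 2055, 2045],
}

# Agreement is highest where the spread of the experts' forecasts is smallest.
for question, years in forecasts.items():
    spread = max(years) - min(years)
    print(f"{question}: median {statistics.median(years)}, spread {spread} years")

# Consistency check: the overall milestone requires every hurdle to be solved,
# so its median shouldn't come before the latest hurdle's median.
latest_hurdle = max(
    statistics.median(v) for k, v in forecasts.items() if k != "overall"
)
overall = statistics.median(forecasts["overall"])
if overall < latest_hurdle:
    print(f"Inconsistent: overall median {overall} precedes latest hurdle median {latest_hurdle}")
```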

Yeah, while I think truth-seeking is a real thing, I agree it's often hard to judge in practice and vulnerable to being used as a weasel word.

Basically I have two concerns with deferring to experts. The first is that when the world lacks people with true subject-matter expertise, whoever has the most prestige (maybe not CEOs, but certainly mainstream researchers on slightly related questions) will be seen as the experts, and we will need to worry about deferring to them.

Second, because EA topics are selected for being too weird or unpopular to attract mainstream attention and funding, I think a common pattern is that, of the best interventions, some are already funded, some are recommended by mainstream experts and remain underfunded, and some are too weird for the mainstream. It's not really possible to find the "too weird" kind without forming an inside view. We can start out deferring to experts, but by the time we've spent enough resources investigating a question to be at all confident in what to do, deference to experts has been partially replaced by understanding the research ourselves, along with the experts' load-bearing assumptions and biases. The mainstream experts will always get some weight, but it diminishes as your views start to incorporate their models rather than their conclusions. (An example that comes to mind is economists on whether AGI will create explosive growth, where good economic models have recently been developed by EA sources, now including some economists, that vary assumptions and justify differences from mainstream economists' assumptions.)

Wish I could give more concrete examples but I'm a bit swamped at work right now.

Not "everyone agrees" what "utilitarianism" means either and it remains a useful word. In context you can tell I mean someone whose attitude, methods and incentives allow them to avoid the biases I listed and others.

I think the "most topics" thing is ambiguous. There are some topics on which mainstream experts tend to be correct and some on which they're wrong, and although expertise is valuable on topics experts think about, they might be wrong on most topics central to EA. [1] Do we really wish we deferred to the CEO of PETA on what animal welfare interventions are best? EAs built that field in the last 15 years far beyond what "experts" knew before.

In the real world, assuming we have more than five minutes to think about a question, we shouldn't "defer" to experts or immediately "embrace contrarian views", but rather use their expertise and reject it when appropriate. Since this wasn't an option in the poll, my guess is that many respondents just wrote how much they like being contrarian, and since EAs often have to be contrarian on the topics they think about, it came out in favor of contrarianism.

[1] Experts can be wrong because they don't think in probabilities, because they lack imagination, because there are obvious political incentives to say one thing over another, and probably for other reasons. Also, many of the central EA questions don't have actual well-developed scientific fields around them, so many of the "experts" aren't people who have thought about similar questions in a truth-seeking way for many years.

I think this is a significant reason why people downvote some, but not all, things they disagree with, especially when a member of the outgroup makes arguments EAs have refuted before and would need to re-explain. Not saying that's actually you.

Can you explain what you mean by "contextualizing more"? (What a curiously recursive question...)

I mean it in this sense: making people think you're not part of the outgroup and don't hold objectionable beliefs related to the ones you actually hold, in whatever way is sensible and honest.

Maybe LW is better at using the disagreement button, as I find it's pretty common there for unpopular opinions to get lots of upvotes and disagree-votes. One could use the API to check whether the karma-agreement correlations are different there (rough sketch below).
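
A rough sketch of that check, untested against the live schema: both forums expose a GraphQL endpoint, but the query shape and field names here (especially extendedScore and its agreement entry) are assumptions that may need adjusting:

```python
import requests
from statistics import correlation  # Python 3.10+

# Assumed query shape; field names may differ in the actual schema.
QUERY = """
{
  comments(input: {terms: {limit: 500}}) {
    results {
      baseScore       # overall karma
      extendedScore   # assumed to contain the agreement-axis total
    }
  }
}
"""

def karma_agreement_correlation(endpoint: str) -> float:
    """Pearson correlation between comment karma and agreement score."""
    resp = requests.post(endpoint, json={"query": QUERY})
    resp.raise_for_status()
    results = resp.json()["data"]["comments"]["results"]
    pairs = [
        (c["baseScore"], c["extendedScore"]["agreement"])
        for c in results
        if c.get("extendedScore") and "agreement" in c["extendedScore"]
    ]
    karma, agreement = zip(*pairs)
    return correlation(list(karma), list(agreement))

for name, url in [
    ("LessWrong", "https://www.lesswrong.com/graphql"),
    ("EA Forum", "https://forum.effectivealtruism.org/graphql"),
]:
    print(name, karma_agreement_correlation(url))
```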

IMO the real answer is that veganism is not an essential part of EA philosophy; it just happens to be correlated with EA due to the large number of people in animal advocacy. Most EA vegans and non-vegans think that their diet is a small portion of their impact compared to their career, and it's not even close: every time you spend an extra $5 finding a restaurant with a vegan option, you could help 5,000 shrimp instead. Vegans have other reasons, like non-consequentialist ethics, virtue signaling or self-signaling, or just a desire not to eat the actual flesh/body fluids of tortured animals.

If you have a similar emotional reaction to other products, it seems completely valid to boycott them, although as you mention there can be significant practical burdens, both in adjusting one's lifestyle to avoid such products and in judging whether the claims of marginal impact are valid. Being vegan is not obligatory in my culture, and neither should boycotts be, unless the marginal impact of the boycott is larger than that of any other life choice, which is essentially never the case.
