Recent Discussion

Dwarkesh Patel has one of the best podcasts around.

Here’s a lightly-edited extract from his recent conversation with Byrne Hobart.

I’ll share some reflections in the comments.

See also: Twitter version of this post.


Many belief systems have a way of segregating and limiting the impact of the most hardcore believers

Sam Bankman-Fried was an effective altruist and he was a strong proponent of risk-neutrality. We were talking many months ago and you made this really interesting comment that in many belief systems they have a way of segregating and limiting the impact of the most hardcore believers. So if you're a Christian, then the people who take it the most seriously... you can just make them monks so they don't cause that much damage to the rest of the world...

Mauricio (2h)
Thanks for sharing! The speakers on the podcast might not have had the time to make detailed arguments, but I find their arguments here pretty uncompelling. For example:

  • They claim that "many belief systems they have a way of segregating and limiting the impact of the most hardcore believers." But (at least from skimming) their evidence for this seems to be just the example of monastic traditions.
  • A speaker claims that "the leaders who take ideas seriously don't necessarily have a great track record." But they just provide a few cherry-picked (and dubious) examples, which is a pretty unreliable way of assessing a track record.
  • Counting Putin a "man of ideas" because he made a speech with lots of historical references--while ignoring the many better leaders who've also made history-laden speeches--looks like especially egregious cherry-picking.

So I think, although their conclusions are plausible, these arguments don't pass enough of an initial sanity check to be worth lots of our attention.

I share the impression that the actual data points being used here are pretty flimsy.

peterhartree (6h)
As Byrne points out, and some notable examples testify, some people manage to:

  1. "Go to the monastery" to explore ideas as a hardcore believer.
  2. After a while, "return to the world", and successfully thread the needle between innovation, moderation, and crazy town.

This is not an easy path. Many get stuck in the monastery, failing gracefully (i.e. harmlessly wasting their lives). Some return to the world, and achieve little. Others return to the world, accumulate great power, and then cause serious harm.

Concern about this sort of thing, presumably, is a major motivation for the esotericism of figures like Tyler Cowen [https://notes.pjh.is/people/Tyler+Cowen], Peter Thiel [https://twitter.com/peterhartree/status/1583509982099607552], Plato [https://sun.pjh.is/tyler-cowen-on-straussian-truths-and-rational-choice-ethics], and most of the other Straussian thinkers [https://plato.stanford.edu/entries/strauss-leo/].

Epistemic status: This post is meant to be a conversation starter rather than a conclusive argument. I don’t assert that any of the concerns in it are overwhelming, only that we have too quickly adopted a set of media communication practices without discussing their trade-offs.

Also, while this was in draft form, Shakeel Hashim, CEA’s new head of communications, made some positive comments on the main thesis, suggesting that he agreed with a lot of my criticisms and planned to have much more active involvement with the media. If so, this post may be largely redundant; nonetheless, it seems worth having the conversation in public.

CEA adheres to what they call the fidelity model of spreading ideas, which they formally introduced in 2017, though my sense is it...

I think something like that is a better idea. Or separately, for people to just write up their takes in comments and posts themselves. I've been reasonably happy with the outcomes of doing that during this FTX thing. I think I've been quoted in one or two articles, and I think those quotes have been fine.

John_Maxwell (1h)
Great points. There's an unfortunate dynamic which has occurred around discussions of longtermism outside EA. Within EA, we have a debate about whether it's better to donate to nearterm vs longterm charities. A lot of critical outsider discussion on longtermism ends up taking the nearterm side of our internal debate: "Those terrible longtermists want you to fund speculative Silicon Valley projects instead of giving to the world's poorest!"

But for people outside EA, nearterm charity vs longterm charity is generally the wrong counterfactual. Most people outside EA don't give 10% of their earnings to any effective charity. Most AI work outside EA is focused on making money or producing "cool" results, not mitigating disaster or planning for the long-term benefit of humanity.

Practically all EAs agree people should give 10% of their earnings to effective developing-world charities instead of 1% to ineffective developed-world ones. And practically all EAs agree that AI development should be done with significantly more thought and care. (I think even Émile Torres may agree on that! Could someone ask?) It's unfortunate that the internal nearterm vs longterm debate gets so much coverage, given that what we agree on is way more action-relevant to outsiders.

In any case, I mention this because it could play into your "ideologically diverse group of public figures" point somehow. Your idea seems interesting, but I also don't like the idea of amplifying internal debates further. I would love to see public statements like "Even though I have cause prioritization disagreements with Person X, y'all should really do as they suggest!" And acquiring a norm of using the media to gain leverage in internal debates seems pretty bad.
BrownHairedEevee (1h)
Yeah, it's the narcissism of small differences [https://en.wikipedia.org/wiki/Narcissism_of_small_differences]. If we're gonna emphasize our diversity more, we should also emphasize our unity. The narrative could be "EA is a framework for how to apply morality, and it's compatible with several moral systems."

This bit of pondering was beyond the scope of the manuscript I was writing (a followup to this post, which is why the examples are all about anti-rodenticide interventions), but I still wanted to share it. It was cut from a rough draft and lightly edited so it would make sense to Forum readers and to make the tone more conversational.


It is often difficult to engage directly in political campaigns without facing incentives to lie or misrepresent. This is exacerbated by the differences in expected communication styles in politics vs the general public vs EA. There is a tradition of strong arguments in broader EA (+rationality) culture for EAs to steer away from politics entirely, for both epistemic and effectiveness reasons. I find these arguments persuasive but wonder whether they have become an...

FWIW, I was always uneasy with SBF's massive donations to (mostly) Democratic politicians, and with his determination to defeat Trump at any cost, by any means necessary. It just didn't make sense in terms of EA reasoning, values, and priorities. It should have been a big red flag. 

I thought it was not super consistent with EA but easily explained by Sam's parents' careers and values. I often expressed worry about how it would affect our epistemics for EAs to become politicians bankrolled by Sam or for the community as a whole to feel pressure not und... (read more)

Holly_Elmore (2h)
YES

Before I get to the heart of what I want to share, a couple of disclaimers:

  • I am one person and, as such, I recognize that my perspective is both informed and limited by my experience and identity. I would like to share my perspective in the hope that it may interact with and broaden yours. 
  • In writing this post, my aim is not to be combative nor divisive. The values of Effective Altruism are my values (for the most part - I likely value rationality less than many), and its goals are my goals. I do not, therefore, aim to “take down” or harm the Effective Altruism community. Rather, I hope to challenge us all to think about what it means to be in community.

Who am I and

...
[anonymous] (3h)
Hi Monica, thanks for the reply. Suppose my original comment was [...] And I got these replies:

  • "I find the idea that a person has any responsibility whatsoever to donate to $EA_CHARITY baffling."
  • "Donating to $EA_CHARITY is not obviously net harm reducing. Their work may funge against other efforts. And even if they do perfect work, solving poverty in the developing world still leaves developed-world poverty as a major problem."
  • "The person reading your comment could be almost broke, such that if they donate to $EA_CHARITY they would be homeless and destitute. It is unreasonable for us to ask anyone to take that sacrifice."
  • "Other charities which claim to solve the problem $EA_CHARITY works on have been found to be scams. Don't be surprised if they sell your credit card details to cybercriminals."
  • "People have the right to choose how much they give."

These are all valid replies I agree with partially or fully. But they all seem to operate under the assumption that I hold a much different position than the one I actually hold. I'm not totally sure what I did to give people the mistaken impression. Maybe I just need to learn to avoid triggering people. In any case, I think you and I agree more than we disagree.
[anonymous] (4h)
I apologize for making so many edits instead of submitting separate comments the way Nathan did. Based on checking the vote tallies on this comment repeatedly, I think it got most agreevotes after the first edit and before the second one (I believe the agreevote was at around +8 at one point), suggesting that matchmaking is the idea that people like the most. Also, by "maybe you women should put your heads together on this" I was essentially suggesting a panel or focus group. I find myself increasingly unenthusiastic about participating in this thread. I think it could use a little more assumption-of-good-faith and sense of humor instead of what feels like eagerness to take offense.

Low back pain (lumbago) is a leading cause of disability and reduced productivity around the world, and the EA community seems no exception. Since I have had quite a bit of back pain, and spent hundreds of hours searching for solutions, I thought I might as well share some of the many useful tips and resources I've found.

Thanks to the hacks I list below, I've gone from having intense, crippling low back pain to maximizing my wellbeing in an Epicurean sense. (Needless to say, what follows is not medical advice, and may not work for everyone; low back pain can have many causes, and you should probably consult a doctor if you have severe back pain.)

Exercise

Key stretches 🧘🏾‍♀️

Stretch your psoas. A tight psoas muscle can lead to...

I want to respond specifically to the exercise parts of this blog. While squats and deadlifts are an important part of any leg workout routine and can be beneficial for the lower back, many people are unable to do these exercises due to their debilitating lower back pain. I would recommend starting with bracing techniques. Place your hands on your obliques right above your hip bone while sucking in your gut slightly. Then brace your core by flexing your core muscles. You should feel your hands push out and your core musculature flex all 360 degrees... (read more)

This piece from Gideon Lewis-Kraus (the writer for the MacAskill piece) is a recent overview of how EA has reacted to SBF and the FTX collapse. 

Lewis-Kraus's articles are probably the most in-depth public writing on EA, and he has had wide access to EA members and leadership. 

The New Yorker is highly respected and the narratives and attitudes in this piece will influence future perceptions of EA.

 

This piece contains inside information about discussions or warnings about SBF. It draws on interviews with a "senior EA" and excerpts from an internal Slack channel used by senior EAs.

When my profile of MacAskill, which discussed internal movement discord about Bankman-Fried’s rise to prominence, appeared in August, Wiblin vented his displeasure on the Slack channel.

...
[anonymous] (3h)

My comment was a sloppy attempt at simultaneously replying to (a) attitudes I've personally observed in EA, (b) the PR Slack channel as described in the article, and (c) your comment. I apologize if I misunderstood your comment or mischaracterized your position.

My reply was meant as a vague gesture at how I would like EA leadership to change relative to what came through in the New Yorker article. I wouldn't read too much into what I wrote. It's tricky to make a directional recommendation, because there's always the possibility that the reader has already made the update you want them to make, and your directional recommendation causes them to over-update.

DanielFilan (6h)
I think it doesn't even make sense at first glance! Anyway I retain my right to complain about bad things that are common.
freedomandutility (16h)
Some reasons I disagree:

I think internal criticism in EA is motivated by aiming for perfection, and is not motivated by aiming to be as good as other movements / ideologies. I think internal criticism with this motivation is entirely compatible with a self-image of exceptionalism.

While I think many EAs view the movement as exceptional and I agree with them, I think too many EAs assume individual EAs will be exceptional too, which I think is an unjustified expectation. In particular, I think EAs assume that individual EAs will be exceptionally good at being virtuous and following good social rules, which is a bad assumption.

I think EA also relies too heavily on personal networks, and especially given the adjacency to the rationalist community, EA is bad at mitigating the cognitive biases this can cause in grantmaking. I expect that people overestimate how good their friends are at being virtuous and following good social rules, and given that so many EAs are friends with each other at a personal level, this exacerbates the exceptionalism problem.

Hi everyone, SFF has received numerous emails recently from organizations interested in expedited funding.  I believe a number of people here already know about SFF Speculation Grants, but since we've never actually announced our existence on the EA Forum before:

The Survival and Flourishing Fund has a means of expediting funding requests at any time of year, via applications to our Speculation Grants program:

https://survivalandflourishing.fund/speculation-grants

SFF Speculation Grants are expedited grants organized by SFF outside of our biannual grant-recommendation process (the S-process). “Speculation Grantors” are volunteers with budgets to make these grants. Each Speculation Grantor’s budget grows or shrinks with the settlement of budget adjustments that we call “impact futures” (explained further below). Currently, we have a total of ~20 Speculation Grantors, with a combined budget of approximately $4MM. Our process and software infrastructure for funding these grants were co-designed by Andrew Critch and Oliver Habryka.

For instructions on how to apply, please visit the link above.

For general information about the Survival and Flourishing Fund, see:

https://survivalandflourishing.fund/

Suppose the Speculation Grantors are considering a grant that is extremely risky. Suppose that grant has a 10% chance of being evaluated in the next round as far more beneficial than all the rest of the Speculation Grants of that round combined, and yet the grant is net-negative in expectation (due to its potential to cause extreme accidental harm, which is not unlikely for extremely impactful anthropogenic x-risk interventions).

If in your implementation of "impact futures" the "fd_value" of an impact certificate cannot be negative, then to the extent that a Speculation Gran... (read more)
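To make the incentive concern concrete, here is a minimal sketch in Python, with made-up numbers and a deliberately simplified settlement rule (both are my assumptions, not SFF's actual implementation). If the settled value of an impact certificate is floored at zero, a grant with a small chance of an outstanding evaluation and a large chance of serious harm can look attractive to a Speculation Grantor even though its true expected value is negative.

```python
# Hypothetical illustration only: a simplified model of a floored "fd_value"
# settlement. The numbers and the settlement rule are assumptions.

def expected_true_value(p_good, value_if_good, value_if_bad):
    """Expected value of the grant for the world (harm counted as negative)."""
    return p_good * value_if_good + (1 - p_good) * value_if_bad

def expected_settlement(p_good, value_if_good, value_if_bad):
    """Expected budget adjustment if the settled value cannot go below zero."""
    return p_good * max(value_if_good, 0) + (1 - p_good) * max(value_if_bad, 0)

# A grant with a 10% chance of an unusually good evaluation,
# but a 90% chance of causing serious accidental harm.
p_good, value_if_good, value_if_bad = 0.10, 100.0, -20.0

print(expected_true_value(p_good, value_if_good, value_if_bad))  # -8.0: net-negative for the world
print(expected_settlement(p_good, value_if_good, value_if_bad))  # +10.0: looks attractive to the grantor
```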

I was convinced that I should work on extinction risk (X-Risk for short) relatively quickly, mostly because three points seemed very intuitive to me when I first heard them: 

  1. I should make my decisions based on their expected value.
  2. I should not discount the value of future lives in my expected value calculations.
  3. Because there will (probably) be more than 10^45 future lives (so many!), extinction risk only needs really, really, really tiny tractability in order to have very high expected value (the highest, perhaps).

While I was at a retreat the other day for people new to thinking ‘bout extinction, I was talking to someone I thought might be a good fit for working on X-Risk but who was skeptical because, while they agreed with point #1 above, they disagreed philosophically...

How much would I personally have to reduce X-risk to make this the optimal decision? Well, that’s simple. We just calculate: 

  • 25 billion * X = 20,000 lives saved
  • X = 20,000 / 25 billion
  • X = 0.0000008 
  • That’s 0.00008% in x-risk reduction for a single individual.
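As a quick sanity check of that arithmetic (a throwaway sketch; the 25 billion and 20,000 figures are the ones used above):

```python
# Break-even x-risk reduction: the absolute reduction in extinction probability
# that matches saving 20,000 lives, if extinction would forgo 25 billion lives.
lives_if_extinction_prevented = 25e9   # figure used in the post
lives_saved_by_malaria_career = 20_000

break_even_reduction = lives_saved_by_malaria_career / lives_if_extinction_prevented
print(break_even_reduction)            # 8e-07
print(f"{break_even_reduction:.6%}")   # 0.000080%
```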

I'm not sure I follow this exercise. Here's how I'm thinking about it:

Option A: spend your career on malaria.

  • Cost: one career
  • Payoff: save 20k lives with probability 1.

Option B: spend your career on x-risk.

  • Cost: one career
  • Payoff: save 25B lives with probability p (=P(prevent extinction)), save 0
... (read more)
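If the truncated payoff line continues along the lines of "save 0 lives otherwise" (an assumption on my part; the comment is cut off), the two options can be compared directly on expected lives saved, and the break-even probability comes out the same as the post's figure. A rough sketch:

```python
# Commenter's framing: compare the two careers on expected lives saved.
p = 8e-7                    # assumed probability that an x-risk career prevents extinction
option_a = 20_000           # malaria career: lives saved with probability 1
option_b = p * 25e9         # x-risk career: expected lives saved

print(option_a, option_b)   # 20000 20000.0 -- equal exactly at the break-even probability
```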

TL;DR: Friday, December 16 (and the weekend right after it) will be “Draft Amnesty Days” (DAD) on the Forum. People are encouraged to share unfinished posts, unpolished writing, butterfly ideas, thoughts they’re not sure they endorse, etc. on those days. 

I'm sharing this post primarily to let people know that this is happening (and when — save the dates!). 

We hope that Draft Amnesty Day will let people share valuable drafts that are otherwise sitting around waiting to be finished (something that might never happen). It’s possible that these drafts will later be “upgraded” to full posts; authors will be encouraged to post an updated version if they make significant changes. You can find more on the motivation behind this in the original post about it.

How this will work

During DAD, there will...

I think this is perfectly good to try, but I'm personally skeptical that it will end up being especially useful. My sense is that right now, there isn't a shortage of frontpage content on the forum. Rather, there seems to often be a shortage of deep reading, engagement, and discussion when someone writes a long object-level post. I would be interested to see initiatives aimed at fostering that kind of deeper engagement with content, rather than at trying to get more frontpage posts.