All of berglund's Comments + Replies

FTX/CEA - show us your numbers!

Personally, I don't have a problem with the title. It clearly states the central point of the post. 

FTX/CEA - show us your numbers!

Regarding the example, spending $5k on EA group dinners is really not that much if it has even a 2% chance to cause one additional career change.

How much of the impact generated by the career change are you attributing to CEA spending here? I'm just wondering because counterfactuals run into the issue of double-counting (as discussed here). 

Unsure, but probably more than 20% if the person wouldn't be found through other means. I think it's reasonable to say there are three parties: CEA, the group organizers, and the person. None is replaceable, so each gets a 33% Shapley share. At a 2% chance of causing a career change, the $5k works out to $250k per career change, or roughly $750k per career change attributed to CEA, which is still clearly good at top unis. The bigger issue is whether the career change is actually counterfactual, because often it's just a speedup.
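As a rough sketch, the arithmetic above looks like this (the $5k spend, the 2% probability, and the equal three-way Shapley split are just the assumptions from this thread, not established figures):

```python
# Rough cost-effectiveness sketch for the group-dinner example above.
spend = 5_000           # dollars spent on EA group dinners (assumed)
p_career_change = 0.02  # assumed chance the spending causes one extra career change
cea_share = 1 / 3       # Shapley credit split equally among CEA, organizers, and the person

cost_per_career_change = spend / p_career_change              # $250k per career change overall
cost_per_cea_credited = cost_per_career_change / cea_share    # ~$750k per career change credited to CEA

print(f"${cost_per_career_change:,.0f} per career change")
print(f"${cost_per_cea_credited:,.0f} per career change attributed to CEA")
```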

Democratising Risk - or how EA deals with critics

I agree that there is an analogy to animal suffering here, but I think there's a difference in degree. To longtermists, the importance of future generations is many orders of magnitude greater than the importance of animal suffering is to animal welfare advocates. Therefore, I would claim, longtermists are more likely to ignore non-longtermist considerations than animal welfare advocates are to ignore considerations outside animal welfare.

Democratising Risk - or how EA deals with critics

Thanks for writing this! It seems like you've gone through a lot in publishing this. I am glad you had the courage and grit to go through with it despite the backlash you faced. 

7Patrick5mo
I would've found it helpful if the post included a definition of TUA (as well as saying what it stands for). Here's a relevant excerpt from the paper:

Techno-utopian approach (via paper abstract)

Which EA orgs, programs, companies, etc. started as side projects?

Not sure if this fits, but it seems like 80,000 Hours started as somewhat of a side project. This 2015 article says 80k started with Will MacAskill and Ben Todd “forming a discussion group and giving lectures on the topic, then eventually creating 80,000 Hours to spread their ideas.” (They link to this vintage lecture they gave.)

I’m not sure how much of a “side project” this was to Ben and Will. Maybe others know more about that era.

Exposure to 3m Pointless viewers- what to promote?

I agree with Aaron! Given the little time you have, I would make the pitch as simple as possible.

The Explanatory Obstacle of EA

Fair enough. I would guess you can usually have a higher impact through your career since you are doing something you've specialized in. But the first two examples you bring up seem valid.

2Mauricio6mo
Agreed! So maybe differences in feasible impact are: career >> (high-skill / well-located) volunteering > donations >> other stuff
Submit comments on Paxlovid to the FDA (deadline Nov 29th).

This seems like a good idea.

I submitted the following comment:

I urge the FDA to schedule its review of Paxlovid and to make the timeline 3 weeks or less, as it did with the COVID vaccine.

1000 people are dying of COVID in the US every day. With an efficacy of 89%, Paxlovid could prevent many of these deaths. The earlier Paxlovid is approved, the more lives will be saved.

Thank you for your consideration.

I wasn't sure what topic to put it under so I chose "Drug Industry - C0022." 

2AllAmericanBreakfast6mo
Thank you for taking action!
The Explanatory Obstacle of EA

I like this framing a lot. I particularly like the idea of replacing the phrase "doing good" with "helping others" and "maximization" with "prioritization."

I understand the impulse to mention volunteering before donations and careers because people naturally connect it with doing good. But I think it would be misleading for the following reasons:

  • As you said, there is currently very little emphasis on volunteering in EA
  • In most cases, individuals can do much more good by changing their career path or donating

I think we should be as accurate as we can w...

8GidonKadosh6mo
Thank you for this feedback, lukasberglund and Mauricio. I think I underestimated the misrepresentation argument, so I highly appreciate this. About your second argument on the impact of volunteer guidance, and the discussion with Mauricio: I entirely agree with your opinion on the impact of volunteering, but I think the main case for including volunteering in the pitch (and, in general, for investing in guidance for effective volunteering) is that for specific individuals who are interested in volunteering, it can be the entry point that attracts them to learn more about EA, whether we eventually help them with prioritizing volunteer opportunities or with career/donation decisions. For this reason (and because specific volunteering opportunities can be highly impactful, as you both discussed), I still think it's beneficial to include volunteering in EA pitches. I believe the argument about misrepresentation makes a good case for not mentioning volunteering first on the list, but I don't think the order is of high significance. I'll soon make some updates to the post about that. Thank you both again for your feedback!
9Mauricio6mo
Yup, this also lines up with how (American) undergrads empirically seem to get most enthusiastic about career-centered content (maybe because they're starved for good career guidance/direction). And a nitpick: I initially nodded along as I read this, but then I realized that intuition came partly from comparing effective donations with ineffective volunteering, which might not be comparing apples to apples. Do effective donations actually beat effective volunteering? I suspect many people can have more impact through highly effective volunteering, e.g.:

  • Volunteering in movement-building/fundraising/recruitment
  • High-skill volunteering for orgs focused on having positive long-term impacts, or potentially for animal advocacy orgs (since these seem especially skill-constrained)
  • Volunteering with a mainstream policy org to later land an impactful job there (although this one's iffy as an example since it's kind of about careers)

(Still agree that emphasizing volunteering wouldn't be very representative of what the movement focuses on.)
What high-level change would you make to EA strategy?

Is there evidence or a theoretical reason to believe that not experimenting with governance causes a movement to slow down over time?

The Folly of "EAs Should"

[Comment pointing out a minor error]  Also, great post!

3Davidmanheim1y
Whoops! My apologies to both individuals - this is now fixed. (I don't know what I was looking at when I wrote this, but I vaguely recall that there was a second link, which I can no longer find, where Peter made a similar point. If not, additional apologies!)
The Center for Election Science Appeal for 2020

I'm impressed with the success you guys had! I'm excited to see your organization develop.

2aaronhamlin1y
Thanks! We look forward to continuing our impact. I'm always impressed with our team and what we're able to do with our resources.
Should local EA groups support political causes?

Good point. I'll bring this up with other group leaders.

Should local EA groups support political causes?

This approach is compelling and you make a good case for it, but I think Lynch's point, that not supporting a movement can feel like opposing it, is significant here. On our university campus, supporting a movement like Black Lives Matter seems obvious, so when you refuse to, it looks like you have an ideological reason not to.

EAGxVirtual Unconference (Saturday, June 20th 2020)

What is the best leadership structure for (college) EA clubs?


A few people in the EA group organizers Slack (6, to be exact) expressed interest in discussing this.

Here are some ideas for topics to cover:

  • The best overall structure (what positions should there be, etc.)
  • Should there be regular meetings among all general members/ club leaders?
  • What are some mistakes to avoid?
  • What are some things that generally work well?
  • How to select leaders

I envision this as an open discussion for people to share their experiences. At the end, we could compile the result of our discussion into a forum post.

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

At the beginning of the Christiano part, it says:

There can't be too many things that reduce the expected value of the future by 10%; if there were, there would be no expected value left.

Why is it unlikely that there is little to no expected value left? Wouldn't it be conceivable that there are a lot of risks in the future and that therefore there is little expected value left? What am I missing?
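For concreteness, here is a minimal sketch of the compounding arithmetic behind the quoted claim (assuming the reductions are independent and each removes 10% of expected value; the risk counts below are illustrative, not from the newsletter):

```python
# Sketch: how expected value compounds away under many independent 10% reductions.
for n_risks in [1, 5, 10, 20, 50]:
    remaining = 0.9 ** n_risks  # fraction of expected value left after n_risks reductions
    print(f"{n_risks:2d} independent 10% reductions -> {remaining:.1%} of expected value left")
```

Under these assumptions, 20 such reductions leave about 12% of the expected value and 50 leave about 0.5%, which is the sense in which "too many" 10% risks would leave almost nothing.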

2Rohin Shah2y
See this comment thread [https://www.lesswrong.com/posts/QknPz9JQTQpGdaWDp/an-80-why-ai-risk-might-be-solved-without-additional#kcbZdGypHYXvK5qLD] .
2Liam_Donovan2y
I think the argument is that we don't know how much expected value is left, but our decisions will have a much higher expected impact if the future is high-EV, so we should make decisions that would be very good conditional on the future being high-EV.