All of lukasberglund's Comments + Replies

The Folly of "EAs Should"

[Comment pointing out a minor error]  Also, great post!

3 · Davidmanheim · 9mo · Whoops! My apologies to both individuals - this is now fixed. (I don't know what I was looking at when I wrote this, but I vaguely recall a second link, which I was thinking of linking to but can no longer find, where Peter made a similar point. If not, additional apologies!)
The Center for Election Science Appeal for 2020

I'm impressed with the success you guys had! I'm excited to see your organization develop.

2 · aaronhamlin · 9mo · Thanks! We look forward to continuing our impact. I'm always impressed with our team and what we're able to do with our resources.
Should local EA groups support political causes?

Good point. I'll bring this up with other group leaders.

Should local EA groups support political causes?

This approach is compelling and you make a good case for it, but I think Lynch's point that declining to support a movement can feel like opposing it is significant here. On our university campus, supporting a movement like Black Lives Matter seems like the obvious thing to do, so refusing to makes it look like you have an ideological reason not to.

EAGxVirtual Unconference (Saturday, June 20th 2020)

What is the best leadership structure for (college) EA clubs?


A few people in the EA group organizers Slack (six, to be exact) expressed interest in discussing this.

Here are some ideas for topics to cover:

  • The best overall structure (what positions should there be, etc.)
  • Should there be regular meetings among all general members/ club leaders?
  • What are some mistakes to avoid?
  • What are some things that generally work well?
  • How to select leaders

I envision this as an open discussion for people to share their experiences. At the end, we could compile the results of our discussion into a forum post.

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

In the beginning of the Christiano part it says

There can't be too many things that reduce the expected value of the future by 10%; if there were, there would be no expected value left.

Why is it unlikely that there is little to no expected value left? Isn't it conceivable that the future holds many such risks, and that therefore little expected value remains? What am I missing?
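For intuition on the quoted claim (my own illustration, not from the newsletter): risks that each independently cut expected value by 10% compound multiplicatively, so it only takes a few dozen of them before almost nothing remains.

```latex
% Hypothetical illustration, not from the original discussion:
% N independent risks, each reducing expected value by 10%,
% leave a fraction 0.9^N of the original expected value.
\[
  0.9^{7} \approx 0.48, \qquad 0.9^{22} \approx 0.10, \qquad 0.9^{50} \approx 0.005
\]
```

So believing the future still has substantial expected value amounts to believing there are not very many independent 10%-sized risks.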

2 · rohinmshah · 2y · See this comment thread [https://www.lesswrong.com/posts/QknPz9JQTQpGdaWDp/an-80-why-ai-risk-might-be-solved-without-additional#kcbZdGypHYXvK5qLD].
2 · Liam_Donovan · 2y · I think the argument is that we don't know how much expected value is left, but our decisions will have a much higher expected impact if the future is high-EV, so we should make decisions that would be very good conditional on the future being high-EV.