lukasberglund

Comments

Should local EA groups support political causes?

Good point. I'll bring this up with other group leaders.

Should local EA groups support political causes?

This approach is compelling and you make a good case for it, but I think what Lynch said about how not supporting a movement can feel like opposing it is significant here. On our university campus, supporting a movement like Black Lives Matter seems obvious, so when you refuse to, it makes it look like you have an ideological reason not to.

EAGxVirtual Unconference (Saturday, June 20th 2020)

What is the best leadership structure for (college) EA clubs?


A few people in the EA group organizers slack (6 to be exact) expressed interest in discussing this.

Here are some ideas for topics to cover:

  • The best overall structure (what positions should there be, etc.)
  • Should there be regular meetings among all general members/club leaders?
  • What are some mistakes to avoid?
  • What are some things that generally work well?
  • How to select leaders

I envision this as an open discussion for people to share their experiences. At the end, we could compile the results of our discussion into a forum post.

[AN #80]: Why AI risk might be solved without additional intervention from longtermists

At the beginning of the Christiano part it says:

There can't be too many things that reduce the expected value of the future by 10%; if there were, there would be no expected value left.

Why is it unlikely that there is little to no expected value left? Wouldn't it be conceivable that there are many such risks in the future, and that therefore little expected value remains? What am I missing?