As promised, here are the top ten suggestions for our next debate week. 

This thread will be pinned throughout the week. The statement with the highest karma when I get into work at 9am on Monday the 2nd of February will be the subject of our debate week in March[1]. Comments (beyond those we are voting on) are disabled to make the voting easier. If you'd like to comment on something, please do so here, or DM me if you have feedback. 

Note that you are voting on the topic, not the precise wording. I've kept the wording the same as in this thread, but once we collectively choose a topic, we'll have an opportunity to make it more precise or reframe it a little so it works better for a debate. 

[1] Again, the generic caveat applies here: there are legal and practical reasons that CEA might want to veto a topic. I personally think this is very unlikely, but we do reserve the right to veto if necessary. In the case of a veto, we'd move to the topic with the second-highest karma. 


"Countering democratic backsliding is now a more urgent issue than more traditional longtermist concerns." 

Note that due to our Politics on the EA Forum policy, this conversation would have to stay quite meta, and be pretty actively moderated. We reserve the right to disqualify the option if we feel we can't moderate it adequately at our current capacity. 

"Policy or institutional approaches to AI Safety are currently more effective than technical alignment work"

"Conditional on avoiding existential catastrophes, the vast majority of future value depends on whether humanity implements a comprehensive reflection process (e.g., long reflection or coherent extrapolated volition)"

"EA focuses too much on AI"

"Most, if not all, animal advocacy funding should be directed towards bringing cultivated meat products to market ASAP." 

"EAs aren't giving enough weight to longer AI timelines"

"Farmed animal advocacy currently underinvests in anticipating future policy and industry shifts relative to responding to current harms."

"Individual donors shouldn't diversify their donations"

"By default, the world where AI goes well for humans will also go well for other sentient beings" 

"In expectation, work to prevent the use of weapons of mass destruction (nuclear weapons, bio-engineered viruses, and perhaps new AI-powered weapons), as funded by Longview Philanthropy (Emerging Challenges Fund) and Founders Pledge (Global Catastrophic Risks Fund), is more effective at saving lives than GiveWell's top charities."
