Currently doing local AI safety movement building in Australia and NZ.
Upvoted for sharing an interesting framing!
That said, once you start accounting for ripple effects, it becomes very suspicious when someone claims that the best way to improve the future is to work on global poverty or donate to animal welfare, yet they aren't proposing a specific intervention that is especially likely to ripple outward in a positive way.
I really love the visuals of the voting tool; here's how we could make it even better for future iterations.
The axes currently aren't labeled, and, if I'm being honest, I ended up too lazy to vote because I would have had to count the notches manually. I'm pretty sure I'm not the only one (see Beware Trivial Inconveniences).
I also suspect this makes the results less meaningful. Even though people have wildly different views on what "7/10" or "strongly agree" means, some degree of social consensus has implicitly formed around these scales through use. Since this is a relatively novel interface, there will be much more variation in what three notches mean for one person versus another.
Anyway, thanks again to the team for building the tool/running this debate week!
I'm not really focused on animal rights nor do I spend much time thinking about it, so take this comment with a grain of salt.
However, if I wanted to make the future go well for animals, I'd be offering free vegan meals in the Bay Area or running a Bay Area conference on how to ensure that the transition to advanced AI systems goes well for animals.
Reality check: sorry for being harsh, but you're not going to end factory farming before the transition to advanced AI technologies. There's at most a 1-2% chance of that happening. So the best thing to do is to ensure that this transition goes well for animals and not just humans.
Anyway, that concludes my hot-take.
EA needs more communications projects.
Unfortunately, the EA Communications Fellowship and the EA Blog prize shut down[1]. Any new project needs to be adapted to the new funding environment.
If someone wanted to start something in this vein, I'd suggest something along the lines of AI Safety Camp. People would apply with a project to become project leads, and then others could apply to join those projects. Projects would likely run over a few months, part-time and remote[2].
Something like this would be relatively cheap, as someone could run it on a volunteer basis, though it might also make sense to have a paid organiser at some point.
Interesting article; however, I would class some of the things you've suggested might happen in the crazy-growth world as better fitting the modest-improvement-in-AI-abilities world.