Nice, I didn't know! Their research goals seem quite broad, which is good. Within the context of AI existential risk, this project looks interesting.
I think the better question might be, "which professors or academic research groups in AI Safety are the best to work with?"
Two meta-points that I feel might be important:
With that out of the way, three research groups in academia come to mind:
Others:
The hallmark experiences of undiagnosed ADHD seem to be saying “I just need to try harder” over and over for years, or kicking yourself for intending to start work and then not getting much done...
Extremely relatable.
Thank you very much for writing this. I am in the process of getting a diagnosis, and this helped me overcome some of the totally made-up mental barriers regarding ADHD medication.
I downvoted and want to explain my reasoning briefly: the conclusions presented are too strong, and the justifications don't necessarily support them.
We simply don't have enough experience or data points to say what the "central problem" in a utilitarian community will be. The one study cited seems suggestive at best. People on the spectrum are, well, on a spectrum, and so is their behavior; how they react will not be as monolithic as suggested.
All that being said, I softly agree with the conclusion (because I think this would be true for any community).
All of this suggests that, as you recommend, communities with lots of consequentialists need to place a very strong emphasis on virtues and common-sense norms.
It's maybe worth clarifying that I'm most concerned about people who have a combination of high confidence in utilitarianism and a lack of qualms about putting it into practice.
Thank you, that makes more sense + I largely agree.
However, I also wonder whether all this could be better gauged by watching for key psychological traits rather than probing someone's ethical views. For instance, a person low in openness who shows high-risk behavior and happens to be a deontologist could cause as much trouble as a naive utilitarian optimizer. In either case, it would be the high-risk behavior, rather than how they make ethical decisions, that would potentially cause problems.
Disagree-voting for the following reasons:
I think Metaculus does a decent job at forecasting, and their forecasts are updated based on recent advances, but yes, predicting the future is hard.
There are two statements in this summary that I find somewhat confusing:
I haven't dug into the working paper yet, but these two statements seem contradictory. Is it possible not to be at the hinge of history yet live during the most influential time? I thought being at the hinge of history ≈ being at the most influential time. What am I missing?
Hello from another group organizer in the Southwest! We are in Tucson, AZ, just a six-hour drive away. Hopefully, someday in the not-so-far future, organizing a southwestern meetup / retreat / something will be feasible; that would be super cool!