Excellent overview, thank you Sjir!
Thanks for the great source, Richard! I intentionally didn't include a description of how best to go about contacting people, as this post was mainly directed at more established orgs that have direct access to recruiters. However, after posting this and receiving messages from interested individuals and smaller orgs, your contribution comes in really handy!
If anyone is interested in more detailed information about contacting and/or being connected, please reach out!
A serene moment in the midst of all the chaos. Thank you for sharing.
Hi there! Sorry for the late reply. I was also planning on joining tomorrow. Thank you and see you soon!
I've created a spreadsheet in which I collected roughly 500 AI Safety / Longtermism-related ideas (non-research meta ideas, so they sometimes overlap with other cause areas as well). They are now semi-prioritized and semi-fleshed out. We are in the process of turning this into a central alignment ideas database / matchmaking / prioritization platform, for funders, talent, ideas, and advisors. Let me know if you'd like to support this project, or would like some more info on it!
But there doesn’t seem to be a useful slack-like thing for just “I’m going to be living in this place, who can I find housing with?”
Seems to me like EA Houses is doing that. What problem would the slack channel be solving that EAH isn't?
Hi Akash, awesome that you're doing this! 👏🏼
Esben Kran and I are setting up reading challenges, which currently cover only AI Safety as a topic. Soon we'll be adding managerial topics too (communications, ops, recruiting, etc.). We named it "Reading What We Can" :P https://readingwhatwecan.com/. This is a great 80/20 way of upskilling during a summer break. 📚
(If anybody is lacking the funds to upskill, or anything else, feel free to get in touch!)
Thanks for pointing that out!
Thanks, Pablo! I replied to Michael's comment below with some examples of how personal development could be structured.
Seeing this empty tag, I now feel inclined to write a post on productivity :p
Disclaimer: Be careful about definitions and about interpreting Metaculus questions. The latter involve resolution criteria for defining AGI that don't align with my own definition (e.g., meeting the stated criteria would not mean all human tasks could be replaced). Also, there has been an influx of additional forecasters following recent developments, which should be factored in.
I've listed some of my current sources below. I hope this helps!
“My probability is at least 16% [on the IMO grand challenge falling], though I'd have to think more and Look into Things, and maybe ask for such sad little metrics as are available before I was confident saying how much more.”
"I'd put 4% on "For the 2022, 2023, 2024, or 2025 IMO an AI built before the IMO is able to solve the single hardest problem"
I think the IMO challenge would be significant direct evidence that powerful AI would be sooner, or at least would be technologically possible sooner. I think this would be fairly significant evidence, perhaps pushing my 2040 TAI probability up from 25% to 40% or something like that."