nnn

Answer by nnn · Apr 20, 2022

Disclaimer: Be careful about definitions when interpreting Metaculus questions. Their resolution criteria define AGI in a way that doesn't align with my own definition (e.g., meeting the named criteria would not mean AI can replace all human tasks). Also, recent developments have drawn an influx of additional forecasters, which should be factored in.

I've listed some of my current sources below. I hope this helps!

  • Metaculus forecasts:
  • Other:
    • Shane Legg (DeepMind co-founder): 50% by 2030, some chance within the next 10-30 years
    • Demis Hassabis: 10-20 years from now
    • Eliezer & Paul’s IMO challenge bet: “Paul at <8%, Eliezer at >16% for AI made before the IMO is able to get a gold (under time controls etc. of grand challenge) in one of 2022-2025. Separately, we have Paul at <4% of an AI able to solve the "hardest" problem under the same conditions.”
      • Eliezer:
        “My probability is at least 16% [on the IMO grand challenge falling], though I'd have to think more and Look into Things, and maybe ask for such sad little metrics as are available before I was confident saying how much more.”
      • Paul Christiano:
        "I'd put 4% on "For the 2022, 2023, 2024, or 2025 IMO an AI built before the IMO is able to solve the single hardest problem"
        I think the IMO challenge would be significant direct evidence that powerful AI would be sooner, or at least would be technologically possible sooner. I think this would be fairly significant evidence, perhaps pushing my 2040 TAI probability up from 25% to 40% or something like that."
        (A rough worked version of this 25%-to-40% update is sketched after this list.)
  • Ajeya's "When required computation may be affordable" (from ACT):
    • Ajeya created a function comparing predicted annual investment in giant AI projects (an upward-sloping curve) with the likely cost of training a human-level AI (a downward-sloping curve).
    • The year these curves meet marks the first affordable human-level training run (see the toy sketch after this list).
    • You can play around with the spreadsheet here.
    • Ajeya’s anchor weights: 20% neural net (short horizon), 30% neural net (medium horizon), 15% neural net (long horizon), 5% human lifetime as training data, 10% evolutionary history as training data, 10% genome as parameter count
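
As a side note on Paul's numbers: here's a minimal sketch of how strong a move from 25% to 40% is in odds terms. The two probabilities are Paul's; the odds-ratio framing is my own gloss, not something from the thread.

```python
# Implied strength of Paul's update: P(TAI by 2040) moves from 25% to
# 40% conditional on the IMO grand challenge falling.
# (Worked illustration only; the probabilities are Paul's, the
# odds-ratio framing is mine.)

prior = 0.25       # P(TAI by 2040) before an IMO gold
posterior = 0.40   # P(TAI by 2040 | an AI gets IMO gold by 2025)

prior_odds = prior / (1 - prior)              # 0.25/0.75 = 1:3
posterior_odds = posterior / (1 - posterior)  # 0.40/0.60 = 2:3

# Bayes factor: how much more likely the IMO evidence is if TAI is
# coming by 2040 than if it isn't.
bayes_factor = posterior_odds / prior_odds
print(f"Implied Bayes factor: {bayes_factor:.1f}")  # -> 2.0
```

So the hypothetical update treats an IMO gold as roughly 2:1 evidence for TAI by 2040, which fits Paul's "fairly significant evidence" wording.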
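And to make the two-curve mechanism concrete, here is a toy sketch. The shape (exponentially growing investment meeting exponentially falling training cost) follows the description above, but every number in it (starting values, growth and decline rates) and the function names are illustrative assumptions of mine, not figures from Ajeya's report or spreadsheet.

```python
# Toy version of the two-curve setup. All constants are illustrative
# assumptions, NOT Ajeya's actual estimates.

def investment(year: int) -> float:
    """Willingness to spend on one giant training run (USD), assumed to grow ~30%/yr."""
    return 1e8 * 1.3 ** (year - 2022)

def training_cost(year: int) -> float:
    """Assumed cost to train a human-level model (USD), falling ~30%/yr
    as hardware and algorithms improve."""
    return 1e12 * 0.7 ** (year - 2022)

# The first year investment overtakes cost is the model's predicted
# date for the first affordable human-level training run.
crossover = next(y for y in range(2022, 2100)
                 if investment(y) >= training_cost(y))
print(f"Curves cross in {crossover}")  # -> 2037 under these toy numbers
```

The actual model replaces these point estimates with probability distributions built from the anchor weights above, so it yields a distribution over years rather than a single crossover date.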

 

Excellent overview; thank you, Sjir!

  • Including a "country" column might be valuable, especially for the national orgs (or, as an MVP, put the country in [brackets] next to each name)
  • You might be able to embed a sheet view directly on the forum (anything from a static CSV table, to a view that dynamically reflects updates to the original spreadsheet, to a fully interactive embed)

Thanks for the great source, Richard! I intentionally didn't include a description of how best to go about contacting people, as this post was mainly directed at more established orgs that have direct access to recruiters. However, after posting this and receiving messages from interested individuals and smaller orgs, your contribution comes in really handy!

If anyone is interested in more detailed information about contacting and/or being connected, please reach out!

A serene moment in the midst of all the chaos. Thank you for sharing.

Hi there! Sorry for the late reply. I was also planning on joining tomorrow. Thank you and see you soon!

I've created a spreadsheet in which I've collected roughly 500 AI Safety / longtermism-related ideas (non-research meta ideas, so they sometimes overlap with other cause areas). They are now semi-prioritized and semi-fleshed-out. We are in the process of turning this into a central alignment ideas database / matchmaking / prioritization platform for funders, talent, ideas, and advisors. Let me know if you'd like to support this project or would like more info on it!

“But there doesn’t seem to be a useful Slack-like thing for just ‘I’m going to be living in this place, who can I find housing with?’”

Seems to me like EA Houses is doing that. What problem would the Slack channel be solving that EAH isn't?

Hi Akash, awesome that you're doing this! 👏🏼

Esben Kran and I are setting up reading challenges, which currently cover only AI Safety as a topic. Soon we'll be adding managerial topics too (communications, ops, recruiting, etc.). We named it "Reading What We Can" :P https://readingwhatwecan.com/. It's a great 80/20 way of upskilling during a summer break. 📚

(If anybody is lacking the funds to upskill (or similar), feel free to get in touch!)

Thanks, Pablo! I replied to Michael's message underneath with some examples of how personal development could be structured.

Empty tag: now I feel inclined to write a post on productivity :p
