I'm building tools for forecasting and thinking at Sage.
My PhD in HCI at St Andrews is paused, and I previously worked at Clearer Thinking.
Web: http://adambinks.me/
Tweeting, sometimes about EA: https://twitter.com/adambinks_
Super interesting to see this analysis, especially the table of current capabilities - thank you!
I have interpreted [feasible] as: one year after the forecasted date, have AI labs achieved these milestones and disclosed this publicly?
It seems to me that this ends up being more conservative than the original "Ignore the question of whether they would choose to", which presumably makes the expert forecasts look worse here than they would under the original definition.
For example, a task like "win Angry Birds" seems pretty achievable to me; it's just that no one's thinking about Angry Birds these days, so it probably hasn't been attempted. Does that sound right to you?
I'm curious if you have a rough estimate of how many of these tasks would be achievable within a year if top labs attempted them?
Thanks for the feedback, Forslack! I'm curious whether you'd prefer to play without logging in because you don't have a Google account, or because you don't want to share your email?
Thanks very much for the feedback, this is really helpful!
If anyone has question suggestions, I'd really appreciate them! I think crowdsourcing questions will help us make them super varied and globally relevant. I made a suggestion form here: https://forms.gle/792QQAfqTrutAH9e6
Thanks for organising! I had a great time and I'd love to see more of these events. Maybe you could circulate a Google Doc beforehand to help people brainstorm ideas, comment on each other's, and indicate interest in working on them. You could prepopulate it with ideas you've generated as the organisers. That way, when people show up they can get started faster - I think we spent the first hour or so choosing our idea.
(Btw - our BOTEC calculator's first page is at this URL.)
Interesting to think about!
But for this kind of bargain to work, wouldn't you need confidence that the you in other worlds would uphold their end of the bargain?
E.g., if it looks like I'm in videogame-world, it's probably pretty easy to spend lots of time playing videogames. But can I be confident that my counterpart in altruism-world will actually allocate enough of their time towards altruism?
(Note I don't know anything about Nash bargaining and only read the non-maths parts of this post, so let me know if this is a basic misunderstanding!)
This is a really useful round-up, thank you!
A data-point on this - today I was looking for this graph and couldn't find it. I found effectivealtruismdata.com, but sadly it didn't have these graphs on it. So it would be cool to have them on there, or at least to link to this post from there!
Thanks Jack, great to see this!
Pulling out the relevant part as a quote for other readers:
- On average, it took about 25 hours to organize and run a campaign (20 hours by organizers and 5 hours by HIP).
- The events generated an average of 786 USD per hour of counterfactual donations to effective charities.
- This makes fundraising campaigns a very cost effective means of counterfactual impact; as a comparison, direct work that generates 1,000,000 USD of impact equivalent per year equates to around 500 USD per hour.
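(For context on that last comparison, my rough back-of-the-envelope reading: 1,000,000 USD of impact per year divided by roughly 2,000 working hours per year comes out at about 500 USD per hour, versus the 786 USD per hour the campaigns generated.)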
This is a great idea, so we made Anki with Uncertainty to do exactly this!
Thank you Hauke for the suggestion :D
I think we'll keep the calibration app as a pure calibration training game, where you see each question only once. Anki is already the king of spaced repetition, so adding calibration features to it seemed like a natural fit.