I'm currently working as a Senior Research Scholar at the Future of Humanity Institute.
Oh cool, thanks!
What does DMI stand for?
EA "civilisational epistemics" project / org idea
Or: an EA social media team for helping to spread simple and important ideas
Below I describe a not-at-all-thought-through idea for a high impact EA org / project. I am in no way confident that something like this is actually a good idea, although I can imagine it being worth looking into. Also, for all I know people have already thought about whether something like this would be good. Also, the idea is not due to me (any credit goes to others, all blame goes to me).
Motivating example (rough story which I haven't verified, probably some details are wrong): in the US in 2020 COVID testing companies had a big backlog of tests, and tests had a relatively short shelf life during which analysis would still be useful. Unfortunately, analysis was done with a "first in first out" queue, meaning that testing companies would analyse the oldest tests first, so they often wasted precious analysis capacity on expired tests. Bill Gates and others publicly noted that this was dumb and that testing companies should flip the queue and analyse the newest tests first (or maybe he just said that testing companies should be paid for "on time delivery of results" rather than just "results"). But very few people paid attention, and the testing companies didn't change their systems. This had very bad consequences for the spread of the pandemic in the US.
Claim: creating common knowledge of, or social media lobbying pressure around, the right idea can make the right people aware of it (or put them under public pressure), causing them to implement the idea.
Further claim: this is a big enough deal that it's worth some EA effort to make this happen.
The suggested EA project / org: work to identify important, simple, and uncontroversial ideas as they arise and use social media to get people to pay attention to them.
Nice, thanks for those links, great to have those linked here since we didn't point to them in the report. I've seen the Open Phil one but I don't think I'd seen the Animal Ethics study, it looks very interesting.
Thanks for raising the point about speed of establishment for Clean Meat and Genetic Circuits! Our definition for the "origin year" (from here) is "The year that the technology or area is purposefully explored for the first time." So it's supposed to be when someone starts working on it, not when someone first has the idea. We think that Willem van Eelen started working on developing clean meat in the 1950's, so we set the origin year to be around then. Whereas as far as we're aware no-one was working on genetic circuits until much later.
At the moment I'm not sure whether the supplementary notes say anywhere that we think van Eelen was working on developing clean meat in the 50's, I think Megan is going to update the notes to make this clearer.
Thanks both (and Owen too), I now feel more confident that geometric mean of odds is better!
(Edit: at 1:4 odds I don't feel great about a blanket recommendation, but I guess the odds at which you're indifferent to taking the bet are more heavily stacked against us changing our mind. And Owen's <1% is obviously way lower)
(don't feel extremely confident about the below but seemed worth sharing)
I think it's really great to flag this! But as I mentioned to you elsewhere I'm not sure we're certain enough to make a blanket recommendation to the EA community.
I think we have some evidence that geometric mean of odds is better, but not that much evidence (though I haven't yet looked into the evidence that Simon_M shared from Metaculus).
I guess I can potentially see us changing our minds in a year's time and deciding that arithmetic mean of probabilities is better after all, or that some other method is better than both of these.
Then maybe people will have made a costly change to a new method (learning what odds are, what a geometric mean is, learning how to apply it in practice, maybe understanding the argument for using the new method) that turns out not to have been worth it.
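For concreteness, here is a minimal sketch (my own illustration, not from the thread) of the two aggregation methods being compared. The forecast numbers are made up purely to show how the methods can diverge:

```python
import math

def arithmetic_mean_probs(probs):
    """Aggregate forecasts by averaging the probabilities directly."""
    return sum(probs) / len(probs)

def geometric_mean_odds(probs):
    """Convert each probability to odds, take the geometric mean of the
    odds, then convert the result back to a probability."""
    odds = [p / (1 - p) for p in probs]
    log_mean = sum(math.log(o) for o in odds) / len(odds)
    mean_odds = math.exp(log_mean)
    return mean_odds / (1 + mean_odds)

# Two forecasters who disagree strongly (hypothetical numbers)
forecasts = [0.05, 0.5]
print(arithmetic_mean_probs(forecasts))  # 0.275
print(geometric_mean_odds(forecasts))    # ≈ 0.187
```

The geometric mean of odds gives more weight to the lower forecast here, which illustrates why switching methods is a substantive change rather than a cosmetic one.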
Nice, thanks for this!
I mean, depending on what you mean by "an okay approach sometimes... especially when you want to do something quick and dirty" I may agree with you! What I said was:
This is not Tetlock’s advice, nor is it the lesson from the forecasting tournaments, especially if we use the nebulous modern definition of “outside view” instead of the original definition.
I guess I was reacting to the part just after the bit you quoted
For an entire book written by Yudkowsky on why the aforementioned forecasting method is bogus
Which I took to imply "Daniel thinks that the aforementioned forecasting method is bogus". Maybe my interpretation was incorrect. Anyway, seems very possible we in fact roughly agree here.
Re your 1, 2, 3, 4: It seems cool to try doing 4, and I can believe it's better (I don't have a strong view). Fwiw re 1 vs 2, my initial reaction is that partitioning by outside/inside view lets you decide how much weight you give to each, and maybe we think that for non-experts it's better to mostly give weight to the outside view, so the partitioning performed a useful service. I guess this is kind of what you were trying to argue against and unfortunately you didn't convince me to repent :).
Here are some forecasts for near-term progress / impacts of AI on research. They are the results of some small-ish number of hours of reading + thinking, and shouldn't be taken at all seriously. I'm sharing in case it's interesting for people and especially to get feedback on my bottom line probabilities and thought processes. I'm pretty sure there are some things I'm very wrong about in the below and I'd love for those to be corrected.
I realise that "excellent performance" etc. is vague; I chose to live with that rather than putting in the time to make everything precise (or not doing the exercise at all).
If you don't know what multi-domain proteins and protein complexes are, I found this Mohammed Al Quraishi blog post very useful (maybe try ctrl-f for those terms), although you may need some relevant background knowledge to start. I don't have a great sense of how big a deal each of these would be for various areas of biological science, but my impression is that they're both roughly the same order of magnitude of usefulness as getting excellent performance on single-domain proteins was (i.e. what AF2 has already achieved).
As for why:
80% chance that excellent AI performance on multi-domain proteins is announced by end of 2023
70% chance that excellent AI performance on protein complexes is announced by end of 2023
20% chance of widespread adoption of a system like OpenAI Codex for data analysis by end of 2023
(NB this is just about data analysis / "data science" rather than about usage of Codex in general)
Separately, various people seem to think that the appropriate way to make forecasts is to (1) use some outside-view methods, (2) use some inside-view methods, but only if you feel like you are an expert in the subject, and then (3) do a weighted sum of them all using your intuition to pick the weights. This is not Tetlock’s advice, nor is it the lesson from the forecasting tournaments, especially if we use the nebulous modern definition of “outside view” instead of the original definition. (For my understanding of his advice and those lessons, see this post, part 5. For an entire book written by Yudkowsky on why the aforementioned forecasting method is bogus, see Inadequate Equilibria, especially this chapter. Also, I wish to emphasize that I myself was one of these people, at least sometimes, up until recently when I noticed what I was doing!)
This is a bit tangential to the main point of your post, but I thought I'd give some thoughts on this, partly because I basically did exactly this procedure a few months ago in an attempt to come to a personal all-things-considered view about AI timelines (although I did "use some inside-view methods" even though I don't at all feel like I'm an expert in the subject!).
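The "(1) outside-view methods, (2) inside-view methods, (3) intuition-weighted sum" procedure described above can be sketched very simply. All the estimates and weights below are hypothetical, purely for illustration:

```python
def weighted_aggregate(estimates, weights):
    """Combine several probability estimates using intuition-picked
    weights (normalised so they sum to 1)."""
    total = sum(weights)
    return sum(p * w for p, w in zip(estimates, weights)) / total

# Hypothetical example: two outside-view estimates and one
# inside-view estimate, with the outside views weighted more heavily.
outside_views = [0.15, 0.25]
inside_views = [0.40]
result = weighted_aggregate(outside_views + inside_views, [0.4, 0.4, 0.2])
print(result)  # ≈ 0.24
```

The choice of weights is doing a lot of work here, which is part of why the procedure is contested: the aggregation step is only as principled as the intuition behind the weights.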
I liked your AI Impacts post, thanks for linking to it! Maybe a good summary of the recommended procedure is the part at the very end. I do feel like it was useful for me to read it.
Tetlock describes how superforecasters go about making their predictions. Here is an attempt at a summary:
1. Sometimes a question can be answered more rigorously if it is first "Fermi-ized," i.e. broken down into sub-questions for which more rigorous methods can be applied.
2. Next, use the outside view on the sub-questions (and/or the main question, if possible). You may then adjust your estimates using other considerations ('the inside view'), but do this cautiously.
3. Seek out other perspectives, both on the sub-questions and on how to Fermi-ize the main question. You can also generate other perspectives yourself.
4. Repeat steps 1 – 3 until you hit diminishing returns.
5. Your final prediction should be based on an aggregation of various models, reference classes, other experts, etc.
I'm less sure about the direct relevance of Inadequate Equilibria for this, apart from it making the more general point that ~"people should be less scared of relying on their own intuition / arguments / inside view". Maybe I haven't scrutinised it closely enough.
To be clear, I don't think "weighted sum of 'inside views' and 'outside views'" is the gold standard or something. I just think it's an okay approach sometimes (maybe especially when you want to do something "quick and dirty").
If you strongly disagree (which I think you do), I'd love for you to change my mind! :)
Three impacts of machine intelligence, from Paul Christiano in 2014, is also very relevant (part of it makes similar points to a lot of the recent stuff from Open Philanthropy, but the arguments are very brief; it's interesting to see how things have evolved over the years).