I'm currently working as a Senior Research Scholar at the Future of Humanity Institute.
Nice! I've been doing annual reviews loosely following Alex Vermeer's guide for the past few years, and my sense is that they've been extremely valuable.
Thanks for writing this! The "how to make writing more engaging" section seems useful to me, and so does the general pointer to at least consider putting more effort into being engaging with public writing.
I agree with the general sentiment in some of the other comments that's along the lines of "actually sometimes a relatively dry style makes sense". I personally have pretty mixed feelings about the "Lesswrong style" (as a reader and a writer).
(For what it's worth, I didn't really have a problem with the previous title. I probably would have hesitated before using that title myself, but I often feel like I'm too conservative about these things.)
Oh cool, thanks!
What does DMI stand for?
EA "civilisational epistemics" project / org idea
Or: an EA social media team for helping to spread simple and important ideas
Below I describe a not-at-all-thought-through idea for a high impact EA org / project. I am in no way confident that something like this is actually a good idea, although I can imagine it being worth looking into. Also, for all I know people have already thought about whether something like this would be good. Also, the idea is not due to me (any credit goes to others, all blame goes to me).
Motivating example (rough story which I haven't verified, probably some details are wrong): in the US in 2020, COVID testing companies had a big backlog of tests, and tests had a relatively short shelf life during which analysis would still be useful. Unfortunately, analysis was done with a "first in first out" queue, meaning that testing companies would analyse the oldest tests first, often wasting precious analysis capacity on expired tests. Bill Gates and others publicly noted that this was dumb and that testing companies should flip the queue and analyse the newest tests first (or maybe he just said that testing companies should be paid for "on time delivery of results" rather than just "results"). But very few people paid attention, and the testing companies didn't change their systems. This had very bad consequences for the spread of the pandemic in the US.
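The queue-flipping point can be illustrated with a toy simulation. All the numbers below (shelf life, arrival rate, lab capacity) are made up; the sketch just shows that when items expire, processing newest-first wastes far less capacity than first-in-first-out:

```python
def simulate(newest_first, days=30, arrivals=10, capacity=7, shelf_life=3):
    """Toy model: each day `arrivals` tests join the backlog and the lab
    analyses `capacity` of them; a result counts as useful only if the
    test is at most `shelf_life` days old. All parameters are made up."""
    backlog = []  # arrival day of each unanalysed test, oldest first
    useful = 0
    for day in range(days):
        backlog.extend([day] * arrivals)
        if newest_first:
            batch, backlog = backlog[-capacity:], backlog[:-capacity]
        else:
            batch, backlog = backlog[:capacity], backlog[capacity:]
        useful += sum(1 for arrived in batch if day - arrived <= shelf_life)
    return useful

# With these made-up numbers, newest-first yields far more useful results:
print(simulate(newest_first=False))  # FIFO
print(simulate(newest_first=True))   # newest-first
```

In this toy run FIFO stops producing any useful results after about two weeks (the front of the queue is all expired tests), while newest-first keeps every analysis useful.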
Claim: creating common knowledge of the right idea, or social media lobbying pressure around it, can make the right people aware of it or put them under public pressure, causing them to implement it.
Further claim: this is a big enough deal that it's worth some EA effort to make this happen.
The suggested EA project / org: work to identify important, simple, and uncontroversial ideas as they arise and use social media to get people to pay attention to them.
Nice, thanks for those links, great to have those linked here since we didn't point to them in the report. I've seen the Open Phil one but I don't think I'd seen the Animal Ethics study, it looks very interesting.
Thanks for raising the point about speed of establishment for Clean Meat and Genetic Circuits! Our definition for the "origin year" (from here) is "The year that the technology or area is purposefully explored for the first time." So it's supposed to be when someone starts working on it, not when someone first has the idea. We think that Willem van Eelen started working on developing clean meat in the 1950s, so we set the origin year to be around then. Whereas as far as we're aware, no-one was working on genetic circuits until much later.
At the moment I'm not sure whether the supplementary notes say anywhere that we think van Eelen was working on developing clean meat in the 50's, I think Megan is going to update the notes to make this clearer.
Thanks both (and Owen too), I now feel more confident that geometric mean of odds is better!
(Edit: at 1:4 odds I don't feel great about a blanket recommendation, but I guess the odds at which you're indifferent to taking the bet are more heavily stacked against us changing our mind. And Owen's <1% is obviously way lower)
(don't feel extremely confident about the below but seemed worth sharing)
I think it's really great to flag this! But as I mentioned to you elsewhere I'm not sure we're certain enough to make a blanket recommendation to the EA community.
I think we have some evidence that geometric mean of odds is better, but not that much evidence. Although I haven't looked into the evidence that Simon_M shared from Metaculus.
I guess I can potentially see us changing our minds in a year's time and deciding that arithmetic mean of probabilities is better after all, or that some other method is better than both of these.
Then maybe people will have made a costly change to a new method (learning what odds are, what a geometric mean is, learning how to apply it in practice, maybe understanding the argument for using the new method) that turns out not to have been worth it.
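For readers who haven't seen the two aggregation methods side by side, here's a minimal sketch (the input probabilities are made up) of arithmetic mean of probabilities vs geometric mean of odds:

```python
import math

def arithmetic_mean_probs(ps):
    # Simple average of the probabilities.
    return sum(ps) / len(ps)

def geometric_mean_odds(ps):
    # Convert each probability to odds, take the geometric mean of the
    # odds, then convert the pooled odds back to a probability.
    odds = [p / (1 - p) for p in ps]
    pooled = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return pooled / (1 + pooled)

# Two made-up forecasts: odds of 9:1 and 1:1.
forecasts = [0.9, 0.5]
print(arithmetic_mean_probs(forecasts))  # ≈ 0.7
print(geometric_mean_odds(forecasts))    # ≈ 0.75 (pooled odds of 3:1)
```

Note the two methods genuinely disagree here, which is part of why the choice of aggregation method matters.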
Nice, thanks for this!
I mean, depending on what you mean by "an okay approach sometimes... especially when you want to do something quick and dirty" I may agree with you! What I said was:
This is not Tetlock’s advice, nor is it the lesson from the forecasting tournaments, especially if we use the nebulous modern definition of “outside view” instead of the original definition.
I guess I was reacting to the part just after the bit you quoted
For an entire book written by Yudkowsky on why the aforementioned forecasting method is bogus
Which I took to imply "Daniel thinks that the aforementioned forecasting method is bogus". Maybe my interpretation was incorrect. Anyway, seems very possible we in fact roughly agree here.
Re your 1, 2, 3, 4: It seems cool to try doing 4, and I can believe it's better (I don't have a strong view). Fwiw re 1 vs 2, my initial reaction is that partitioning by outside/inside view lets you decide how much weight you give to each, and maybe we think that for non-experts it's better to mostly give weight to the outside view, so the partitioning performed a useful service. I guess this is kind of what you were trying to argue against and unfortunately you didn't convince me to repent :).
Here are some forecasts for near-term progress / impacts of AI on research. They are the results of some small-ish number of hours of reading + thinking, and shouldn't be taken at all seriously. I'm sharing in case it's interesting for people and especially to get feedback on my bottom line probabilities and thought processes. I'm pretty sure there are some things I'm very wrong about in the below and I'd love for those to be corrected.
I realise that "excellent performance" etc. is vague; I've chosen to live with that rather than putting in the time to make everything precise (or not doing the exercise at all).
If you don't know what multi-domain proteins and protein complexes are, I found this Mohammed Al Quraishi blog post very useful (maybe try ctrl-f for those terms), although maybe you need to start with some relevant background knowledge. I don't have a great sense for how big a deal this would be for various areas of biological science, but my impression is that they're both roughly the same order of magnitude of usefulness as getting excellent performance on single-domain proteins was (i.e. what AF2 has already achieved).
As for why:
80% chance that excellent AI performance on multi-domain proteins is announced by end of 2023
70% chance that excellent AI performance on protein complexes is announced by end of 2023
20% chance of widespread adoption of a system like OpenAI Codex for data analysis by end of 2023
(NB this is just about data analysis / "data science" rather than about usage of Codex in general)