Like a lot of this post, this is a bit of an intuition-based 'hot take'. But some quick things that come to mind: i) iirc it didn't seem like our initial intuitions were very different to the WFM results, ii) when we filled in the weighted factor model I think we had a pretty limited understanding of what each project involved (so you might not expect super useful results), iii) I came to believe more strongly that it just matters a lot that central-AI-x-risk people have a lot of context (and that this more than offsets the risk of bias and groupthink) so u...
Hi Stephen, thanks for the kind words!
I'm wondering if you have any sense of how quickly returns to new projects in this space might diminish? Founding an AI policy research and advocacy org seems like a slam dunk, but I'm wondering how many more ideas nearly that promising are out there.
I guess my rough impression is that there's lots of possible great new projects if there's a combination of a well-suited founding team and support for that team. But "well-suited founding team" might be quite a high bar.
Thanks, I found this helpful to read. I added it to my database of resources relevant for thinking about extreme risks from advanced nanotechnology.
I do agree that MNT seems very hard, and because of that it seems likely that if it's developed in an AGI/ASI hyper-tech-accelerated world it would be developed relatively late on (though if tech development is hugely accelerated maybe it would still be developed pretty fast in absolute terms).
Thanks for sharing Ben! As a UK national and resident I'm grateful for an easy way to be at least a little aware of relevant UK politics, which I otherwise struggle to manage.
Thanks for writing this Joey, very interesting!
Since the top 20% of founders who enter your programme generate most of the impact, and it's fairly predictable who these founders will be, it seems like getting more applicants in that top 20% bracket could be pretty huge for the impact you're able to have. Curious if you have any reaction to that? I don't know whether expanding the applicant pool at the top end is a top priority for the organisation currently.
Thanks for these!
I think my general feeling on these is that it's hard for me to tell if they actually reduced existential risk. Maybe this is just because I don't understand the mechanisms for a global catastrophe from AI well enough. (e.g. because of this, linking to Neel's longlist of theories for impact was helpful, so thank you for that!)
E.g. my impression is that some people with relevant knowledge seem to think that technical safety work currently can't achieve very much.
(Hopefully this response isn't too annoying -- I could put in the work to understand the mechanisms for a global catastrophe from AI better, and maybe I will get round to this someday)
I think my motivation comes from things to do with: helping with my personal motivation for work on existential risk, helping me form accurate beliefs on the general tractability of work on existential risk, and helping me advocate to other people about the importance of work on existential risk.
Thinking about it maybe it would be pretty great to have someone assemble and maintain a good public list of answers to this question! (or maybe someone did already and I don't know about it)
I imagine a lot of relevant stuff could be infohazardous (although that stuff might not do very well on the "legible" criterion) -- if so and if you happen to feel comfortable sharing it with me privately, feel free to DM me about it.
Should EA people just be way more aggressive about spreading the word (within the community, either publicly or privately) about suspicions that particular people in the community have bad character?
(not saying that this is an original suggestion, you basically mention this in your thoughts on what you could have done differently)
I (with lots of help from my colleague Marie Davidsen Buhl) made a database of resources relevant to nanotechnology strategy research, with articles sorted by relevance for people new to the area. I hope it will be useful for people who want to look into doing research in this area.
This is pretty funny because, to me, Luke (who I don't know and have never met) seems like one of the most intimidatingly smart EA people I know of.
Nice, I don't think I have much to add at the moment, but I really like + appreciate this comment!
Thanks, would be interested to discuss more! I'll give some reactions here for the time being
This sounds astonishingly high to me (as does 1-2% without TAI)
(For context / slight warning on the quality of the below: I haven't thought about this for a while, and in order to write the below I'm mostly relying on old notes + my current sense of whether I still agree with them.)
Maybe we don't want to get into an AGI/TAI timelines discussion here (and I don't have great insights to offer there anyway) so I'll focus on the pre-TAI number.
I definitely agree that i...
[2023-01-19 update: there's now an expanded version of this comment here.]
Note: I've edited this comment after dashing it off this morning, mainly for clarity.
Sure, that all makes sense. I'll think about spending some more time on this. In the meantime I'll just give my quick reactions:
a) Has anyone ever thought about this question in detail?
b) What factors would such a decision depend on? ...
Ah, I was looking forward to listening to this using the Nonlinear Library podcast, but twitter screenshots don't work well with that. If someone made a version of this with the screenshots converted to normal text that would be helpful for me + maybe others.
Some quick thoughts on this from me:
Honestly for me it's probably at the "almost too good to be true" level of surprisingness (but to be clear it actually is true!). I think it's a brilliant community / ecosystem (though of course there's always room for improvement).
I agree that you probably generally need unusual views to find the goals of these jobs/projects compelling (and maybe also to be a good job applicant in many cases?). That seems like a high bar to me, and I think it's a big factor here.
I also agree that not all roles are research roles, althou...
Yeah, I think that progress in nanotech stuff has been very slow over the past 20 years, whereas progress in AI stuff has sped up a lot (and investment has increased a huge amount). Based on that, it seems reasonable to focus more on making the development of powerful AI go well for the world and to think less about nanotech, so I think this is at least part of the story.
Thanks for sharing your thoughts!
For mid-career people, it feels like runway may be less of a factor relative to the knowledge that you may be giving up something with a guaranteed impact, even if it may not be optimal, on the basis of uncertain factors.
If you're thinking purely about maximising impact, you probably want to go for the highest expected value thing, in which case accepting a bit more uncertainty in your lifetime impact to explore other options is (in the kind of situation you described) maybe well worth it in many cases. Of course, one impor...
At a high level I'd say ~in the 2 years I've spent doing "EA work" my average motivation has been towards the upper end of my motivation level over the previous 8-9 years doing a PhD and working in finance. (I might have been significantly less motivated working in finance if I wasn't kind of doing an "earning to give" type thing.)
I think the biggest areas of difficulty for me re motivation in "EA work" have been difficulties with motivation associated with doing research-type things that are many steps removed from impact, and at times not having huge amounts of management / guidance (but there are lots of pluses, as I implied in the post I guess).
Thanks. On the first point in particular, the post might add a bit of confusion here unfortunately.
Edit: I added something near the top that hopefully makes things a bit clearer re the first point
Also note that, for the purposes of this post, by “EA work” I mostly mean working at EA orgs. But I also think it would be great if mid-career people considered switching to really impactful stuff that isn't at EA orgs, and if they're already doing really impactful stuff that isn't at an EA org maybe they should keep doing that. And a lot of what I say here is still relevant for switching to highly impactful work that isn't at an EA org.
I think descriptions like this of the challenges doing good research poses are really helpful! The description definitely resonates with me.
Related question: I'm not sure whether the unique views time series plot is showing "number of views that were unique for that day" rather than "number of views from devices that never accessed the page before". E.g. if I looked at my post every day, and no-one else ever looked at it, maybe I'd see 1 unique view every day in the plot?
I like the post analytics thing! One thing that would be nice (maybe as an option) would be to see a time series of cumulative unique views as well as the time series of daily unique views that you already get. E.g. that would help with
Cumulative time series of all the statistics could also be pretty nice.
Nice! I've been doing annual reviews loosely following Alex Vermeer's guide for the past few years, and my sense is that they've been extremely valuable.
Thanks for writing this! The "how to make writing more engaging" section seems useful to me, and so does the general pointer to at least consider putting more effort into being engaging with public writing.
I agree with the general sentiment in some of the other comments that's along the lines of "actually sometimes a relatively dry style makes sense". I personally have pretty mixed feelings about the "Lesswrong style" (as a reader and a writer).
(For what it's worth, I didn't really have a problem with the previous title. I probably would have hesitated before using that title myself, but I often feel like I'm too conservative about these things)
EA "civilisational epistemics" project / org idea
Or: an EA social media team for helping to spread simple and important ideas
Below I describe a not-at-all-thought-through idea for a high impact EA org / project. I am in no way confident that something like this is actually a good idea, although I can imagine it being worth looking into. Also, for all I know people have already thought about whether something like this would be good. Also, the idea is not due to me (any credit goes to others, all blame goes to me).
Motivating example (rough story which...
Nice, thanks for those links, great to have those linked here since we didn't point to them in the report. I've seen the Open Phil one but I don't think I'd seen the Animal Ethics study, it looks very interesting.
Thanks for raising the point about speed of establishment for Clean Meat and Genetic Circuits! Our definition for the "origin year" (from here) is "The year that the technology or area is purposefully explored for the first time." So it's supposed to be when someone starts working on it, not when someone first has the idea. We think that Willem va...
Thanks both (and Owen too), I now feel more confident that geometric mean of odds is better!
(Edit: at 1:4 odds I don't feel great about a blanket recommendation, but I guess the odds at which you're indifferent to taking the bet are more heavily stacked against us changing our mind. And Owen's <1% is obviously way lower)
(don't feel extremely confident about the below but seemed worth sharing)
I think it's really great to flag this! But as I mentioned to you elsewhere I'm not sure we're certain enough to make a blanket recommendation to the EA community.
I think we have some evidence that geometric mean of odds is better, but not that much evidence. Although I haven't looked into the evidence that Simon_M shared from Metaculus.
I guess I can potentially see us changing our minds in a year's time and deciding that arithmetic mean of probabilities is better after all, or that some other method is better than both of these.
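To make the difference between the two aggregation methods concrete, here's a minimal sketch (with made-up example forecasts, not numbers from this thread) comparing pooling by arithmetic mean of probabilities against pooling by geometric mean of odds:

```python
import math

def arithmetic_mean_probs(probs):
    """Pool forecasts by averaging the probabilities directly."""
    return sum(probs) / len(probs)

def geometric_mean_odds(probs):
    """Pool forecasts by taking the geometric mean of the odds p/(1-p),
    then converting the pooled odds back into a probability."""
    odds = [p / (1 - p) for p in probs]
    pooled_odds = math.exp(sum(math.log(o) for o in odds) / len(odds))
    return pooled_odds / (1 + pooled_odds)

# Hypothetical individual forecasts of the same event:
forecasts = [0.01, 0.2, 0.5]

print(arithmetic_mean_probs(forecasts))  # ~0.237
print(geometric_mean_odds(forecasts))    # ~0.120 -- the extreme low
                                         # forecast pulls the pooled
                                         # estimate down further
```

The qualitative point is that geometric mean of odds gives more weight to confident (extreme) forecasts than the arithmetic mean of probabilities does, which is part of why the two methods can disagree noticeably.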
This seems very unlikely, I'll bet your $20 against my $80 that this doesn't happen.
Nice, thanks for this!
I mean, depending on what you mean by "an okay approach sometimes... especially when you want to do something quick and dirty" I may agree with you! What I said was:
This is not Tetlock’s advice, nor is it the lesson from the forecasting tournaments, especially if we use the nebulous modern definition of “outside view” instead of the original definition.
I guess I was reacting to the part just after the bit you quoted
For an entire book written by Yudkowsky on why the aforementioned forecasting method is bogus
Which I took to imply "Danie...
Here are some forecasts for near-term progress / impacts of AI on research. They are the results of some small-ish number of hours of reading + thinking, and shouldn't be taken at all seriously. I'm sharing in case it's interesting for people and especially to get feedback on my bottom line probabilities and thought processes. I'm pretty sure there are some things I'm very wrong about in the below and I'd love for those to be corrected.
...Separately, various people seem to think that the appropriate way to make forecasts is to (1) use some outside-view methods, (2) use some inside-view methods, but only if you feel like you are an expert in the subject, and then (3) do a weighted sum of them all using your intuition to pick the weights. This is not Tetlock’s advice, nor is it the lesson from the forecasting tournaments, especially if we use the nebulous modern definition of “outside view” instead of the original definition. (For my understanding of his advice and those lessons, see this pos
This from Paul Christiano in 2014 is also very relevant (part of it makes similar points to a lot of the recent stuff from Open Philanthropy, but the arguments are very brief; it's interesting to see how things have evolved over the years): Three impacts of machine intelligence
I realise re-reading this that I'm not sure whether these projects are supposed to cost $100 million per year or e.g. $100 million over their lifetime or something. Maybe something in between?
(idea probably stolen from somewhere else) create an organisation employing an army of superforecasters to gather facts and/or forecasts about the world that are vitally important from an EA perspective.
Maybe it's hard to get to $100 million? E.g. 400 employees each costing $250k would get you there, which (very naively) seems on the high end of what's likely to work well. Also e.g. other comments in this post have said that CSET was set up for $55m/5 years.
(extremely speculative)
Promote global cooperation and moral circle expansion by paying people (/ incentivising them in some smarter way) to have regular video calls with a random other person somewhere on the planet.
Here are some thoughts after reading a book called "The Inner Game of Tennis" by Timothy Gallwey. I think it's quite a famous book and maybe a lot of people know it well already. I consider it to be mainly about how to prevent your system 2/conscious mind/analytical mind from interfering with the performance of your system 1/subconscious mind/intuitive mind. This is explained in the context of tennis, but it seems applicable to many other contexts, as the author himself argues. If that sounds interesting, I recommend checking the book out, it's short and q...
Takeaways from some reading about economic effects of human-level AI
I spent some time reading things that you might categorise as “EA articles on the impact of human-level AI on economic growth”. Here are some takeaways from reading these (apologies for not always providing a lot of context / for not defining terms; hopefully clicking the links will provide decent context).
Thanks for this, I think it's really brilliant, I really appreciate how clearly the details are laid out in the blog and report. It's really cool to be able to see external reviewer comments too.
I found it kind of surprising that there isn't any mention of civilizational collapse etc when thinking about growth outcomes for the 21st century (e.g. in Appendix G, but also apparently in your bottom line probabilities in e.g. Section 4.6 "Conclusion" -- or maybe it's there and I missed it / it's not explicit).
I guess your probabilities for various growth outcom...
Causal vs evidential decision theory
I wrote this last Autumn as a private “blog post” shared only with a few colleagues. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. Decision theories are a pretty well-worn topic in EA circles and I'm definitely not adding new insights here. These are just some fairly naive thoughts-out-loud about how CDT and EDT handle various scenarios. If you've already thought a lot about decision theory you probably won't learn anything from this.
T...
Changing your working to fit the answer
I wrote this last Autumn as a private “blog post” shared only with a few colleagues. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. It is quite rambling and doesn't really have a clear point (but I think it's at least an interesting topic).
Say you want to come up with a model for AI timelines, i.e. the probability of transformative AI being developed by year X for various values of X. You put in your assumptions (beliefs about the wo...
One (maybe?) low-effort thing that could be nice would be saying "these are my top 5" or "these are listed in order of how promising I think they are" or something (you may well have done that already and I missed it).
I don't necessarily have a great sense for how good each one is, but here are some names. Though I expect you're already familiar with all of them :).
EA / x-risk -related
Outside EA
- Entrepreneur First seems impressive, though I'm not that well placed to judge
- Maybe this is nitpicking: As far as I know Y-Combinator is an accelerator rather than an incubator (ie it's focu
... (read more)