All of Ben Snodin's Comments + Replies

I don't necessarily have a great sense for how good each one is, but here are some names. Though I expect you're already familiar with all of them :).

EA / x-risk -related

  • Future of Life Foundation
  • Active grantmaking, which might happen e.g. at Open Phil or Longview or Effective Giving, is a bit like incubation
  • (Charity Entrepreneurship of course, as you mentioned)

Outside EA

  • Entrepreneur First seems impressive, though I'm not that well placed to judge
  • Maybe this is nitpicking: As far as I know Y Combinator is an accelerator rather than an incubator (i.e. it's focu
... (read more)
1
SebastianSchmidt
3mo
Thanks for your response Ben. All of these were on my radar but thanks for sharing.  Good luck with what you'll be working on too!

Like a lot of this post, this is a bit of an intuition-based 'hot take'. But some quick things that come to mind: i) iirc it didn't seem like our initial intuitions were very different to the weighted factor model (WFM) results, ii) when we filled in the WFM I think we had a pretty limited understanding of what each project involved (so you might not expect super useful results), iii) I got a bit more of a belief that it just matters a lot that central-AI-x-risk people have a lot of context (and that this more than offsets the risk of bias and groupthink) so u... (read more)

Hi Stephen, thanks for the kind words!

I'm wondering if you have any sense of how quickly returns to new projects in this space might diminish? Founding an AI policy research and advocacy org seems like a slam dunk, but I'm wondering how many more ideas nearly that promising are out there.

 

I guess my rough impression is that there's lots of possible great new projects if there's a combination of a well-suited founding team and support for that team. But "well-suited founding team" might be quite a high bar.

Thanks, I found this helpful to read. I added it to my database of resources relevant for thinking about extreme risks from advanced nanotechnology.

I do agree that MNT seems very hard, and because of that it seems likely that if it's developed in an AGI/ASI hyper-tech-accelerated world it would be developed relatively late on (though if tech development is hugely accelerated maybe it would still be developed pretty fast in absolute terms).

Thanks for sharing Ben! As a UK national and resident I'm grateful for an easy way to be at least a little aware of relevant UK politics, which I otherwise struggle to manage.

1
Ben Stevenson
7mo
Thanks Ben! Glad it was helpful!

Thanks for writing this Joey, very interesting!

Since the top 20% of founders who enter your programme generate most of the impact, and it's fairly predictable who these founders will be, it seems like getting more applicants in that top 20% bracket could be pretty huge for the impact you're able to have. Curious if you have any reaction to that? I don't know whether expanding the applicant pool at the top end is a top priority for the organisation currently.

4
Joey
11mo
Yep, increasing this pool is a top priority, particularly outreach outside of the EA movement.
3
Aidan Alexander
11mo
You’re right, and so it is a top priority! Others can say more as to the current hypotheses on how to do so.

Thanks for these!

I think my general feeling on these is that it's hard for me to tell if they actually reduced existential risk. Maybe this is just because I don't understand the mechanisms for a global catastrophe from AI well enough. (e.g. because of this, linking to Neel's longlist of theories for impact was helpful, so thank you for that!)

E.g. my impression is that some people with relevant knowledge seem to think that technical safety work currently can't achieve very much. 

(Hopefully this response isn't too annoying -- I could put in the work to understand the mechanisms for a global catastrophe from AI better, and maybe I will get round to this someday)

I think my motivation comes from things to do with: helping with my personal motivation for work on existential risk, helping me form accurate beliefs on the general tractability of work on existential risk, and helping me advocate to other people about the importance of work on existential risk.

Thinking about it maybe it would be pretty great to have someone assemble and maintain a good public list of answers to this question! (or maybe someone did already and I don't know about it)

I imagine a lot of relevant stuff could be infohazardous (although that stuff might not do very well on the "legible" criterion) -- if so and if you happen to feel comfortable sharing it with me privately, feel free to DM me about it.

4
EdoArad
1y
Just out of curiosity, and maybe it'd help readers with answers, could you share why you are interested in this question? 

Should EA people just be way more aggressive about spreading the word (within the community, either publicly or privately) about suspicions that particular people in the community have bad character?

(not saying that this is an original suggestion, you basically mention this in your thoughts on what you could have done differently)

I (with lots of help from my colleague Marie Davidsen Buhl) made a database of resources relevant to nanotechnology strategy research, with articles sorted by relevance for people new to the area. I hope it will be useful for people who want to look into doing research in this area.

Sure, I think I or Claire Boine might write about that kind of thing some time soon :).

This is pretty funny because, to me, Luke (who I don't know and have never met) seems like one of the most intimidatingly smart EA people I know of.

Nice, I don't think I have much to add at the moment, but I really like + appreciate this comment!

Thanks, would be interested to discuss more! I'll give some reactions here for the time being.

This sounds astonishingly high to me (as does 1-2% without TAI)

(For context / slight warning on the quality of the below: I haven't thought about this for a while, and in order to write the below I'm mostly relying on old notes + my current sense of whether I still agree with them.)

Maybe we don't want to get into an AGI/TAI timelines discussion here (and I don't have great insights to offer there anyway) so I'll focus on the pre-TAI number.

I definitely agree that i... (read more)

[2023-01-19 update: there's now an expanded version of this comment here.]

Note: I've edited this comment after dashing it off this morning, mainly for clarity.

Sure, that all makes sense. I'll think about spending some more time on this. In the meantime I'll just give my quick reactions:

  • On reluctance to be extremely confident—I start to worry when considerations like this dictate that one give a series of increasingly specific/conjunctive scenarios roughly the same probability. I don't expect a forum comment or blog post to get someone to such high confiden
... (read more)

a) Has anyone ever thought about this question in detail? 

  • I haven't thought about this in detail but I have a weakly held view that senior people should do more mentoring
  • (without wanting to imply that I'm a "senior EA") I've thought about it / am generally inclined to think about it more carefully for me personally, I think last time I did I basically thought I'd like to do more mentoring and was bottlenecked on not having anyone to mentor (but not sure that I currently think I should do more mentoring)

b) What factors would such a decision depend on? ... (read more)

Ah, I was looking forward to listening to this using the Nonlinear Library podcast, but Twitter screenshots don't work well with that. If someone made a version of this with the screenshots converted to normal text that would be helpful for me + maybe others.

Nice, sounds like a cool project!

Some quick thoughts on this from me:

Honestly for me it's probably at the "almost too good to be true" level of surprisingness (but to be clear it actually is true!). I think it's a brilliant community / ecosystem (though of course there's always room for improvement).

I agree that you probably generally need unusual views to find the goals of these jobs/projects compelling (and maybe also to be a good job applicant in many cases?). That seems like a high bar to me, and I think it's a big factor here.

I also agree that not all roles are research roles, althou... (read more)

Yeah, I think that progress in nanotech stuff has been very slow over the past 20 years, whereas progress in AI stuff has sped up a lot (and investment has increased a huge amount). Based on that, it seems reasonable to focus more on making the development of powerful AI go well for the world and to think less about nanotech, so I think this is at least part of the story.

Thanks for sharing your thoughts!

For mid-career people, it feels like runway may be less of an impact relative to the knowledge you may be giving up something with a guaranteed impact, even if it may not be optimal, on the basis of uncertain factors.

If you're thinking purely about maximising impact, you probably want to go for the highest expected value thing, in which case accepting a bit more uncertainty in your lifetime impact to explore other options is (in the kind of situation you described) maybe well worth it in many cases. Of course, one impor... (read more)

At a high level I'd say ~in the 2 years I've spent doing "EA work" my average motivation has been towards the upper end of my motivation level over the previous 8-9 years doing a PhD and working in finance. (I might have been significantly less motivated working in finance if I wasn't kind of doing an "earning to give" type thing.)

I think the biggest areas of difficulty for me re motivation in "EA work" have been difficulties with motivation associated with doing research-type things that are many steps removed from impact, and at times not having huge amounts of management / guidance (but there are lots of pluses, as I implied in the post I guess).

Thanks. On the first point in particular, the post might add a bit of confusion here unfortunately.

Edit: I added something near the top that hopefully makes things a bit clearer re the first point

Also note that, for the purposes of this post, by “EA work” I mostly mean working at EA orgs. But I also think it would be great if mid-career people considered switching to really impactful stuff that isn't at EA orgs, and if they're already doing really impactful stuff that isn't at an EA org maybe they should keep doing that. And a lot of what I say here is still relevant for switching to highly impactful work that isn't at an EA org.

I think descriptions like this of the challenges doing good research poses are really helpful! The description definitely resonates with me.

Related question: I'm not sure whether the unique views time series plot is showing "number of views that were unique for that day" rather than "number of views from devices that never accessed the page before". E.g. if I looked at my post every day, and no-one else ever looked at it, maybe I'd see 1 unique view every day in the plot?

I like the post analytics thing! One thing that would be nice (maybe as an option) would be to see a time series of cumulative unique views as well as the time series of daily unique views that you already get. E.g. that would help with

  • comparing posts that went up at different times (e.g. "does post X only have more views than post Y because it's been up for 3 months longer?")
  •  answering the question "after how many days did the post accumulate 90% of its (as of today) total unique views".

Cumulative time series of all the statistics could also be pretty nice.
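
As a rough illustration (a sketch with made-up numbers, not the Forum's actual analytics), the cumulative series and the "after how many days did the post reach 90% of its unique views" question could be computed like this:

from itertools import accumulate

daily_unique_views = [120, 80, 45, 30, 20, 10, 8, 5, 3, 2]  # made-up daily counts
cumulative_views = list(accumulate(daily_unique_views))      # the suggested cumulative time series
total_views = cumulative_views[-1]

# First day on which the post has reached 90% of its total unique views so far.
days_to_90_percent = next(
    day for day, cum in enumerate(cumulative_views, start=1) if cum >= 0.9 * total_views
)
print(cumulative_views)    # [120, 200, 245, 275, 295, 305, 313, 318, 321, 323]
print(days_to_90_percent)  # 5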

1
Ben Snodin
2y
Related question: I'm not sure whether the unique views time series plot is showing "number of views that were unique for that day" rather than "number of views from devices that never accessed the page before". E.g. if I looked at my post every day, and no-one else ever looked at it, maybe I'd see 1 unique view every day in the plot?

Congratulations on the new position, it sounds really exciting!

2
Jaime Sevilla
2y
Thank you Ben!

Nice! I've been doing annual reviews loosely following Alex Vermeer's guide for the past few years, and my sense is that they've been extremely valuable.

Thanks for writing this! The "how to make writing more engaging" section seems useful to me, and so does the general pointer to at least consider putting more effort into being engaging with public writing.

I agree with the general sentiment in some of the other comments that's along the lines of "actually sometimes a relatively dry style makes sense". I personally have pretty mixed feelings about the "LessWrong style" (as a reader and a writer).

(For what it's worth, I didn't really have a problem with the previous title. I probably would have hesitated before using that title myself, but I often feel like I'm too conservative about these things)

Nice!

DMI-like organization

What does DMI stand for?

1
Stasiana
2y
There are too many DMI meanings, like "Development Media International", "Desktop Management Information", "Deferred Maintenance Item"... For more of them, look here: https://acronym24.com/dmi-meaning/
3
Hauke Hillebrandt
3y
Development Media International (DMI) is a non-governmental organization with both non-profit and for-profit arms that "use[s] scientific modelling combined with mass media campaigns in order to save the greatest number of lives in the most cost-effective way". https://en.wikipedia.org/wiki/Development_Media_International https://www.givewell.org/charities/DMI-July-2021-Version
4
Linch
3y
Maybe Development Media International? It was a standout GiveWell charity for a while.

EA "civilisational epistemics" project / org idea
Or:  an EA social media team for helping to spread simple and important ideas

Below I describe a not-at-all-thought-through idea for a high impact EA org / project. I am in no way confident that something like this is actually a good idea, although I can imagine it being worth looking into. Also, for all I know people have already thought about whether something like this would be good. Also, the idea is not due to me (any credit goes to others, all blame goes to me).

Motivating example (rough story which... (read more)

3
Aaron Gertler
3y
The lobbying pressure seems more important than the common knowledge. EA orgs already spend a lot of time identifying and sharing important and simple ideas — I wouldn't call them "uncontroversial", but few ideas are. (See "building more houses makes housing cheaper", which is a lot more controversial than I'd have expected before I started to follow that "debate".)

I do think it would be worth spending a few hours trying to come up with examples of ideas that would be good to spread + calculating very rough BOTECs for them. For example, what's the value of getting one middle-class American to embrace passive rather than active investment? What's the value of getting one more person vaccinated?

Development Media International is the obvious parallel, and the cost-effectiveness of using ridiculously cheap radio advertisements to share basic public health information seems hard to beat on priors. But there are a lot of directions you could go with "civilizational epistemics", and maybe some of them wind up looking much better, e.g. because working in the developed world = many more resources to redirect. (Speaking of which, Guarding Against Pandemics is another example — their goal isn't just to reach a few specific politicians, but to reach people who will share their message with politicians.)
5
Hauke Hillebrandt
3y
Really good idea, and I think spreading socially useful information is really underexplored. Maybe one could even think about broad, generalizable, bite-sized memes that are robustly good for everyone to know and that one should spread. Some examples:

  • Germ theory
  • Pigouvian Taxes
  • Personal finance (e.g. Index funds)
  • Cost-effectiveness analysis
  • Health behaviours

Maybe there should be a DMI-like organization that does that. Maybe it would be effective to use very visual ways of spreading these messages within a few seconds (e.g. https://edition.cnn.com/videos/entertainment/2020/03/17/scrubs-14-year-old-clip-infection-spread-mxp-vpx.hln). There's already Kurzgesagt, which is a bit further along the spectrum towards 'deep engagement', which I think is really good and gets funding from the Gates Foundation.
2
kokotajlod
3y
Is this the sort of thing where if we had, say, 10 - 100 EAs and a billion dollar / year budget, we could use that money to basically buy the eyeballs of a significant fraction of the US population? Are they for sale?

Nice, thanks for those links, great to have those linked here since we didn't point to them in the report. I've seen the Open Phil one but I don't think I'd seen the Animal Ethics study, it looks very interesting.

Thanks for raising the point about speed of establishment for Clean Meat and Genetic Circuits! Our definition for the "origin year" (from here) is "The year that the technology or area is purposefully explored for the first time." So it's supposed to be when someone starts working on it, not when someone first has the idea. We think that Willem va... (read more)

1
kierangreig
3y
Thanks, Ben! :) 

Thanks both (and Owen too), I now feel more confident that geometric mean of odds is better!

(Edit: at 1:4 odds I don't feel great about a blanket recommendation, but I guess the odds at which you're indifferent to taking the bet are more heavily stacked against us changing our mind. And Owen's <1% is obviously way lower)

(don't feel extremely confident about the below but seemed worth sharing)

I think it's really great to flag this! But as I mentioned to you elsewhere I'm not sure we're certain enough to make a blanket recommendation to the EA community.

I think we have some evidence that geometric mean of odds is better, but not that much evidence. Although I haven't looked into the evidence that Simon_M shared from Metaculus.

I guess I can potentially see us changing our minds in a year's time and deciding that arithmetic mean of probabilities is better after all, or that s... (read more)

5
Linch
3y
Geometric mean is just a really useful tool for estimations in general. It also makes a lot of sense for aggregating results other than probabilities, eg for different Fermi estimates of real quantities.
8
Owen Cotton-Barratt
3y
Like Nuno I think this is very unlikely. Probably <1% that we'd straightforwardly prefer arithmetic mean of probabilities. Much higher chance that in some circumstances we'd prefer something else (e.g. unweighted geometric mean of probabilities gets very distorted by having one ignorant person put in a probability which is extremely close to zero, so in some circumstances you'd want to be able to avoid that).

I don't think the amount of evidence here would be conclusive if we otherwise thought arithmetic means of probabilities were best. But also my prior before seeing this evidence significantly favoured taking geometric mean of odds -- this comes from some conversations over a few years getting a feel for "what are sensible ways to treat probabilities" and feeling like for many purposes in this vicinity things behave better in log-odds space. However I didn't have a proper grounding for that, so this post provides both theoretical support and empirical support, which in combination with the prior make it feel like a fairly strong case.

That said, I think it's worth pointing out the case where arithmetic mean of probabilities is exactly right to use: if you think that exactly one of the estimates is correct but you don't know which (rather than the usual situation of thinking they all provide evidence about what the correct answer is).

I guess I can potentially see us changing our minds in a year's time and deciding that arithmetic mean of probabilities is better after all, or that some other method is better than both of these.

This seems very unlikely, I'll bet your $20 against my $80 that this doesn't happen.
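
A minimal sketch of the two aggregation methods discussed in this thread (the forecast numbers are made up and the function names are just illustrative):

def arithmetic_mean_of_probabilities(probs):
    # Simple average of the raw probabilities.
    return sum(probs) / len(probs)

def geometric_mean_of_odds(probs):
    # Convert each probability to odds, take the geometric mean of the odds,
    # then convert the result back to a probability.
    odds = [p / (1 - p) for p in probs]
    product = 1.0
    for o in odds:
        product *= o
    geo_mean_odds = product ** (1 / len(odds))
    return geo_mean_odds / (1 + geo_mean_odds)

forecasts = [0.01, 0.2, 0.3]  # hypothetical individual forecasts
print(arithmetic_mean_of_probabilities(forecasts))  # ~0.17
print(geometric_mean_of_odds(forecasts))            # ~0.09: the 1% forecast pulls this down a lot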

Nice, thanks for this!

I mean, depending on what you mean by "an okay approach sometimes... especially when you want to do something quick and dirty" I may agree with you! What I said was:

This is not Tetlock’s advice, nor is it the lesson from the forecasting tournaments, especially if we use the nebulous modern definition of “outside view” instead of the original definition.

I guess I was reacting to the part just after the bit you quoted

For an entire book written by Yudkowsky on why the aforementioned forecasting method is bogus

Which I took to imply "Danie... (read more)

3
kokotajlod
3y
Yeah, I probably shouldn't have said "bogus" there, since while I do think it's overrated, it's not the worst method. (Though arguably things can be bogus even if they aren't the worst?)

Partitioning by any X lets you decide how much weight you give to X vs. not-X. My claim is that the bag of things people refer to as "outside view" isn't importantly different from the other bag of things, at least not more importantly different than various other categorizations one might make.

I do think that people who are experts should behave differently than people who are non-experts. I just don't think we should summarize that as "Prefer to use outside-view methods" where outside view = the things on the First Big List. I think instead we could say:

  • Use deference more
  • Use reference classes more if you have good ones (but if you are a non-expert and your reference classes are more like analogies, they are probably leading you astray)
  • Trust your models less
  • Trust your intuition less
  • Trust your priors less
  • ...etc.

Here are some forecasts for near-term progress / impacts of AI on research. They are the results of some small-ish number of hours of reading + thinking, and shouldn't be taken at all seriously. I'm sharing in case it's interesting for people and especially to get feedback on my bottom line probabilities and thought processes. I'm pretty sure there are some things I'm very wrong about in the below and I'd love for those to be corrected.

  1. Deepmind will announce excellent performance from Alphafold2 (AF2) or some successor / relative for multi-domain proteins
... (read more)

Separately, various people seem to think that the appropriate way to make forecasts is to (1) use some outside-view methods, (2) use some inside-view methods, but only if you feel like you are an expert in the subject, and then (3) do a weighted sum of them all using your intuition to pick the weights. This is not Tetlock’s advice, nor is it the lesson from the forecasting tournaments, especially if we use the nebulous modern definition of “outside view” instead of the original definition. (For my understanding of his advice and those lessons, see this pos

... (read more)
3
kokotajlod
3y
Thanks! Re: Inadequate Equilibria: I mean, that was my opinionated interpretation I guess. :) But Yudkowsky was definitely arguing something was bogus. (This is a jab at his polemical style.)

To say a bit more: Yudkowsky argues that the justifications for heavy reliance on various things called "outside view" don't hold up to scrutiny, and that what's really going on is that people are overly focused on matters of who has how much status and which topics are in whose areas of expertise and whether I am being appropriately humble and stuff like that, and that (unconsciously) this is what's really driving people's use of "outside view" methods rather than the stated justifications. I am not sure whether I agree with him or not but I do find it somewhat plausible at least. I do think the stated justifications often (usually?) don't hold up to scrutiny.

I mean, depending on what you mean by "an okay approach sometimes... especially when you want to do something quick and dirty" I may agree with you! What I said was:

And it seems you agree with me on that. What I would say is: consider the following list of methods:

  1. Intuition-weighted sum of "inside view" and "outside view" methods (where those terms refer to the Big Lists summarized in this post)
  2. Intuition-weighted sum of "Type X" and "Type Y" methods (where those terms refer to any other partition of the things in the Big Lists summarized in this post)
  3. Intuition
  4. The method Tetlock recommends (as interpreted by me in the passage of my blog post you quoted)

My opinion is that 1 and 2 are probably typically better than 3 and that 4 is probably typically better than 1 and 2 and that 1 and 2 are probably about the same. I am not confident in this of course, but the reasoning is: Method 4 has some empirical evidence supporting it, plus plausible arguments/models.* So it's the best. Methods 1 & 2 are like method 3 except that they force you to think more and learn more about the case (incl. relevant arguments

This from Paul Christiano in 2014 is also very relevant (part of it makes similar points to a lot of the recent stuff from Open Philanthropy, but the arguments are very brief; it's interesting to see how things have evolved over the years): Three impacts of machine intelligence

I realise re-reading this that I'm not sure whether these projects are supposed to cost $100million per year or e.g. $100million over their lifetime or something. Maybe something in between?

3
Nathan Young
3y
They are meant to grow to eventually be spending 100 million a year.

(idea probably stolen from somewhere else) create an organisation employing an army of superforecasters to gather facts and/or forecasts about the world that are vitally important from an EA perspective.

Maybe it's hard to get to $100million? E.g. 400 employees each costing $250k would get you there, which (very naively) seems on the high end of what's likely to work well. Also e.g. other comments in this post have said that CSET was set up for $55m/5 years.

1
Ben Snodin
3y
I realise re-reading this that I'm not sure whether these projects are supposed to cost $100million per year or e.g. $100million over their lifetime or something. Maybe something in between?

(extremely speculative) 

Promote global cooperation and moral circle expansion by paying people (/ incentivising them in some smarter way) to have regular video calls with a random other person somewhere on the planet.

Here are some thoughts after reading a book called "The Inner Game of Tennis" by Timothy Gallwey. I think it's quite a famous book and maybe a lot of people know it well already. I consider it to be mainly about how to prevent your system 2/conscious mind/analytical mind from interfering with the performance of your system 1/subconscious mind/intuitive mind. This is explained in the context of tennis, but it seems applicable to many other contexts, as the author himself argues. If that sounds interesting, I recommend checking the book out, it's short and q... (read more)

Takeaways from some reading about economic effects of human-level AI

I spent some time reading things that you might categorise as “EA articles on the impact of human-level AI on economic growth”. Here are some takeaways from reading these (apologies for not always providing a lot of context / for not defining terms; hopefully clicking the links will provide decent context).

... (read more)
1
Ben Snodin
3y
This from Paul Christiano in 2014 is also very relevant (part of it makes similar points to a lot of the recent stuff from Open Philanthropy, but the arguments are very brief; it's interesting to see how things have evolved over the years): Three impacts of machine intelligence

Thanks for this, I think it's really brilliant, I really appreciate how clearly the details are laid out in the blog and report. It's really cool to be able to see external reviewer comments too.

I found it kind of surprising that there isn't any mention of civilizational collapse etc when thinking about growth outcomes for the 21st century (e.g. in Appendix G, but also apparently in your bottom line probabilities in e.g. Section 4.6 "Conclusion" -- or maybe it's there and I missed it / it's not explicit).

I guess your probabilities for various growth outcom... (read more)

2
Tom_Davidson
3y
Great question! I would read Appendix G as conditional on "~no civilizational collapse (from any cause)", but not conditional on "~no AI-triggered fundamental reshaping of society that unexpectedly prevents growth". I think the latter would be incorporated in "an unanticipated bottleneck prevents explosive growth".

Thanks that's interesting, I've heard of it but I haven't looked into it.

Causal vs evidential decision theory

I wrote this last Autumn as a private “blog post” shared only with a few colleagues. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. Decision theories are a pretty well-worn topic in EA circles and I'm definitely not adding new insights here. These are just some fairly naive thoughts-out-loud about how CDT and EDT handle various scenarios. If you've already thought a lot about decision theory you probably won't learn anything from this.

T... (read more)

6
Linch
3y
Are you familiar with MIRI's work on this? One recent iteration is Functional Decision Theory, though it is unclear to me if they made more recent progress since then.  It took me a long time to come around to it, but I currently buy that FDT is superior to CDT in the twin prisoner's dilemma case, while not falling to evidential blackmail (the way EDT does), as well as being notably superior overall in the stylized situation of "how should an agent relate to a world where other smarter agents can potentially read the agent's source code"

Changing your working to fit the answer

I wrote this last Autumn as a private “blog post” shared only with a few colleagues. I’m posting it publicly now (after mild editing) because I have some vague idea that it can be good to make things like this public. It is quite rambling and doesn't really have a clear point (but I think it's at least an interesting topic).

Say you want to come up with a model for AI timelines, i.e. the probability of transformative AI being developed by year X for various values of X. You put in your assumptions (beliefs about the wo... (read more)

One (maybe?) low-effort thing that could be nice would be saying "these are my top 5" or "these are listed in order of how promising I think they are" or something (you may well have done that already and I missed it).

3
MichaelA
3y
Ah, yes, this is probably useful and definitely low-effort (I've now done it in 1 minute, due to your comment). The list was actually already in order of how promising I think they are, and I mentioned that in footnote 1. But I shouldn't expect people to read footnotes, and your feedback plus that other feedback I got on other posts suggests that readers want that sort of thing enough / find it useful enough that that should be said in the main text. So I've now moved that info to the main text (in the summary, before I list the 19 interventions).

I think the main reason I originally put it in a footnote is that it's hard to know what my ranking really means (since each intervention could be done in many different ways, which would vary in their value) or how much to trust it. But my ranking is still probably better than the ranking a reader would form, or than an absence of ranking, given that I've spent more time thinking about this.

Going forward, I'll be more inclined to just clearly tell readers things like my ranking, and less focused on avoiding "anchoring" them or things like that. (So thanks again for the feedback!)