All of lukeprog's Comments + Replies

Re: why our current rate of spending on AI safety is "low." At least for now, the main reason is lack of staff capacity! We're putting a ton of effort into hiring (see here) but are still not finding as many qualified candidates for our AI roles as we'd like. If you want our AI safety spending to grow faster, please encourage people to apply!

4
RyanCarey
15d
There is also the theoretical possibility of disbursing a larger number of $ per hour of staff capacity.

I'll also note that GCRs was the original name for this part of Open Phil, e.g. see this post from 2015 or this post from 2018.

Holden has been working on independent projects, e.g. related to RSPs; the AI teams at Open Phil no longer report to him and he doesn't approve grants. We all still collaborate to some degree, but new hires shouldn't e.g. expect to work closely with Holden.

We fund a lot of groups and individuals and they have a lot of different (and sometimes contradicting) policy opinions, so the short answer is "yes." In general, I really did mean the "tentative" in my 12 tentative ideas for US AI policy, and the other caveats near the top are also genuine.

That said, we hold some policy intuitions more confidently than others, and if someone disagreed pretty thoroughly with our overall approach and they also weren't very persuasive that their alternate approach would be better for x-risk reduction, then they might not be a good fit for the team.

Echoing Eli: I've run ~4 hiring rounds at Open Phil in the past, and in each case I think that if the top few applicants had disappeared, we probably just wouldn't have made a hire, or would have made significantly fewer hires.

Indeed. There aren't hard boundaries between the various OP teams that work on AI, and people whose reporting line is on one team often do projects for or with a different team, or in another team's "jurisdiction." We just try to communicate about it a lot, and our team leads aren't very possessive about their territory — we just want to get the best stuff done!

The hiring is more incremental than it might seem. As explained above, Ajeya and I started growing our teams earlier via non-public rounds, and are now just continuing to hire. Claire and Andrew have been hiring regularly for their teams for years, and are also just continuing to hire. The GCRCP team only came into existence a couple months ago and so is hiring for that team for the first time. We simply chose to combine all these hiring efforts into one round because that makes things more efficient on the backend, especially given that many people might ... (read more)

The technical folks leading our AI alignment grantmaking (Daniel Dewey and Catherine Olsson) left to do more "direct" work elsewhere a while back, and Ajeya only switched from a research focus (e.g. the Bio Anchors report) to an alignment grantmaking focus late last year. She did some private recruiting early this year, which resulted in Max Nadeau joining her team very recently, but she'd like to hire more. So the answer to "Why now?" on alignment grantmaking is "Ajeya started hiring soon after she switched into a grantmaking role. Before that, our initia... (read more)

Cool stuff. Do you only leverage prediction markets, or do you also leverage prediction polls (e.g. Metaculus)? My sense of the research so far is that they tend to be similarly accurate with similar numbers of predictors, with perhaps a slight edge for prediction polls.

4
vandemonian
11mo
Thank you. Yes, I include Metaculus, GJ Open, etc. Also any forecasts made by top groups like Samotsvety, the Swift Centre, publicly available Superforecasts, etc.

Another historical point I'd like to make is that the common narrative about EA's recent "pivot to longtermism" seems mostly wrong to me, or at least more partial and gradual than it's often presented to be, because all four leading strands of EA — (1) neartermist human-focused stuff, mostly in the developing world, (2) animal welfare, (3) long-term future, and (4) meta — have been major themes in the movement since its relatively early days, including at the very first "EA Summit" in 2013 (see here), and IIRC for at least a few years before then.

9
MatthewDahlhausen
1y
I don't think anyone is denying that longtermist and existential risk concerns were part of the movement from the beginning, or thinks that longtermist concerns don't belong in a movement about doing the most good. I think the concern is about the shift from longtermist concerns coexisting relatively equally with other cause areas to becoming much more dominant. Longtermism is now much more prominent, both in terms of funding and in the attention it receives in community growth and introductory materials.

MacAskill was definitely a longtermist in 2012. But I don't think he mentioned it in Doing Good Better, or any of the more public/introductory narrative around EA.

I think the "pivot to longermism" narrative is a reaction to a change in communication strategy (80000 hours becoming explicitly longtermist, EA intro materials becoming mostly longtermist). I think critics see it as a "sharp left turn" in the AI Alignment sense, where the longtermist values were there all along but were much more dormant while EA was less powerful.

There's a previous discussion h... (read more)

What's his guess about how "% of humans enslaved (globally)" evolved over time? See e.g. my discussion here.

5
David van Beveren
1y
Thanks Luke, appreciate it!

How many independent or semi-independent abolitionist movements were there around the world during the period of global abolition, vs. one big one that started with Quakers+Britain and then was spread around the world primarily by Europeans? (E.g. see footnote 82 here.)

Re: more neurons = more valenced consciousness, does the full report address the hidden qualia possibility? (I didn't notice it at a quick glance.) My sense was that people who argue for more neurons = more valenced consciousness are typically assuming hidden qualia, but your objections involving empirical studies are presumably assuming no hidden qualia.

3
Adam Shriver
1y
Here's the report on conscious subsystems: https://forum.effectivealtruism.org/posts/vbhoFsyQmrntru6Kw/do-brains-contain-many-conscious-subsystems-if-so-should-we 
1
Oliver Sourbut
1y
I've given a little thought to this hidden qualia hypothesis but it remains very confusing for me. To what extent should we expect to be able to tractably and knowably affect such hidden qualia?

We have a report on conscious subsystems coming out I believe next week, which considers the possibility of non-reportable valenced conscious states. 

Also (speaking only about my own impressions), I'd say that while some people who talk about neuron counts might be thinking of hidden qualia (e.g. Brian Tomasik), it's not clear to me that this is the assumption of most. I don't think the hidden qualia assumption, for example, is an explicit assumption of Budolfson and Spears or of MacAskill's discussion in his book (though of course I can't speak to what they believe privately).


I really appreciate this format and would love to see other inaccurate articles covered in this way (so long as the reviewer is intellectually honest, of course).

Answer by lukeprog, Oct 03, 2022

I suspect this is because there isn't a globally credible/legible consensus body generating or validating the forecasts, akin to the IPCC for climate forecasts that are made with even longer time horizons.

Cool, I might be spending a few weeks in Belgrade sometime next year! I'll reach out if that ends up happening. (Writing from Dubrovnik now, and I met up with some rationalists/EAs in Zagreb ~1mo ago.)

1
Dušan D. Nešić (Dushan)
2y
Sure, we'll be around! Let us know, we'd be happy to meet new folks!

(cross-posted)

Re: Shut Up and Divide. I haven't read the other comments here but…

For me, effective-altruism-like values are mostly second-order, in the sense that a lot of my revealed behavior shows that a lot of the time I don't want to help strangers, animals, future people, etc. But I think I "want to want to" help strangers, and sometimes the more goal-directed rational side of my brain wins out and I do the thing consistent with my second-order desires, something to help strangers at personal sacrifice to myself (though I do this less than e.g. Will M... (read more)

FWIW I generally agree with Eli's reply here. I think maybe EAG should 2x or 3x in size, but I'd lobby for it to not be fully open.

2
Ivy Mazzola
2y
I suspect that 2x or 3x will happen naturally within a year, given that there is a bar on fit for the event rather than a bar on quantity. People who aren't getting in this year will surely, if they are dedicated EAs, have more to list on their EAG applications next year. 

Not sure it's worth the effort, but I'd find the charts easier to read if you used a wider variety of colors.

9
TylerMaule
2y
Fixed
24
Dewi
2y

+1, I'd also recommend using colours that are accessible for people with colour vision deficiency

As someone with a fair amount of context on longtermist AI policy-related grantmaking that is and isn't happening, I'll just pop in here briefly to say that I broadly disagree with the original post and broadly agree with [abergal's reply](https://forum.effectivealtruism.org/posts/Xfon9oxyMFv47kFnc/some-concerns-about-policy-work-funding-and-the-long-term?commentId=TEHjaMd9srQtuc2W9).

Hey Luke. Great to hear from you. Also, thank you for your pushback on an earlier draft where I was getting a lot of stuff wrong and leaping to silly conclusions; it was super helpful. FWIW I don't know how much any of this applies to Open Phil.

Just to pin down what it is you agree / disagree with:

For what it is worth, I also broadly agree with abergal's reply. The tl;dr of both the original post and abergal's comment is basically the same: hey, it [looks like from the outside / is the case that] the LTFF is applying a much higher bar to direct policy work tha... (read more)

FWIW I don't use "theory of victory" to refer to 95th+ percentile outcomes (plus a theory of how we could plausibly have ended up there). I use it to refer to outcomes where we "succeed / achieve victory," whether I think that represents the top 5% of outcomes or the top 20% or whatever. So e.g. my theory of victory for climate change would include more likely outcomes than my theory of victory for AI does, because I think succeeding re: AI is less likely.

1
Jack Cunningham
2y
Thanks for the clarification! I've edited the post to reflect your feedback.

FWIW, I wouldn't say I'm "dumb," but I dropped out of a University of Minnesota counseling psychology undergrad degree and have spent my entire "EA" career (at MIRI then Open Phil) working with people who are mostly very likely smarter than I am, and definitely better-credentialed. And I see plenty of posts on EA-related forums that require background knowledge or quantitative ability that I don't have, and I mostly just skip those.

Sometimes this makes me insecure, but mostly I've been able to just keep repeating to myself something like "Whatever, I'm ex... (read more)

2
TomChivers
2y
re the webzine, I feel like Works in Progress covers a lot of what you're looking for (it's purportedly progress studies rather than EA, but the mindset is very similar and the topics overlap)

This is pretty funny because, to me, Luke (who I don't know and have never met) seems like one of the most intimidatingly smart EA people I know of.

4
Joseph Lemien
2y
That would be great! I'd love to see this. I consider myself fairly smart/well-read, but I don't think that I have the background or the quantitative skills to comprehend advanced topics. I would very much like to see content targeted at a general audience, the way that I can find books about the history of the earth or about astrophysics targeted at a general audience.

Thanks for this comment. I really appreciate what you said about just being excited to help others as much as possible, rather than letting insecurities get the better of you.

Interesting that you mentioned the idea of an EA webzine because I have been toying with the idea of creating a blog that shares EA ideas in a way that would be accessible to lots of people. I’m definitely going to put some more thought into that idea.

Vox’s Future Perfect is pretty good for this!

Since this exercise is based on numbers I personally made up, I would like to remind everyone that those numbers are extremely made up and come with many caveats given in the original sources. It would not be that hard to produce numbers more reasonable than mine, at least re: moral weights. (I spent more time on the "probability of consciousness" numbers, though that was years ago and my numbers would probably be different now.)

Despite ample need for materials science in pandemic prevention, electrical engineers in climate change, civil engineers in civilisational resilience, and bioengineering in alternative proteins, EA has not yet built a community fostering the talent needed to meet these needs.

Also engineers who work on AI hardware, e.g. to help develop the technologies and processes needed to implement most compute governance ideas!

5
lennart
2y
Just came here to comment the same. :) See this profile by 80k, this post, and my sequence for more. Also, don't hesitate to reach out if you want to hear more about roles for AI hardware experts or if someone is interested.
2
Jessica Wen
2y
You're absolutely right! I think there are probably many more areas where engineers are needed, as well as future cause areas that we haven't discovered/explored yet, so this is definitely not an exhaustive list. We're open to chatting with anybody/any organisation who has or forecasts a deficit of engineering expertise to find out how we can help!

+1 to the question, I tried to figure this out a couple years ago and all the footnotes and citations kept bottoming out without much information having been provided.

Thanks for this! I looked into this further and tweaked the final paragraph of the post and its footnote as a result.

a Christian EA I heard about recently who lives in a van on the campus of the tech company he works for, giving away everything above $3000 per year

Will this person please give an in-depth interview on some podcast? Could be anonymous if desired.

That person is Oliver Yeung and he has done a two part talk where he discusses this - main talk, Q&A.

(I spoke to him to okay sharing these, if any interviewer wants to speak to him then DM me and I can put you in touch)

Very minor note, but I love that you included "practice the virtue of silence" in your list.

practice the virtue of silence

 

Honest question: Besides the negative example of myself, can you or the OP give some examples of practicing or not practicing this virtue?

Motivation: The issue here is heterogeneity (which might be related in a deeper way to the post itself). 

I think some readers are going to overindex on this and become very silent, while I, or other unvirtuous people, will just ignore it (or even pick at it irritatingly).

So without more detail, the result could be the exact opposite of what you want.

It's funny, I've done this so many times (including commenting on others' docs of this sort) that I sort-of forgot that not everyone does this regularly.

9
Stefan_Schubert
2y
Yes, effective altruism has many unusual norms and practices like this, which ultimately derive from our focus on impact. The benefits of receiving advice often outweigh the costs of giving it, so it makes sense for an impact-focused community to have this kind of norm. It's also true that it's easy to forget that these norms are unusual because you're so used to them.

An important point here is that if you're considering this move, there's a decent/good chance you'll be able to find career transition funding that gives you 3-12 months of runway after you quit your job, during which you can talk to people full-time, read lots of stuff, apply to lots of things, etc., without having to burn through much or any of your savings while trying to make the transition work.

4
Imma
2y
Is that only true for people who have a very good track record or are very talented or skilled?

I agree. To point to a single source of funding for this call to action, I encourage relevant onlookers to look into the Effective Altruism Infrastructure Fund.

It's a fair question. Technically speaking, of course progress can be more incremental, and some small pieces can be built on with other small pieces. Ultimately that's what happened with Khan's series of papers on the semiconductor supply chain and export control options. But in my opinion that kind of thing almost never really happens successfully when it's different authors building on each other's MVPs (minimum viable papers) rather than a single author or team building out a sorta-comprehensive picture of the question they're studying, with all the context and tacit knowledge they've built up from the earlier papers carrying over to how they approach the later papers.

Huge +1 to this post! A few reflections:

  • As someone who has led or been involved in many hiring rounds in the last decade, I'd like to affirm most of the points above, e.g.: it's very hard to predict what you'll get offers for, you'll sometimes learn about personal fit and improve your career capital, stated role "requirements" are often actually fairly flexible, etc.
  • Applicants who get the job, or make it to the final stage, often comment that they're surprised they got so far and didn't think they were a strong fit but applied because a friend told the
... (read more)

To support people in following this post's advice, employers (including Open Phil?) need to make it even quicker for applicants to submit the initial application materials

From my perspective as an applicant, fwiw, I would urge employers to reduce the scope of questions in the initial application materials, more so than the time commitment. EA orgs have a tendency to ask insanely big questions of their early-stage job applicants, like "How would you reason about the moral value of humans vs. animals?" or "What are the three most important ways our research ... (read more)

As a college dropout from the SF Bay Area EA/rationalist community where it's common for people at parties (including non-EA/rationalist parties) to brag about who dropped out of school earliest, I've never really grokked some people's impression that EA is highly credentialist.

If you're privileged in other ways, it's easier to get away with dropping out (or even use it as a countersignal). It's an intersectional issue.

1
Michael_Wiebe
2y
Yes, seems like clear self-selection of people who enjoy schooling.

Random thought: another way in which such a group could prepare for action is to have some experience commissioning forecasts on short notice from platforms like Good Judgment, Metaculus, Hypermind, etc., so that when there's some emergency (or signs that there might soon be an emergency, a la the early-Jan evidence about what became the COVID-19 pandemic), ALERT can immediately commission crowdcasts that help to track the development or advent of the emergency.

See also what Linch proposes in "Why short-range forecasting can be useful for longtermism":

To do this, I propose the hypothetical example of a futuristic EA Early Warning Forecasting Center. The main intent is that, in the lead up to or early stages of potential major crises (particularly in bio and AI), EAs can potentially (a) have several weeks of lead time to divert our efforts to respond rapidly to such crises and (b) target those efforts effectively.

FWIW a big thing for Open Phil and a couple other EA-ish orgs I've spoken to is that very few lawyers are willing to put probabilities on risks, so they'll just say "I advise against X," but what we need is "If you do X then the risk of A is probably 1%-10%, the risk of B is <1%, and the risk of C is maybe 1%-5%." So it would be nice if you could do some calibration training etc., if you haven't already.

8
Tyrone-Jay Barugh
2y
Thanks for the suggestion, Luke. I hadn't considered this. I'm going to have a go at the Open Phil Calibration Training applet first, and will scour the forum and Lesswrong for other useful training. I've had mixed experiences using probabilistic language in my legal advice. It really depends on the client and the advisor being able to think like that. But I've got some internal clients who have responded well to this sort of advice - explaining things in terms of expected value can be especially good when giving advice about counterparties who won't tell us what they're thinking (e.g. in litigation or commercial negotiations). It would be excellent to work with EAs who think like this without prompting, and who actually expect that kind of advice.

Yeah CSET isn't an EA think tank, though a few EAs have worked there over the years.

Yes, this is part of the reason I personally haven't prioritized funding European think tanks much, in addition to my grave lack of context on how policy and politics work in the most AI-relevant European countries.

4
Stefan_Schubert
2y
I guess the US is generally more important also.