All of lukeprog's Comments + Replies

Concrete Biosecurity Projects (some of which could be big)

The authors will have a more-informed answer, but my understanding is that part of the answer is "some 'disentanglement' work needed to be done w.r.t. biosecurity for x-risk reduction (as opposed to biosecurity for lower-stakes scenarios)."

I mention this so that I can bemoan the fact that I think we don't have a similar list of large-scale, clearly-net-positive projects for the purpose of AI x-risk reduction, in part because (I think) the AI situation is more confusing and requires more and harder disentanglement work (some notes on this here and here). Th... (read more)

Democratising Risk - or how EA deals with critics

Hi Michael,

I don't have much time to engage on this, but here are some quick replies:

  • I don't know anything about your interactions with GiveWell. My comment about ignoring vs. not-ignoring arguments about happiness interventions was about me / Open Phil, since I looked into the literature in 2015 and have read various things by you since then. I wouldn't say I ignored those posts and arguments, I just had different views than you about likely cost-effectiveness etc.
  • On "weakly validated measures," I'm talking in part about lack of IRT validation studies
... (read more)
MichaelPlant (6d): Hello Luke, Thanks for this too. I appreciate you've since moved on to other things, so this isn't really your topic to engage on anymore. However, I'll make two comments. First, you said you read various things in the area, including by me, since 2015. It would have been really helpful (to me) if, given you had different views, you had engaged at the time and set out where you disagreed and what sort of evidence would have changed your mind. Second, and similarly, I would really appreciate it if the current team at Open Philanthropy could more precisely set out their perspective on all this. I did have a few interactions with various Open Phil staff in 2021, but I wouldn't say I've got anything like canonical answers on what their reservations are about 1. measuring outcomes in terms of SWB - Alex Berger's recent technical update [https://forum.effectivealtruism.org/posts/uTSxcDzPLiifryqH6/technical-updates-to-our-global-health-and-wellbeing-cause] didn't comment on this - and 2. doing more research or grantmaking into the things that, from the SWB perspective, seem overlooked.
EA megaprojects continued

I can't share more detail right now and they might not work out, but just FYI, I'm currently working on the details of Science #5 and Miscellaneous #2.

Democratising Risk - or how EA deals with critics

FWIW, one of my first projects at Open Phil, starting in 2015, was to investigate subjective well-being interventions as a potential focus area. We never published a page on it, but we did publish some conversation notes. We didn't pursue it further because my initial findings were that there were major problems with the empirical literature, including weakly validated measures, unconvincing intervention studies, one entire literature using the wrong statistical test for decades, etc. I concluded that there might be cost-effective interventions in this spa... (read more)

Hello Luke, thanks for this, which was illuminating. I'll make an initial clarifying comment and then go on to the substantive issues of disagreement.

At least for me, I don't think this is a case of an EA funder repeatedly ignoring work by e.g. Michael Plant — I think it's a case of me following the debate over the years and disagreeing on the substance after having familiarized myself with the literature.

I'm not sure what you mean here. Are you saying GiveWell didn't repeatedly ignore the work? That Open Phil didn't? Something else? As I set out in anothe... (read more)

Thank you Luke – super helpful to hear!!

AI Governance Course - Curriculum and Application

I'm not involved in this program, but I would like to see that happen. Though note that some of the readings are copyrighted.

EA Forum engagement doubled in the last year

FWIW the EA forum seems subjectively much better to me than it did ~2 years ago, both in platform and in content, and much of that intuitively seems plausibly traceable to specific labor of the EA forum team. Thanks for all your work!

Ben_West (2mo): Thanks Luke! We appreciate the kind words.
Great Power Conflict

If you know of work on how AI might cause great power conflict, please let me know

Phrases to look for include "accidental escalation" or "inadvertent escalation" or "strategic stability," along with "AI" or "machine learning." Michael Horowitz and Paul Scharre have both written a fair bit on this, e.g. here.

Zach Stein-Perlman (4mo): Thank you!
The motivated reasoning critique of effective altruism

[EA has] largely moved away from explicit expected value calculations and cost-effectiveness analyses.

How so? I hadn't gotten this sense. Certainly we still do lots of them internally at Open Phil.

Re: cost-effectiveness analyses always turning up positive, perhaps especially in longtermism. FWIW that hasn't been my experience. Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that it's nearly as likely to be net-negative as net-positive given our great unce... (read more)

Linch (4mo): Some quick thoughts: I would guess that Open Phil is better at this than other EA orgs, both because of individually more competent people and much better institutional incentives (ego not wedded to specific projects working). For your specific example, I'm (as you know) new to AI governance, but I would naively guess that most (including competence-weighted) people in AI governance are more positive about AI interventions than you are. Happy to be corrected empirically. (I also agree with Larks that publishing a subset of these may be good for improving the public conversation/training in EA, but I understand if this is too costly and/or if the internal analyses embed too much sensitive information or models)

Certainly we still do lots of them internally at Open Phil.

It might be helpful if you published some more of these to set a good example.

MichaelStJules (4mo): Is this for both technical AI work and AI governance work? For both, what are the main ways these interventions are likely to backfire?
MichaelStJules (4mo): I guess no one is really publishing these CEAs, then? Do you also have CEAs of the meta work you fund, in terms of AI risk reduction/increase?
What are the EA movement's most notable accomplishments?

Much of the concrete life saving and life improvement that GiveWell top charities have done with GiveWell-influenced donations.

In favor of more anthropics research

Is the claimed dissolution by MIRI folks published somewhere?

RyanCarey (5mo): I think they believe in Wei Dai's UDT, or some variant of it, which is very close to Stuart's anthropic decision theory, but you'd have to ask them which, if any, published or unpublished version they find most convincing.
What is the closest thing you know to EA that isn't EA?

Maybe the John A. Hartford Foundation.

Various utilitarianism- and Peter Singer-motivated efforts in global poverty and animal welfare, decades before the modern effective altruism community emerged.

Mohism.

Empirical development economics and GBD-prioritized global health interventions.

Of course, the "rationalist" and "transhumanist" communities have strong similarities, and large chunks of them have essentially merged with EA.

There are various efforts aimed at more widespread use of cost-benefit analysis, e.g. see Sunstein's book.

AMA: The new Open Philanthropy Technology Policy Fellowship

It's mostly about skillsets, context/experience with the DC policy world, and familiarity with Open Philanthropy's programmatic priorities.

AMA: The new Open Philanthropy Technology Policy Fellowship

A large portion of the value from programs like this comes from boosting fellows into career paths where they spend at least some time working in the US government, and many of the most impactful government roles require US citizenship. We are therefore mainly focused on people who have (a plausible pathway to) citizenship and are interested in US government work. Legal and organizational constraints mean it is unlikely that we will be able to sponsor visas even if we run future rounds.

This program is US-based because the US government is especially impor... (read more)

AMA: The new Open Philanthropy Technology Policy Fellowship

I expect Open Philanthropy will want to fund more fellowships like this in the future, but we have some uncertainty about (1) the supply of applicants who are a good fit for the program, and especially (2) the availability of staff and contractors who can run time-intensive programs like this. If we don't run a similar program in the future, I think the most likely reason will be a lack of (2).

tamgent (5mo): Is the staff availability problem more about certain skillsets being in short supply (e.g. ability to evaluate, connect and mentor candidates) or just raw operational power (and if so, is the problem here that it's hard to recruit enough people because of the overhead in recruitment, or you don't want to for another reason), or something else?
A personal take on longtermist AI governance

As far as I know it's true that there isn't much of this sort of work happening at any given time, though over the years there has been a fair amount of non-public work of this sort, and it has usually failed to convince people who weren't already sympathetic to the work's conclusions (about which intermediate goals are vs. aren't worth aiming for, or about the worldview cruxes underlying those disagreements). There isn't even consensus about intermediate goals such as the "make government generically smarter about AI policy" goals you suggested, though in some (not all) cases the objection to that category is less "it's net harmful" and more "it won't be that important / decisive."

weeatquince (6mo): Thank you Luke – great to hear this work is happening, but I'm still surprised by the lack of progress and would be keen to see more such work out in public! (FWIW, minor point, but I am not sure I would phrase a goal as "make government generically smarter about AI policy"; just being "smart" is not good. Ideally you want a combination of smart + has good incentives + has space to take action. To be more precise, when planning I often use COM-B [https://drive.google.com/drive/u/2/folders/1nsyGsFnsfK2_Q7f7T-PZ9Ixtwev7Kwnk] models, as used in international development governance reform work, to ensure all three factors are captured and balanced.)
EA needs consultancies

A couple quick replies:

  • Yes, there are several reasons why Open Phil is reluctant to hire in-house talent in many cases, hence the "e.g." before "because our needs change over time, so we can't make a commitment that there's much future work of a particular sort to be done within our organizations."
  • I actually think there is more widespread EA client demand (outside OP) for EA consulting of the types listed in this post than the post itself represents, because there were several people who gave me feedback on the post and said something like "This is grea
... (read more)
EA needs consultancies

I don't feel strongly. You all have more context than I do on what seems feasible here. My hunch is in favor of RP maintaining current quality (or raising it only a tiny bit) and scaling quickly for a while — I mostly wanted to give some counterpoints to your suggestion that maybe RP should lower its quality to get more quantity.

EA needs consultancies

I don't think EAs have a comparative advantage in policy/research in general, but I do think some EAs have a comparative advantage in doing some specific kinds of policy/research for other EAs, since EAs care more than many (not all) clients about certain analytic features, e.g. scope-sensitivity, focus on counterfactual impact, probability calibration, reasoning transparency of a particular sort, a tolerance for certain kinds of weirdness, etc.

EA needs consultancies

Other Rethink Priorities clients (including at Open Phil) might disagree, but my hunch is that if anything, higher quality and lower quantity is the way to go, because a client like me has less context on consultants doing some project than I do on someone I've directly managed (internally) on research projects for 2 years. So e.g. Holden vetted my Open Phil work pretty closely for 2 years and now feels less need to do so because he has a sense of what my strengths and weaknesses are, where he can just defer to me and where he should make sure to develop h... (read more)

Linch (7mo): In this case, do you think RP should focus more on quality and less on quantity as we scale [https://forum.effectivealtruism.org/posts/CwFyTacABbWuzdYwB/ea-needs-consultancies?commentId=k2bEyiFe4ae3aCWf5], by satisficing on quantity and focusing/optimizing on research quality (concretely, this may mean being very slow to add additional researchers and primarily using them as additional quality checks on existing work, over trying to have more output in novel work)? This is very much not the way we currently plan to scale, which is closer to focusing on maintaining research quality and trying to increase quantity/output. (reiterating that all impressions here are my own)
EA needs consultancies

Thanks for your thoughtful comment!

Re: reluctance. Can you say more about the concern about donor perceptions? E.g. maybe grantmakers like me should be more often nudging grantees with questions like "How could you get more done / move faster by outsourcing some work to consultants/contractors?" I've done that in a few cases but haven't made a consistent effort to signal willingness to fund subcontracts.

What do you mean about approval from a few parties? Is it different than other expenditures?

Re: university rules. Yes, very annoying. BERI is trying to hel... (read more)

(Personal views only)

I found this post and the comments very interesting, and I'd be excited to see more people doing the sort of things suggested in this post.

That said, there's one point of confusion that remains for me, which is somewhat related to the point that "Right now the market for large EA consulting seems very isolated to OpenPhil". In brief, the confusion is something like "I agree that there is sufficient demand for EA consultancies. But a large enough fraction of that demand is from Open Phil that it seems unclear why Open Phil wouldn't inst... (read more)

Ozzie Gooen (7mo): Contractors are known to be pricey and have a bit of a bad reputation in some circles. Research hires have traditionally been dirt cheap (though that is changing). I think if an org spends 10-30% of its budget on contractors, it would be treated with suspicion. It feels like a similar situation to how a lot of charities tried to have insanely low overheads (and many outside EA still do). I think that grantmakers / influential figureheads making posts like yours above, and applying some pressure, could go a long way here. It should be obvious to the management of the nonprofit that the funders won't view them poorly if they spend a fair bit on contractors, even if sometimes this results in failures. (Contract work can be risky for clients, though perhaps less risky than hiring.)

At many orgs, regular expenditures can be fairly annoying. Contracting engagements can be more expensive and more unusual, so new arrangements have to sometimes be figured out. I've had some issues around hiring contractors myself in previous startups for a similar reason. The founders would occasionally get cold feet, sometimes after I agreed to an arrangement with a contractor.

I agree. The main thing for contractors is the risk of loss of opportunities. So if there were multiple possible clients funded by one group, but each makes separate decisions, and that one group is unlikely to stop funding all of those subgroups at once, things should be fine.

Agreed. Sorry, this was vague. I meant cases where: 1) Person A is employed at Organization B. 2) Person A leaves employment. 3) Person A later (or immediately) joins Organization B as a contractor. I've done this before. The big benefit is that person A has established a relationship with Organization B, so this relationship continues to do a lot of work (similar to what you describe).

Yep, this is what I was thinking about above in point (3) on the bottom. Having more methods to encourage interaction seems good. There's been a bit of di
EA needs consultancies

The problem I'm trying to solve (at the top of the post) is that (non-consultancy) EA organizations like Open Phil, for a variety of reasons, can't hire the talent we need to accomplish everything we'd like to accomplish. So when we do manage to hire someone into a specific role, I think their work in that role can be highly valuable, and if they're performing well in that role after the first ~year then my hunch is they should stay in that role for at least a few years. That said, we've had staff leave and become a grantee/similar instead, and I could imagine some staff leaving to become an EA consultant at some point if they think they can accomplish more good that way and/or if they think that's a better fit for them personally.

Linch (7mo): Hmm I think the main reason to start a consultancy is for scalability, since for whatever reasons existing orgs can't hire fast while maintaining quality. I do think value of time is unusually high at Open Phil compared to the majority of other EA orgs I'm aware of, which points against people leaving Open Phil specifically.
EA needs consultancies

I don't think that would play to Open Phil's comparative advantages especially well. I think Open Phil should focus on figuring out how to move large amounts of money toward high-ROI work.

EA needs consultancies

Interesting, thanks, I didn't know about this. That group's first newsletter says:

  • The EACN network consist of 200+ members by now
  • All major consulting firms represented
  • BCG & McKinsey launched their own internal EA slack channels - featuring 70+ consultants each

Those are some pretty compelling numbers, but I'd be a lot more optimistic if they were engaged enough to show up in the comments here. (Maybe — I could imagine they're engaged with EA ideas in other ways, but now we're into territory where I'd feel like I'd need to do more vetting.)

EA needs consultancies

Thanks, I didn't know this!

EA needs consultancies

I agree, the EA Infrastructure Fund seems like a great source of funding for launching potential new EA consultancies!

EA needs consultancies

Yeah, I originally had the same thought, and I considered e.g. web development, event management, legal services, and HR services as not benefiting enough from EA context etc. to be worth the opportunity cost of EA talent, but then several people at multiple organizations said "Actually we've struggled to get what we want from non-EA consultants doing those things. I really wish I could contract EA consultants to do that work instead." So I added them to the list of possibilities for services that EA consultancies could provide.

I'm still not sure which con... (read more)

Nathan Young (7mo): Will ponder. Thanks again for going to the effort. I largely agree regardless.
On the limits of idealized values

Just FYI, some additional related literature is cited here.

High Impact Careers in Formal Verification: Artificial Intelligence

Is it easy to dig up a source for the RL agent that learned to crash into the deck?

jtcbrule (7mo): I don't remember where I initially read about it, but I found a survey paper that mentions it (and several other, similar anecdotes), along with some citations: https://arxiv.org/abs/1803.03453v1
EA is a Career Endpoint

I broadly endorse this advice.

I would agree strongly, but would advise people to think about how to stay connected to EA via both giving and their social circles, ideally including local EA leadership positions, while building their resume and skill set.

Why AI is Harder Than We Think - Melanie Mitchell

I wish "relative skeptics" about deep learning capability timelines such as Melanie Mitchell and Gary Marcus would move beyond qualitative arguments and try to build models and make quantified predictions about how quickly they expect things to proceed, a la Cotra (2020) or Davidson (2021) or even Kurzweil. As things stand today, I can't even tell whether Mitchell or Marcus have more or less optimistic timelines than the people who have made quantified predictions, including e.g. authors from top ML conferences.

CarlShulman (8mo): She does talk about century-plus timelines here and there.
International cooperation as a tool to reduce two existential risks.

I think EAs focused on x-risks are typically pretty gung-ho about improving international cooperation and coordination, but it's hard to know what would actually be effective for reducing x-risk, rather than just e.g. writing more papers about how cooperation is desirable. There are a few ideas I'm exploring in the AI governance area, but I'm not sure how valuable and tractable they'll look upon further inspection. If you're curious, some concrete ideas in the AI space are laid out here and here.

johl@umich.edu (9mo): Great points. I wonder if building awareness of x-risk in the general public (i.e. outside EAs) could help increase tractability and make research papers on cooperation more likely to get put into practice. I'm curious which ideas you're exploring too. I saw your post [https://www.openphilanthropy.org/blog/ai-governance-grantmaking] on the topic from last year. Reading some of the research linked there has been super helpful! Thanks for linking these resources too. Looking forward to reading them.
EA Debate Championship & Lecture Series

This seems great to me, please do more.

Strong Longtermism, Irrefutability, and Moral Progress

I know I'm late to the discussion, but…

I agree with AGB's comment, but I would also like to add that strong longtermism seems like a moral perspective with much less "natural" appeal, and thus much less ultimate growth potential, than neartermist EA causes such as global poverty reduction or even animal welfare.

For example, I'm a Program Officer in the longtermist part of Open Philanthropy, but >80% of my grantmaking dollars go to people who are not longtermists (who are nevertheless doing work I think is helpful for certain longtermist goals). Why? Bec... (read more)

Forecasting Newsletter: January 2021

Cool search engine for probabilities! Any chance you could add Hypermind?

NunoSempere (1y): Thanks! Sure, I just did. Just search [https://metaforecast.org/] for "Hypermind" to see all of them, or for e.g., "covid-19" to get some results which include questions from Hypermind as well.
BrianTan (1y): I agree! I think people will like the idea of donating to help build infrastructure for the growth of EA, rather than just donating to "meta".
Informational Lobbying: Theory and Effectiveness

Thanks for this!

FWIW, I'd love to see a follow-up review on lobbying Executive Branch agencies. They're less powerful than Congress, but often more influenceable as well, and can sometimes be the most relevant target of lobbying if you're aiming for a very specific goal (that is too "in the weeds" to be addressed directly in legislation). I found Godwin et al. (2012) helpful here, but I haven't read much else. Interestingly, Godwin et al. find that some of the conclusions from Baumgartner et al. (2009) about Congressional lobbying don't hold for agency lobbying.

Matt_Lerner (1y): Though I didn't read Godwin (now on my to-do list), I encountered some useful research that seemed to point toward the idea that regulatory lobbying could be a lot more efficient than legislative lobbying. By the end of my review, I had started to think that it would have been more productive to do that instead. Since I finished, though, I've been thinking about one of the main concerns I have about regulatory lobbying. The fact that it's probably (comparatively) easy to influence regulatory agencies means that it's pretty easy to walk back any positive rule changes. This seems to happen fairly frequently, e.g. with EPA regulations. From that standpoint, the stickiness of the status quo in the legislative context is also an advantage: when policy change succeeds legislatively, the new policy becomes part of the difficult-to-change status quo. For longtermist-oriented policies, it seems like this is a major advantage over regulatory changes. Curious to hear your thoughts.
Forecasting Newsletter: May 2020.

Thanks!

Some additional recent stuff I found interesting:

  • This summary of US and UK policies for communicating probability in intelligence reports.
  • Apparently Niall Ferguson’s consulting firm makes & checks some quantified forecasts every year: “So at the beginning of each year we at Greenmantle make predictions about the year ahead, and at the end of the year we see — and tell our clients — how we did. Each December we also rate every predictive statement we have made in the previous 12 months, either “true”, “false” or “not proven”. In recent years,
... (read more)
Forecasting Newsletter: April 2020

The headline looks broken in my browser. It looks like this:

/(Good Judgement?[^]*)|(Superforecast(ing|er))/gi
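
For context, here is a minimal sketch of how that pattern behaves if it is interpreted as a JavaScript/TypeScript regex. The variable names and sample headlines below are invented for illustration and are not taken from the newsletter's actual code:

    // Hypothetical illustration (assumed names and sample strings) of what the
    // leaked pattern matches when treated as a JS/TS regex.
    const headlinePattern = /(Good Judgement?[^]*)|(Superforecast(ing|er))/gi;

    // `[^]` matches any character (including newlines), so the first alternative
    // swallows everything after "Good Judgemen"/"Good Judgement"; the second
    // matches "Superforecasting" or "Superforecaster", case-insensitively.
    const samples = [
      "Good Judgement Open adds new questions",
      "Superforecasters weigh in",
      "Metaculus monthly roundup",
    ];

    for (const s of samples) {
      console.log(s, "->", headlinePattern.test(s));
      headlinePattern.lastIndex = 0; // reset, since the /g flag makes test() stateful
    }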

The last explicit probabilistic prediction I made was probably a series of forecasts on my most recent internal Open Phil grant writeup, since it's part of our internal writeup template to prompt the grant investigator for explicit probabilistic forecasts about the grant. But it could've easily been elsewhere; I do somewhat-often make probabilistic forecasts just in conversation, or in GDoc/Slack comments, though for those I usually spend less time

... (read more)
Forecasting Newsletter: April 2020

Note that the headline ("Good Judgement Project: gjopen.com") is still confusing, since it seems to be saying GJP = GJO. The thing that ties the items under that headline is that they are all projects of GJI. Also, "Of the questions which have been added recently" is misleading since it seems to be about the previous paragraph (the superforecasters-only questions), but in fact all the links go to GJO.

NunoSempere (2y): Edited again. If you want, throw me a bone: what's the last explicit probabilistic prediction you've made? Also, I liked your review on How to Measure Anything [https://www.lesswrong.com/posts/ybYBCK9D7MZCcdArB/how-to-measure-anything], which feels relevant to the topic at hand. NNTR.
Forecasting Newsletter: April 2020

Nice to see a newsletter on this topic!

Clarification: The GJO coronavirus questions are not funded by Open Phil. The thing funded by Open Phil is this dashboard (linked from our blog post) put together by Good Judgment Inc. (GJI), which runs both GJO (where anyone can sign up and make forecasts) and their Superforecaster Analytics service (where only superforecasters can make forecasts). The dashboard Open Phil funded uses the Superforecaster Analytics service, not GJO. Also, I don't think Tetlock is involved in GJO (or GJI in general) much at all these da

... (read more)
NunoSempere (2y): Thanks for the correction; edited.
Insomnia with an EA lens: Bigger than malaria?

I wrote up some thoughts on CBT-I and the evidence base behind it here.

Information security careers for GCR reduction

Is it easy to say more about (1) which personality/mindset traits might predict infosec fit, and (2) infosec experts' objections to typical GCR concerns of EAs?

Rethink Priorities 2019 Impact and Strategy

FWIW I was substantially positively surprised by the amount and quality of the work you put out in 2019, though I didn't vet any of it in depth. (And prior to 2019 I think I wasn't aware of Rethink.)

I'm Buck Shlegeris, I do research and outreach at MIRI, AMA

FWIW, it's not clear to me that AI alignment folks with different agendas have put less effort into (or have made less progress on) understanding the motivations for other agendas than is typical in other somewhat-analogous fields. Like, MIRI leadership and Paul have put >25 (and maybe >100, over the years?) hours into arguing about merits of their differing agendas (in person, on the web, in GDocs comments), and my impression is that central participants to those conversations (e.g. Paul, Eliezer, Nate) can pass the others' ideological Turing tests

... (read more)