lukeprog

The motivated reasoning critique of effective altruism

[EA has] largely moved away from explicit expected value calculations and cost-effectiveness analyses.

How so? I hadn't gotten this sense. Certainly we still do lots of them internally at Open Phil.

Re: cost-effectiveness analyses always turning up positive, perhaps especially in longtermism: FWIW, that hasn't been my experience. Instead, my experience is that every time I investigate the case for some AI-related intervention being worth funding under longtermism, I conclude that it's nearly as likely to be net-negative as net-positive given our great uncertainty, and so I end up stuck doing almost entirely "meta" things like creating knowledge and talent pipelines.

What are the EA movement's most notable accomplishments?

Much of the concrete life saving and life improvement that GiveWell top charities have done with GiveWell-influenced donations.

In favor of more anthropics research

Is the claimed dissolution by MIRI folks published somewhere?

What is the closest thing you know to EA that isn't EA?

Maybe the John A. Hartford Foundation.

Various utilitarianism- and Peter Singer-motivated efforts in global poverty and animal welfare, decades before the modern effective altruism community emerged.

Mohism.

Empirical development economics and GBD-prioritized global health interventions.

Of course, the "rationalist" and "transhumanist" communities have strong similarities, and large chunks of them have essentially merged with EA.

There are various efforts aimed at more widespread use of cost-benefit analysis, e.g. see Sunstein's book.

AMA: The new Open Philanthropy Technology Policy Fellowship

It's mostly about skillsets, context/experience with the DC policy world, and familiarity with Open Philanthropy's programmatic priorities.

AMA: The new Open Philanthropy Technology Policy Fellowship

A large portion of the value from programs like this comes from boosting fellows into career paths where they spend at least some time working in the US government, and many of the most impactful government roles require US citizenship. We are therefore mainly focused on people who have (a plausible pathway to) citizenship and are interested in US government work. Legal and organizational constraints mean it is unlikely that we will be able to sponsor visas even if we run future rounds.

This program is US-based because the US government is especially important to our programmatic priorities. That said, it's possible we'll run (or fund someone else to run) a similar program in one or more non-US countries in the future, perhaps most likely in the UK.

AMA: The new Open Philanthropy Technology Policy Fellowship

I expect Open Philanthropy will want to fund more fellowships like this in the future, but we have some uncertainty about (1) the supply of applicants who are a good fit for the program, and especially (2) the availability of staff and contractors who can run time-intensive programs like this. If we don't run a similar program in the future, I think the most likely reason will be a lack of (2).

A personal take on longtermist AI governance

As far as I know it's true that there isn't much of this sort of work happening at any given time, though over the years there has been a fair amount of non-public work of this sort, and it has usually failed to convince people who weren't already sympathetic to the work's conclusions (about which intermediate goals are vs. aren't worth aiming for, or about the worldview cruxes underlying those disagreements). There isn't even consensus about intermediate goals such as the "make government generically smarter about AI policy" goals you suggested, though in some (not all) cases the objection to that category is less "it's net harmful" and more "it won't be that important / decisive."

EA needs consultancies

A couple quick replies:

  • Yes, there are several reasons why Open Phil is reluctant to hire in-house talent in many cases, hence the "e.g." before "because our needs change over time, so we can't make a commitment that there's much future work of a particular sort to be done within our organizations."
  • I actually think there is more widespread EA client demand (outside OP) for EA consulting of the types listed in this post than the post itself represents, because several people who gave me feedback on the post said something like "This is great, I think my org has lots of demand for several of these services if they can be provided to a sufficient quality level, but please don't quote me on that because I haven't thought hard enough about this and don't want people to become over-enthusiastic about this on the basis of my off-the-cuff reaction." Perhaps I should've mentioned this in the original post.