3565 karma · Joined Jul 2018


my two cents:

  • I love the addition of the 'changed my mind' reaction; the rest I'm less excited about - what value does, e.g., the heart reaction add?
  • The Forum-styled version looks much better to me

Has anyone managed to get any use out of GPT-4 integrations yet? I've tried to set up integrations with my private spreadsheets via Zapier, but the painfully slow speed at which GPT-4 writes, plus needing to click a link to confirm every action, makes any small ask slower than just doing it myself.

So far I've been pretty disappointed, but maybe I'm just fixating on tasks that it's currently not well suited for.


This is a great question! There is a real shortage of good cost-effectiveness estimates for large multilaterals such as UNICEF. The problem is that they are extremely difficult to create, for the reasons outlined in the GiveWell article you linked.

Different vaccine programs carried out by GAVI, for example, vary massively in cost-effectiveness. HPV vaccines don't look as cost-effective as rotavirus vaccines, so the overall cost-effectiveness will depend quite a bit on where additional funding is spent!
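As a toy illustration of why this matters (all numbers below are hypothetical, not actual GAVI figures), the blended cost-effectiveness of a portfolio depends heavily on where the marginal dollar goes:

```python
# Toy illustration: blended $/DALY depends on how marginal funding is
# allocated across programs. All figures are hypothetical examples,
# not actual GAVI or GiveWell estimates.
programs = {
    "rotavirus": {"cost_per_daly": 50, "share_of_marginal_funding": 0.3},
    "hpv": {"cost_per_daly": 400, "share_of_marginal_funding": 0.7},
}

def blended_cost_per_daly(programs):
    # Sum DALYs averted per marginal dollar across programs,
    # then invert to get dollars per DALY for the portfolio.
    dalys_per_dollar = sum(
        p["share_of_marginal_funding"] / p["cost_per_daly"]
        for p in programs.values()
    )
    return 1 / dalys_per_dollar

print(round(blended_cost_per_daly(programs), 2))  # ~129.03 with these toy numbers
```

With these made-up inputs the portfolio lands at roughly $129/DALY, far from either program's individual figure, which is why a single headline number for a multilateral can mislead.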

At aidpolicy.org we have been toying with ranking multilaterals on $/DALY in the style of GiveWell, but not only would it be a massive undertaking, the resulting estimates would also have error bars so wide that we worry nobody would take them seriously.

There are some rankings, such as CGD's QuODA, which can give you a sense of the relative effectiveness of multilaterals (for $/DALY purposes I would primarily look at their prioritization and evaluation criteria), but you won't be able to use QuODA to compare a multilateral with GiveWell's charities.

I'm near certain the $1/7 months claim is incorrect - or at least calculated with far fewer caveats than GiveWell's CEAs. My best guess is that UNICEF is significantly less cost-effective than GiveWell's charities. Between any mega-charity and GiveWell's Maximum Impact Fund, I would recommend GiveWell for individual donors.

As @freedomandutility points out, the question GiveWell is trying to answer is: "what is the most impact you, an individual, can have on the margin with your donations?" The answer is not necessarily the same for a government with ten billion to spend. Even a single medium-sized government could cover GiveWell's entire funding gap and have plenty left over. Finding something as cost-effective as GiveWell's recommendations that can effectively absorb $100b is not easy!

I don't say this to justify the current system; I believe governments and multilaterals alike are doing a less-than-stellar job with their development efforts. Were a government actually to fully fund GiveWell, GiveWell should just lower their bar and recommend additional charities.


My only critique is that I think ~1 hour is on the low end of what's optimal here.

Is there any reason you want to focus on Afghanistan in particular?

I'm convinced. What needs to happen for institutions such as the US HHS or the European Commission's DG HERA to take on these initiatives? To what extent are they doing so already?

I suspect one of the bottlenecks may be cheap and well-formulated plans. Good arguments can go far in policy, but I imagine success requires going from "we need UVC/ventilation in every hospital" to "try this specific pilot scheme to test efficacy".

Which pandemic preparedness organizations are working on getting wins around the world? If, e.g., Denmark adopts far-UVC and it's a smash success, it becomes a lot easier to advocate for elsewhere. My impression is that many of the EA organizations doing advocacy mostly work at the broader level of "we need better biosecurity" rather than proposing and pushing for very specific plans. Is my impression correct, or am I just not sufficiently familiar with the field?

20 minutes in and completely hooked. Note for others: don't think you'll 'just briefly check it out to decide if it's worth watching later' if you're planning on your next hour being productive :)

I believe AI will significantly decrease the cost of overregulation, and make many policies attractive that previously were too costly to administrate.

Read why I believe this in my new Substack, which I'm starting so I have a place to write about non-EA stuff!



This is absolutely the case for global health and development. Development is really complicated, and I think EAs tend to vastly overrate just how certain we are about what works best.

When I began working full time in the space, I spent about the first six months getting continuously smacked in the face by just how much there is to know, and how little of it I knew.

I think introductory EA courses can do better at getting people to dig deep. For example, I don't think it's unreasonable to have attendees actually work through a CEA by GiveWell and discuss the many key assumptions it makes. For a workshop I recently ran for a Danish high school talent programme, we created simplified versions which they had no trouble engaging with.

Fantastic to get this update - I was just finding myself complaining about the lack of good object-level AI policy proposals!

At the risk of letting perfect be the enemy of the good, I would love a top level post for each of the recommendations, going into much greater detail. Getting discussions of policy proposals into the open where they can be criticized from diverse perspectives is crucial to arrive at policies that are robustly good.

One thing I find interesting to think about is how well-funded non-governmental actors might be able to bring these policies to life. After all, I expect most progress to come out of a few influential labs. Getting a handshake agreement from those labs would achieve results not too dissimilar from national legislation.

For rapid shutdown mechanisms, for example, the bottleneck seems to me to be developing the actual protocols just as much as getting adoption. If a great protocol were developed that allowed OpenAI leadership to shut down, at the hardware level, a compute cluster running an experimental AI, and adopting it didn't add much overhead, I feel there's a non-zero chance they might adopt it without any coercion. If the overhead is significant, how significant would it be? Is it within the bounds of what a wealthy actor could subsidize?
