All of AppliedDivinityStudies's Comments + Replies

Research idea: Evaluate the IGM economic experts panel

If you read the expert comments, very often they complain that the question is poorly phrased. It's typically about wording like "would greatly increase" where there's not even an attempt to define "greatly". So if you want to improve the panel or replicate it, that is my #1 recommendation.

...My #2 recommendation is to create a Metaculus market for every IGM question and see how it compares.

Larks (8d): Additionally, sometimes the question seems to ask about one specific cost or benefit of a policy, and respondents are unsure how to answer if they think that issue is unimportant but disagree/agree for other reasons.
Is EA over-invested in Crypto?

At what level of payoff is that bet worth it? Let's say the bet is a 50/50 triple-or-nothing bet. So, either EA ends up with half its money, or ends up with double. I'd guess (based on not much) that right now losing 50% of EA's money is more negative than doubling EA's money is positive.


There is an actual correct answer, at least in the abstract. According to the Kelly criterion, on a 50/50 triple-or-nothing bet, you should put down 25% of your bankroll.

Say EA is now at around 50/50 Crypto/non-Crypto, what kind of returns would justify that allocation? At 50/... (read more)
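For readers who want to check the 25% figure, here is a minimal sketch (mine, not from the original comment) of the standard Kelly formula applied to the 50/50 triple-or-nothing example:

```python
def kelly_fraction(p: float, b: float) -> float:
    """Optimal fraction of bankroll to stake, for win probability p and net odds b.

    Standard Kelly: f* = p - (1 - p) / b.
    """
    return p - (1 - p) / b

# Triple-or-nothing: a win returns 3x the stake, i.e. a net profit of 2x the stake.
print(kelly_fraction(p=0.5, b=2.0))  # 0.25 -> stake 25% of the bankroll
```

This treats the whole EA portfolio as the bankroll and log-wealth as the objective; the answer changes if either assumption does.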

Bryan Caplan on EA groups

People like to hear nice things about themselves from prominent people, and Bryan is non-EA enough to make it feel not entirely self-congratulatory. 

Is there a market for products mixing plant-based and animal protein? Is advocating for "selective omnivores" / reducitarianism / mixed diets neglected - with regards to animal welfare?

A while back I looked into using lard and/or bacon in otherwise vegan cooking. The idea being that you could use a fairly small amount of animal product to great gastronomical effect. One way to think about this is to consider whether you would prefer:
A: Rice and lentils with a tablespoon of bacon
B: Rice with 0.25lb ground beef

I did the math on this, and it works out surprisingly poorly for lard. You're consuming 1/8th as much mass, which sounds good, except that by some measures, producing pig induces 4x as much suffering as producing beef per unit o... (read more)
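A rough sketch of that arithmetic, with illustrative numbers (the mass and suffering-per-unit figures below are my assumptions for the example, not the ones from the original calculation):

```python
# Illustrative inputs (assumptions, not the original post's figures):
bacon_g = 14                 # ~1 tablespoon of bacon/lard
beef_g = 113                 # ~0.25 lb of ground beef
pig_suffering_per_g = 4.0    # "by some measures", relative to beef = 1.0
beef_suffering_per_g = 1.0

print(beef_g / bacon_g)  # ~8: the "1/8th as much mass" part

bacon_total = bacon_g * pig_suffering_per_g   # ~56 suffering-units
beef_total = beef_g * beef_suffering_per_g    # ~113 suffering-units
print(bacon_total / beef_total)  # ~0.5: only ~2x better, not 8x, under these numbers
```

Whether the lard option still comes out ahead depends heavily on which suffering-per-unit measure you plug in.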

Bayesian Mindset

The tension between overconfidence and rigorous thinking is overrated:

Swisher: Do you take criticism to heart correctly?

Elon: Yes.

Swisher: Give me an example of something if you could.

Elon: How do you think rockets get to orbit?

Swisher: That’s a fair point.

Elon: Not easily. Physics is very demanding. If you get it wrong, the rocket will blow up. 
Cars are very demanding. If you get it wrong, a car won’t work. Truth in engineering and science is extremely important.

Swisher: Right. And therefore?

Elon: I have a strong interest in the truth.

Source and prev... (read more)

World's First Octopus Farm - Linkpost

Okay sorry, maybe I'm having a stroke and don't understand. The original phrasing and new phrasing look identical to me.

Lumpyproletariat (1mo): Oh, I'm sorry for being unclear! The second phrasing emphasizes different words ("as" and "adult human") in a way I thought made the meaning of the original post clearer.
World's First Octopus Farm - Linkpost

Oh wait, did you already edit the original comment? If not I might have misread it. 

Lumpyproletariat (1mo): I haven't edited the original comment.
World's First Octopus Farm - Linkpost

I agree that it's pretty likely octopi are morally relevant, though we should distinguish between "30% likelihood of moral relevance" and "moral weight relative to a human".

Lumpyproletariat (1mo): Do you think the initial post would have read better as: "I think that an octopus is ~30% likely to be as morally relevant as an adult human (with wide error bars, I don't know as much about the invertebrates as I'd like to), so this is pretty horrifying to me."?
World's First Octopus Farm - Linkpost

I don't have anything substantive to add, but this is really really sad to hear. Thanks for sharing.

Bayesian Mindset

The wrong tool for many.... Some people accomplish a lot of good by being overconfident.

But Holden, rationalists should win. If you can do good by being overconfident, then bayesian habits can and should endorse overconfidence.

Since "The Bayesian Mindset" broadly construed is all about calibrating confidence, that might sound like a contradiction, but it shouldn't. Overconfidence is an attitude, not an epistemic state.

Holden Karnofsky (25d): It might be true that the right expected utility calculation would endorse being overconfident, but "Bayesian mindset" isn't about behaving like a theoretically ideal utility maximizer - it's about actually writing down probabilities and values and taking action based on those. I think trying to actually make decisions this way is a very awkward fit with an overconfident attitude: even if the equation you write down says you'll do best by feeling overconfident, that might be tough in practice.

bayesian habits can and should endorse overconfidence

I disagree, Bayesian habits would lead one to the self-fulfilling prophecy point.

A Case for Improving Global Equity as Radical Longtermism

~50% of Open Phil spending is on global health, animal welfare, criminal justice reform, and other "short-termist" and egalitarian causes.

This is their recent writeup on one piece of how they think about disbursing funds now vs later https://www.openphilanthropy.org/blog/2021-allocation-givewell-top-charities-why-we-re-giving-more-going-forward

EA megaprojects continued

This perspective strikes me as extremely low agentiness.

Donors aren't this wildly unreachable class of people, they read EA forum, they have public emails, etc. Anyone, including you, can take one of these ideas, scope it out more rigorously, and write up a report. It's nobody's job right now, but it could be yours.

John G. Halstead (2mo): Haha, that's fair! There is of course a tragedy-of-the-commons risk here though - of people discussing these ideas and it not being anyone's job to make them happen.
What are some success stories of grantmakers beating the wider EA community?

Sure, but outside of OpenPhil, GiveWell accounts for the vast majority of EA spending, right?

Not a grant-making organization, but as another example, the Rethink Priorities report on Charter Cities seemed fairly "traditional EA" style analysis.

Neel Nanda (2mo): Sure. But I think the story there was that Open Phil intentionally split off to pursue this much more aggressive approach, and GiveWell is more traditional charity focused/requires high standards of evidence. And I think having prominent orgs doing each strategy is actually pretty great? They just fit into different niches.
What are some success stories of grantmakers beating the wider EA community?

There's a list of winners here, but I'm not sure how you would judge counterfactual impact. With a lot of these, it's difficult to demonstrate that the grantee would have been unable to do their work without the grant.

At the very least, I think Alexey was fairly poor when he received the grant and would have had to get a day job otherwise.

What are some success stories of grantmakers beating the wider EA community?

I think the framing of good grantmaking as "spotting great opportunities early" is precisely how EA gets beat.

Fast Grants seems to have been hugely impactful for a fairly small amount of money; the trick is that the grantees weren't even asking, there was no institution to give to, and no cost-effectiveness estimate to run. It's a somewhat more entrepreneurial approach to grantmaking. It's not that EA thought it wasn't very promising, it's that EA didn't even see the opportunity.

I think it's worth noting that a ton of OpenPhil's portfolio would score reall... (read more)

I think it's worth noting that a ton of OpenPhil's portfolio would score really poorly along conventional EA metrics. They argue as much in this piece.

To be clear, to the extent your claim is true, giving money to things that ex ante have a lower cost-effectiveness than Givewell top charities + have low information value is more of a strike against Open Phil than it is against the idea of using cost-effectiveness analysis. 

So of course the community collectively gets credit because OpenPhil identifies as EA, but it's worth noting that their "hits based giving" approach diverges substantially from more conventional EA-style (quantitative QALY/cost-effectiveness) analysis, and worth asking what that should mean for the movement more generally.

My impression is that most major EA funding bodies, bar Givewell, are mostly following a hits based giving approach nowadays. Eg EA Funds are pretty explicit about this. I definitely agree with the underlying point about weaknesses of traditional EA methods, but I'm not sure this implies a deep question for the movement, vs a question that's already fairly internalised

The coronavirus Fast Grants were great, but their competitive advantage seems to have been that they were the first (and fastest) people to move in a crisis.

The overall Emergent Ventures idea is interesting and worth exploring (I say, while running a copy of it), but has it had proven cost-effective impact yet? I haven't been following the people involved but I don't remember MR formally following up.

I think Fast Grants may not be great on a longtermist worldview (though it might still be good in terms of capacity-building, hmm), and there are few competent EA grantmakers with a neartermist, human-centric worldview.

Liberty in North Korea, quick cost-effectiveness estimate

Saying "I'd rather die than live like that" is distinct from "this is worse than non-existence."

Can you clarify?

Even the implication that moving a NK person to SK is better than saving 10 SK lives is sort of implausible - for both NKs and SKs alike.

I don't know what they would find implausible. To me it seems plausible.

Liberty in North Korea, quick cost-effectiveness estimate

I believe NK people would likely disagree with this conclusion, even if they were not being coerced to do so.

I don't have good intuitions on this, but it doesn't seem absurd to me.

Unrelated to NK, many people suffer immensely from terminal illnesses, but we still deny them the right to assisted suicide. For very good reasons, we have extremely strong biases against actively killing people, even when their lives are clearly net negative.

So yes, I think it's plausible that many humans living in extreme poverty or under totalitarian regimes are experiencing e... (read more)

Ramiro (2mo): I tend to agree that there are lives (human or not) not worth living, but my point is that it's very difficult to consistently identify them by using only my own preference ordering. Saying "I'd rather die than live like that" is distinct from "this is worse than non-existence." (I'm assuming we're not taking into account externalities and opportunity costs. An adult male lion's life seems pretty comfortable and positive, but it entails huge costs for other animals.) It's even harder if you have to take into account the perspectives of the interested parties. For instance, in the example we're discussing, SK people could also complain that your utility function implied that preventing one NK birth is equal to saving 10 SK lives. Even the implication that moving a NK person to SK is better than saving 10 SK lives is sort of implausible - for both NKs and SKs alike.
Why hasn’t EA found agreement on patient and urgent longtermism yet?

EA has consensus on shockingly few big questions. I would argue that not coming to widespread agreement is the norm for this community.

Think about:

  • neartermism vs. longtermism
  • GiveWell-style CEAs vs. Open Phil-style explicitly non-transparent hits-based giving
  • Total Utilitarianism vs. Suffering-focused Ethics
  • Priors on the hinge-of-history hypothesis
  • Moral Realism

These are all incredibly important and central to a lot of EA work, but as far as I've seen, there isn't strong consensus.

I would describe the working solution as some combination of:

  • Pursu
... (read more)
A Red-Team Against the Impact of Small Donations

I think I see the confusion.

No, I meant an intervention that could produce 10x ROI on $1M looked better than an intervention that could produce 5x ROI on $1B, and now the opposite is true (or should be).
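To make the comparison concrete, here is a toy sketch using the hypothetical numbers from this thread (not real cost-effectiveness estimates):

```python
# Benefit-to-cost ratio vs. total impact, per the thread's hypothetical figures.
scenarios = {
    "early EA: 10x on $1M": 1e6 * 10,   # 1e7 total impact, higher ratio per dollar
    "now: 5x on $1B": 1e9 * 5,          # 5e9 total impact, lower ratio per dollar
}
for name, total in scenarios.items():
    print(name, "->", total)
# The second scenario has a worse ratio but vastly more total impact,
# which is the sense in which it now "looks better".
```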

A Red-Team Against the Impact of Small Donations

Uhh, I'm not sure if I'm misunderstanding or you are. My original point in the post was supposed to be that the current scenario is indeed better.

Denkenberger (2mo): Ok, so we agree that having $1 billion is better despite diminishing returns. So I still don't understand this statement: Are you saying that in 2011, we would have preferred $1M over $1B? Or does "look better" just refer to the benefit-to-cost ratio?
How should Effective Altruists think about Leftist Ethics?

I sort of expect the young college EAs to be more leftist, and expect them to be more prominent in the next few years. Though that could be wrong, maybe college EAs are heavily selected for not being already committed to leftist causes.

I don't think I'm the best person to ask haha. I basically expect EAs to be mostly Grey Tribe, pretty democratic, but with some libertarian influences, and generally just not that interested in politics. There's probably better data on this somewhere, or at least the EA-related SlateStarCodex reader survey.

Misha_Yagudin (2mo): This is fairly aligned with my take but I think EAs are more blue than grey and more left than you might be implying. (Ah, by you I meant Stefan, he does/did a lot of empirical psychological/political research into relevant topics.)
How should Effective Altruists think about Leftist Ethics?

Okay, as I understand the discussion so far:

  • The RP authors said they were concerned about PR risk from a leftist critique
  • I wrote this post, explaining how I think those concerns could more productively be addressed
  • You asked, why I'm focusing on Leftist Ethics in particular
  • I replied, because I haven't seen authors cite concerns about PR risk stemming from other kinds of critique

That's all my comment was meant to illustrate, I think I pretty much agree with your initial comment.

Stefan_Schubert (2mo): Ah, I see. Thanks!
How should Effective Altruists think about Leftist Ethics?

As I understand your comment, you think the structure of the report is something like:

  1. Here's our main model
  2. Here are its implications
  3. By the way, here's something else to note that isn't included in the formal analysis

That's not how I interpret the report's framing. I read it more as:

  1. Here's our main model focused on direct benefits
  2. There are other direct benefits, such as Charter Cities as Laboratories of Governance
  3. Those indirect benefits might outweigh the direct ones, and might make Charter Cities attractive from a hits-based perspective
  4. One co
... (read more)
A Red-Team Against the Impact of Small Donations

Yeah, that's a good question. It's underspecified, and depends on what your baseline is.

We might say "for $1 donated, how much can we increase consumption". Or "for $1 donated, how much utility do we create?" The point isn't really that it's 10x or 5x, just that one opportunity is roughly 2x better than the other.

https://www.openphilanthropy.org/blog/givewells-top-charities-are-increasingly-hard-beat

So if we are giving to, e.g., encourage policies that increase incomes for average Americans, we need to increase them by $100 for every $1 we spend to get a

... (read more)
Denkenberger (2mo): So it's like a benefit-to-cost ratio. So I can see that with diminishing returns to more money, the benefit-to-cost ratio could be halved. So with $1 million in the early days of EA, we could have $10 million of impact. But now that we have $1 billion, we can have $5 billion of impact. It seems like the latter scenario is still much better. Am I missing something?
How should Effective Altruists think about Leftist Ethics?

Thanks! Really appreciate getting a reply from you, and thanks for clarifying how you meant this passage to be understood.

I agree that you don't claim the PR risks should disqualify charter cities, but you do cite it as a concern right? I think part of my confusion stems from the distinction between "X is a concern we're noting" and "X is a parameter in the cost-effectiveness model", and from trying to understand the relative importance of the various qualitative and quantitative arguments made throughout.

I.e., one way of interpreting your report would be:

... (read more)
Jason Schukraft (2mo): The distinction is largely pragmatic. Charter cities, like many complex interventions, are hard to model quantitatively. For the report, we replicated, adjusted, and extended a quantitative model that Charter Cities Institute originally proposed [https://www.chartercitiesinstitute.org/post/case-for-charter-cities-effective-altruism]. If that's your primary theory of change for charter cities, it seems like the numbers don't quite work out. But there are many other possible theories of change, and we would love to see charter city advocates spend some time turning those theories of change into quantitative models. I think PR risks are relevant to most theories of change that involve charter cities, but they are certainly not my main concern.
How should Effective Altruists think about Leftist Ethics?

Yes that's true. Though I have not read any EA report that includes a paragraph of the flavor "Libertarians are worried about X, we have no opinion on whether or not X is true, but it creates substantial PR-risk."

That might be because libertarians are less inclined to drum up big PR-scandals, but it's also because EAs tend to be somewhat sympathetic to libertarianism already.

My sense is that people mostly ignore virtue ethics, though maybe Open Phil thinks about them as part of their "worldview diversification" approach. In that case, I think it would be u... (read more)

Stefan_Schubert (2mo): I'm not sure I understand your reasoning. I thought you were saying that we should focus on whether ethical theories are true (or have some chance of being true), and not so much on the PR-risk? And if so, it doesn't seem to matter that Libertarians tend to have fewer complaints (which may lead to bad PR). Fwiw libertarianism and virtue ethics were just two examples. My point is that there's no reason to single out Leftist Ethics among the many potential alternatives to utilitarianism.
Is it no longer hard to get a direct work job?

EA Funds is also just way bigger than it used to be https://funds.effectivealtruism.org/stats/overview

This dashboard only gives payout amounts, so I'm not sure what's happened to # of grants or acceptance rate, but the huge increase in sheer cumulative donation from last year to this one is encouraging.

Don’t wait – there’s plenty more need and opportunity today

a large-scale study evaluating our program in Kenya found each $1 transferred drove $2.60 in additional spending or income in the surrounding community, with non-recipients benefitting from the cash transfers nearly as much as recipients themselves. **Since 2018, we have asked GiveWell to fully engage with this study and others, but they have opted not to, citing capacity constraints.** [emphasis mine]

This sounded pretty concerning to me, so I looked into it a bit more.

This GiveWell post mentions that they did engage with the study, or at least private... (read more)

I'm not GiveDirectly, but in my view it does make sense for GiveWell to deprioritise doing a more in-depth evaluation of GiveDirectly given resource constraints. However, when GiveWell repeatedly says in current research that certain interventions are "5-8x cash", I think it would be helpful for them to make it more clear that it might be only "2-4x cash" - they just haven't had the time to re-evaluate the cash

Don’t wait – there’s plenty more need and opportunity today

I want to agree with your points on delegating as much decision-making as possible directly to the affected populations, but my sense is that this is something GiveWell already thinks very seriously about, and has deeply considered.

For example, I personally felt very persuaded by some of Alex Berger's comments explaining that the advantage of buying bed-nets over direct transfers is that many of the beneficiaries are children, who wouldn't be eligible for GiveDirectly, and that ~50% of the benefits are from the positive externalities of killing mosquitoes, so people ... (read more)

Don’t wait – there’s plenty more need and opportunity today

We applaud the work they did with IDinsight to understand better preferences of potential aid recipients, but the scale and scope of this survey doesn’t go nearly far enough in correcting the massive imbalances in power and lived experience that exist in their work and in philanthropy in general.

Was happy to see you link to this. I agree the IDinsight surveys are simultaneously super useful and nowhere near enough.

My own sense is that more work in the vein of surveying people in extreme poverty to better calibrate moral weights would eventually alleviat... (read more)

A Red-Team Against the Impact of Small Donations

Agreed that my arguments don't apply to donations to GiveDirectly, it's just that they're 5-10x less effective than top GiveWell charities.

I think that part of my arguments don't apply to other GiveWell charities, but the general concern still does. If AMF (or whoever) has funding capacity, why shouldn't I just count on GiveWell to fill it?

banx (2mo): For GiveWell and its top charities, excluding GiveDirectly, I think a lot depends on whether you expect GiveWell to have more RFMF than funds anytime in the near future. The obvious question is why wouldn't OpenPhil fill in the gaps. Maybe if GiveWell's RFMF expands enough then OpenPhil won't want to spend that much on GiveWell-level interventions? GiveWell gives some estimates here (Rollover Funding FAQ | GiveWell [https://www.givewell.org/rollover-funds#What_are_GiveWells_long-term_forecasts_for_growth]) saying they expect to have capacity to spend down their funds in 2023, but they admit they're conservative on the funding side and ambitious on the RFMF side. If GiveWell will actually be funding constrained within a few years, I feel pretty good about donating to them, effectively letting them hold the money in OpenPhil investments until they identify spending opportunities at the 5-10x GD level (especially where donating now yields benefits like matching). If they're ultimately going to get everything 5x+ funded by OpenPhil no matter what, then your argument that I'm donating peanuts to the huge pile of OpenPhil or Moskovitz money seems right to me. GiveWell does say "If we're able to raise funds significantly faster than we've forecast, we will prioritize finding additional RFMF to meet those funds." So it sounds like they're almost-committing to not letting donor money get funged by OpenPhil for more than a few years.
Khorton (2mo): I have also been thinking about whether GiveWell/other donors will fill the funding needs at AMF and whether I should look for something in between AMF and GiveDirectly that needs funding.
A Red-Team Against the Impact of Small Donations

I agree EA is really good at funding weird things, but every in-group has something they consider weird. A better way of phrasing that might have been "fund things that might create PR risk for OpenPhil".

See this comment from the Rethink Priorities Report on Charter Cities:

Finally, the laboratories of governance model may add to the neocolonialist critique of charter cities. Charter cities are not only risky, they are also controversial. Charter cities are likely to be financed by rich-country investors but built in low-income countries. If rich develope

... (read more)
Linch (2mo): One possible source of confusion here is that EA grantmakers and (in the report) Rethink Priorities tend to think of charter cities (and for that matter, climate change [https://forum.effectivealtruism.org/posts/D499oMCiFiqHT92TT/we-re-rethink-priorities-ask-us-anything?commentId=r8rSbgL7BuhFJz5NK]) as a near-/medium-termist intervention in global health and development, whereas perhaps other EAs or EA-adjacent folks (including yourself?) think of it as a longtermist intervention.

Though in that case, is the upshot that I should donate to EA Funds, or that I should tell EA Funds to refer weird grant applicants to me?

If you're a <$500k/y donor, donate to EA Funds; otherwise tell EA Funds to refer weird grant applications to you (especially if you're neartermist – I don't think we're currently constrained by longtermist/meta donors who are open to weird ideas).

Regarding Charter Cities, I don't think EA Funds would be worried about funding them. However, I haven't yet encountered human-centric (as opposed to animal-inclusive) nearte... (read more)

A Red-Team Against the Impact of Small Donations

But the more you think everyone else is doing that, the more important it is to give now right? Just as an absurd example, say the $46b EA-related funds grows 100% YoY for 10 years, then we wake up in 2031 with $46 trillion. If anything remotely like that is actually true, we'll feel pretty dumb for not giving to CEPI now.
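A quick sanity check of that compounding, as a sketch using the comment's hypothetical growth rate:

```python
funds = 46e9     # ~$46 billion of EA-related funds today (the comment's figure)
growth = 2.0     # hypothetical 100% year-over-year growth
years = 10

print(funds * growth ** years)  # ~4.7e13, i.e. roughly $47 trillion by 2031
```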

Jonas Vollmer (2mo): Yeah, I agree. (Also, I think it's a lot harder / near-impossible to sustain such high returns on a $100b portfolio than on a $1b portfolio.)
A Red-Team Against the Impact of Small Donations

Hey, thanks. That's a good point.

I think it depends partially on how confident you are that Dustin Moskovitz will give away all his money, and how altruistic you are. Moskovitz seems great; I think he's pledged to give away "more than half" his wealth in his lifetime (though I can't currently find a good citation, it might be much higher). My sense is that some other extremely generous billionaires (Gates/Buffett) also made pledges, and it doesn't currently seem like they're on track. Or maybe they do give away all their money, but it's just held by the foundation,... (read more)

Luke Eure (2mo): That's helpful, thank you! I think the mode is more "I'm going to give OpenPhil more money". It only becomes "I'm going to give Dustin more money" if it's true that Dustin adjusts his donations to OpenPhil every year based on how much OpenPhil disburses, such that funging OpenPhil = funging Dustin. But in any case I'd say most EAs are probably optimistic that these organizations and individuals will continue to be altruistic and will continue to have values we agree with. And in any case, I strongly agree that we should be more entrepreneurial.
Despite billions of extra funding, small donors can still have a significant impact

I believe that GiveWell/OpenPhil often try to avoid providing over 50% of a charity's funding to avoid fragility / over-reliance.

Is an upshot of that view that personal small donations are effectively matched 1:1?

I.e. Suppose AMF is 50% funded by GiveWell, when I give AMF $100, I'm allowing GiveWell to give another $100 without exceeding the threshold.

Curious if anyone could corroborate this guess.
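Here is a minimal sketch of the mechanical arithmetic behind that guess, assuming a hypothetical rule where the large funder caps itself at a fixed share of a charity's budget (the replies below note the cap often isn't binding in practice, so the real multiplier is smaller):

```python
def large_funder_room(small_donations: float, cap_share: float = 0.5) -> float:
    """Max the capped funder can add while staying at <= cap_share of the total.

    If the funder gives G and everyone else gives S, the constraint
    G <= cap_share * (G + S) rearranges to G <= S * cap_share / (1 - cap_share).
    """
    return small_donations * cap_share / (1 - cap_share)

print(large_funder_room(100))         # 100.0: at a 50% cap, a $100 donation "unlocks" $100
print(large_funder_room(100, 2 / 3))  # 200.0: at a 66% cap, the mechanical leverage is larger
```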

I believe that GiveWell/OpenPhil often try to avoid providing over 50% of a charity's funding to avoid fragility / over-reliance.

Holden Karnofsky clarified on the 80,000 Hours podcast that Open Phil merely feels nervous about funding >50% of an organization's budget (and explained why), but often does fund >50% anyway.

Is an upshot of that view that personal small donations are effectively matched 1:1?

Holden thinks that there is some multiplier there, but it's less than 1:1:

And I do think there is some kind of multiplier for people donating t

... (read more)
Benjamin_Todd (2mo): I think this dynamic has sometimes applied in the past. However, Open Philanthropy are now often providing 66%, and sometimes 100%, so I didn't want to mention this as a significant benefit. There might still be some leverage in some cases, but less than 1:1. Overall, I think a clearer way to think about this is in terms of the value of having a diversified donor base, which I mention in the final section.
Our Criminal Justice Reform Program Is Now an Independent Organization: Just Impact

This is great to see, a huge congratulations to everyone involved!

Side note: Sorry for a totally inane nitpick, but I was curious about the phrasing in your opening line:

Mass incarceration in America has devastated communities, particularly communities of color: 1 in 2 Americans has a family member who’s been incarcerated, 1 in 4 women in America have a loved one in jail or prison, and millions of children have a parent in prison.

From a glance at Wikipedia, US incarceration rates are 7.5x higher for males, or within Black adults, 16.7x higher for males... (read more)

It also seems like a very high fraction! According to a quick google, 1.8m/330m ~=  0.5% of the US population is in prison or jail. Typically when people say 'loved one[s]' they mean close relatives; presumably that is generally fewer than 50 people. So even if they were non-overlapping (which seems unlikely as criminality/incarceration runs in families) I'd expect this to apply to fewer than 1/4 women.
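A back-of-the-envelope version of that bound, using the comment's rough figures (these are illustrative inputs, not official statistics):

```python
incarcerated = 1.8e6    # rough number of people in US prison or jail
population = 330e6      # rough US population
loved_ones_each = 50    # generous upper bound on "loved ones" per person

share_incarcerated = incarcerated / population
# Upper bound, assuming no two people share an incarcerated loved one:
upper_bound = min(1.0, share_incarcerated * loved_ones_each)

print(f"{share_incarcerated:.2%}")   # ~0.55%
print(f"{upper_bound:.0%}")          # ~27%: roughly the claimed "1 in 4", but only as a no-overlap ceiling
```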

I tried to track down the source of this stat. It appears it might (?) come from Hedwig (2015)'s analysis of data from 2006, which in turn attributes i... (read more)

evelynciara (2mo): Haha, I didn't write this! I don't know why they emphasized that either.
What's the GiveDirectly of longtermism & existential risk?

Here's Will MacAskill at EAG 2020:

I’ve started to view [working on climate change] as the GiveDirectly of longtermist interventions. It's a fairly safe option.

Command-f for the full context on this.

Larks (2mo): The climate change scenarios that EAs are most worried about are tail-risks of extreme warming, in comparison to GiveDirectly's effects which seem slightly positive in most worlds. And while the best climate change interventions might be robustly not-bad, that's not true for the entire space. Given the relatively modest damage in the median forecasts (e.g. 10% counterfactual GDP, greatly outweighed by economic growth) many proposals, like banning all air travel, or anti-natalism, would do far more harm than good. Will suggests that climate change policies are robustly good for the very long term growth rate (not just level), but I don't understand why - virtually all very long-term growth will not take place on this planet.
HaukeHillebrandt (2mo): "[Climate change interventions are] just so robustly good, especially when it comes to what Founders Pledge typically champions funding the most: clean tech. Renewables, super hot rock geothermal, and other sorts of clean energy technologies are really good in a lot of worlds, over the very long term — and we have very good evidence to think that. A lot of the other stuff we're doing is much more speculative. So I've started to view [working on climate change] as the GiveDirectly of longtermist interventions. It's a fairly safe option." But then this might be a bit outdated now (see Good news on climate change [https://forum.effectivealtruism.org/posts/ckPSrWeghc4gNsShK/good-news-on-climate-change]).
What does the growth of EA mean for our priorities and level of ambition?

Basically, funders are holding their bar significantly higher than GiveDirectly. And that’s because they believe that by waiting, we’ll be able to find and also create new opportunities that are significantly more cost effective than GiveDirectly, and therefore over the long term have a much bigger impact. So I’d say the kind of current bar of funding within GiveDirectly is more around the level of Against Malaria Foundation, which GiveWell estimates is 15 times more cost effective than GiveDirectly. So generally, charities that are around that level of c

... (read more)
Benjamin_Todd (2mo): Hey, that seems like I mis-spoke in the talk (or there's a typo in the transcript). I think it should be "current bar of funding with global development". I think in general new charities need to offer some combination of the potential for higher or similar cost-effectiveness of AMF and scalability. Exactly how to weigh those two is a difficult question.
What does the growth of EA mean for our priorities and level of ambition?

Sorry, really silly nit, but curious: are the 80k transcripts auto-generated or manually transcribed?

This one felt a bit harder to parse than the typical 80k podcast transcripts, I think because you left in pretty much all of the "ok"/"so"/"yeah" filler words.

Or it might just be because it's a public talk instead of an interview? Though I would actually expect the latter to be more conversational and less prepared.

Benjamin_Todd (2mo): Normally with the podcasts we cut the filler words in the audio. This audio was unedited so ended up with more filler than normal. We've just done a round of edits to reduce the filler words.
[Linkpost] Apply For An ACX Grant

I agree that s-risks are highly neglected relative to their importance, but are they neglected by existing sources of funding? I'm genuinely asking because I'm not sure. The question is roughly:

  1. Are they currently funded by any large EA donors?
  2. Is funding a bottleneck, such that more funding would result in better results?
Question Mark (2mo): The Center for Reducing Suffering is definitely underfunded. To quote them directly [https://centerforreducingsuffering.org/2020-fundraiser/]: "As a small, early-stage organisation, we currently operate on a very limited budget; in fact, we only recently started paying researchers at all. The marginal benefit of additional funding is therefore particularly large: we have much room for funding, and funding at this early stage is critical for enabling CRS to get properly off the ground. The same amount makes a much bigger difference at this stage, compared to a more established or less funding-constrained organisation." The Center on Long-Term Risk has significantly more funding. CLR has an annual transparency report [https://longtermrisk.org/transparency] where you can see their financial information. Open Philanthropy [https://www.openphilanthropy.org/giving/grants/effective-altruism-foundation-research-operations] also recommended a $1 million grant to the Effective Altruism Foundation, the parent organization of CLR [https://ea-foundation.org/projects/]. More funding for S-risk research could potentially result in these organizations being able to hire more people and acquire more top talent. In the case of the Center for Reducing Suffering, as mentioned above, they are an early stage organization with a lot of room for funding.
How many people should get self-study grants and how can we find them?

That's true, but feels less deadweight to me. You have fewer friends, but that results in more time. You move out of one town, but into another with new opportunities.

How many people should get self-study grants and how can we find them?

This is an important point. You want some barrier to entry, while also minimizing deadweight loss from signaling / credentialing. So "you can join EA and get funding, but only if you complete a bunch of arbitrary tasks" is bad, but "you can join EA and get funding, but only if you move to this town" is pretty good!

Of course it would be nice to have an EA Hotel equivalent that is more amenable to people with visa/family/health restrictions (Especially now that the UK is not part of the EU and has Covid-related entry requirements), but I think it's a fairly good model for unblocking potential talent without throwing money around.

toonalfrink (3mo): There is also significant loss caused by moving to a different town, i.e. loss of important connections with friends and family at home, but we're tempted not to count those.
How Do We Make Nuclear Energy Tractable?

Epistemic status: loose impressions and wild guesses.

Note that this is not true across the globe. See this Wikipedia list, and command-F "202" (for 202X).

Some are even plant closures (mostly US, Canada, Germany), but China has a ton of new plants. Other countries with new plants and planned plants include Finland, Egypt, France, Poland, Russia, Turkey, even the US!

My loose impression is that some recent excitement is driven by Small Modular Reactors, and of course, climate change.

This chart is useful, showing that nuclear as a share of all energy plateaued... (read more)

Evan_Gaensbauer (3mo): I'm aware of SMRs but most of the info I'm exposed to about them is mainstream media, which tends to be more sensational and focuses on how exciting they are instead of on data. Mainstream science reporting is better but such articles commonly rehash the basics, like simply describing what SMRs are, instead of getting into details like projected development timelines. I can check whether anyone in the forecasting community, either in or outside of EA, is asking those questions. Others in EA who've studied or worked in a relevant field may also know more or know someone who does. Please let me know if you otherwise know of sources providing more specific information on the future of SMRs. In the tweet you've linked to, Patrick Collison's comment on the subject frames the matter as though the dramatic increase in regulation that has slowed down the construction of new nuclear power plants is irrational in general. That may be the case but it was only after the 1950s that problems that nuclear power plants may pose if not managed properly became apparent. It's because of nuclear meltdowns that more regulations were introduced. It shouldn't be surprising if constructing new nuclear power plants now takes even years longer than it took to construct them decades ago. They should take significantly longer to construct at present for them to be constructed and managed safely in perpetuity. One could argue that at this point it's been an over-correction and now the construction and maintenance of new nuclear power plants is over-regulated. The case for that specific argument must be made on its own, but it wasn't, either by Patrick or the person who posted the original tweet he quote-tweeted/retweeted. I of course appreciate you providing that link to get the point across, and it's not your fault, but the tweet itself is useless. The Foreign Policy article they're quoting is behind a paywall on their website that I don't have access to right now, but I'll try getting access to it.
Evan_Gaensbauer (3mo): (I've got a longer response to the part of your comment comparing the rate of development of nuclear energy in different countries, so I'm posting it as its own comment. I'll respond to the other points you've made in a separate comment.) The primary motivations for plant closures I'm aware of are concerns about health, safety, pollution and potential catastrophe. That's the case in North America after the meltdown at Three Mile Island and also in Japan after the Fukushima Daiichi reactor meltdown. A difference with Germany is that Germany has had an exceptionally strong Green movement, as a social and political movement. That's resulted in Germany shutting down more nuclear power plants over environmental concerns but also a greater proportionate development of renewable energy compared to many other Western countries. One pattern is that the countries where nuclear power plants tend to either be shut down at greater rates or built at lower rates are liberal democracies. It's easy to presume that because liberal-democratic governments are more subject to the pressure(s) of public opinion, (relatively more) authoritarian governments face fewer political hurdles to building nuclear power plants. Yet since China is the country that has built the most nuclear power plants the fastest, I would expect the greater factor is not necessarily that it's authoritarian but that it has a more technocratic government that's able to more easily overcome what would otherwise be political barriers. Egypt, Russia, Turkey and Poland are all countries that are rated as having become more authoritarian over the last several years. Yet the development of nuclear power plants takes as many if not even more years, so the increasing rate of development of nuclear energy in those countries could easily precede their more authoritarian political pivots. All of those other countries are neither building nuclear power plants as fast as China is nor are their governments particularly tech
Why aren't you freaking out about OpenAI? At what point would you start?

That's pretty wild, especially considering getting Holden on the board was a major condition of OpenPhilanthropy's $30,000,000 grant: https://www.openphilanthropy.org/focus/global-catastrophic-risks/potential-risks-advanced-artificial-intelligence/openai-general-support#Details_on_Open_Philanthropy8217s_role

Though it also says the grant was for 3 years, so maybe it shouldn't be surprising that his board seat only lasted that long.

MichaelStJules (3mo): Holden might have agreed to have Helen replace him. She used to work at Open Phil, too, so Holden probably knows her well enough. Open Phil bought a board seat, and it's not weird for them to fill it as they see fit, without having it reserved only for a specific individual.
If I have a strong preference for remote work, should I focus my career on AI or on blockchain?

Hey, would recommend reading a bit more of the 80k materials https://80000hours.org/

Or starting here https://www.effectivealtruism.org/articles/introduction-to-effective-altruism/

Of course you're free to do whatever you want with your career, but the standard EA advice is going to be to follow the 80k recommendations for high impact careers https://80000hours.org/career-reviews/

Liberty in North Korea, quick cost-effectiveness estimate

very speculative

Say you're hit by a car tomorrow and die. An angel comes down, and they don't quite offer you a second chance at life, they just offer you a day of life, with none of your current memories, as an average middle class person in South Korea.

Do you accept? I probably would, I expect the median South Korean to have a net-positive existence.

But here's the catch: you also have to spend a day as an average political dissident in North Korea. Would you take that trade? I definitely would not. I think the disutility of the second scenario far outwei... (read more)

In other words, putting very rough guesses on the utility of each scenario (see the sketch after this list):

  • Middle class in South Korea: 10
  • Muzak and potatoes: 0
  • Political dissident in North Korea: -100
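A toy sketch of how those guesses combine (the numbers are the comment's rough utilities, not real welfare estimates):

```python
# The comment's rough utility guesses (illustrative only):
utilities = {
    "middle class in South Korea": 10,
    "muzak and potatoes": 0,          # the neutral, barely-worth-living baseline
    "political dissident in North Korea": -100,
}

# The angel's offer bundles one day in each of the two Koreas:
bundle = (utilities["middle class in South Korea"]
          + utilities["political dissident in North Korea"])
print(bundle)  # -90: far below the neutral point, so the trade is refused
```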

I tend to agree that helping NK refugees prevents suffering, and that we should really have some back-of-the-envelope calculation to measure it. (Usually, when I assess the value of helping a refugee, I consider HDI differences between countries as a proxy for the increase in wellbeing; but we can't do this for NK because we can't rely on what they publish - and even if we could I don't think... (read more)

Can EA leverage an Elon-vs-world-hunger news cycle?

Agreed. The proper approach is probably to develop a playbook for rapidly evaluating whether or not a news cycle is worth thinking about at all, and then executing on a specific pre-determined plan when it is.

Annual donation rituals?

Ohh mugs are a great idea! I just found (ACE top charity) the Humane League's gift shop: https://thehumaneleague.org/shop

Water bottle and mug are pretty compelling.

Annual donation rituals?

One example idea might be a specific family dinner every year where we all research and discuss where we want to give and what the impact might be.

I tried this last year, spent several hours with a friend doing research... and then sighed and gave it all to GiveWell charities as usual.

FWIW, I specifically don't discuss giving with any other friends. Most of them are not EAs, and giving away a significant chunk of money would likely be alienating (for financial reasons), or scrutiny inducing ("aren't you just spending a lot to signal how good you are?"),... (read more)

jared_m (3mo): Cute animal cards never hurt. We bought a mug last winter with the logo of the cause we gave the most to in 2020. It has been a nice reminder of that giving, and well worth the price. (Proceeds from the mug also went to that cause.) A few other "end-of-year ritual" thoughts:

  1. Outdoor rituals, paired with giving: holiday walks [https://www.youtube.com/watch?v=DBiJDmkpZQc] (or kicking a soccer ball around) have made our holidays more memorable. We live near a beach, so giving to environmental and animal welfare charities over coffee feels more meaningful right after watching chipper coastal birds racing around the surf.
  2. Movie or story rituals, paired with giving: without debating her politics, I think there's a potentially nice model in Elizabeth Warren's annual NYE viewing of Casablanca with her husband. As she has shared several times [https://twitter.com/ewarren/status/1079910775618265088?lang=en]: "It's a story about love & sacrifice, & also how people survive & fight back. Every time we watch it on New Year's, it gives me hope." Stories or movies like Casablanca can give one a sense of a) ennobling solidarity, b) perspective that making effective donations and reducing one's bank account is both great to do, and an easier lift for us than (say) fighting totalitarianism has been for many