All of AppliedDivinityStudies's Comments + Replies

Most* small probabilities aren't pascalian

The problem (of worrying that you're being silly and getting mugged) doesn't arise merely when probabilities are tiny; it arises when probabilities are tiny and you're highly uncertain about them. We have pretty good bounds in the three areas you listed, but I do not have good bounds on, say, the odds that "spending the next year of my life on AI Safety research" will prevent x-risk.

In the former cases, we have base rates and many trials. In the latter case, I'm just doing a very rough Fermi estimate. Say I have 5 parameters with an order of magnitude of uncertainty on each one, ... (read more)
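To make that concrete, here is a minimal Monte Carlo sketch of how five order-of-magnitude uncertainties compound; the log-uniform ranges and parameter count are illustrative assumptions, not numbers from the original comment:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical Fermi estimate: the answer is the product of 5 parameters,
# each uncertain by an order of magnitude (modeled as log-uniform across
# one decade centered on the point estimate).
n_params, n_samples = 5, 100_000
log10_samples = rng.uniform(-0.5, 0.5, size=(n_samples, n_params))
estimates = 10 ** log10_samples.sum(axis=1)

# The product spans roughly five orders of magnitude, so the tails,
# not the point estimate, dominate any expected-value calculation.
print(np.percentile(estimates, [5, 50, 95]))
```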

1Drew Housman18d
I liked that post a lot, thanks for sharing. I wholeheartedly agree with your first footnote — "The moral importance of time-dilation turns out to be an absolutely fascinating question" It's something I want to learn more about.
4Linch20d
Perhaps a pedantic point, but re: I think most likely your subjective experience of time was not dilated as much as your subjective experience of the experience of time. (E.g., if you have a dream where a year "went by", it is not the case that subjective experience of time actually went through an entire year, in the sense of massive acceleration of clock speeds). Whether or not you care about this distinction is of course another matter. I mention this because "people cannot be wrong about their own subjective experiences" used to be one of the strongest beliefs I had, but now I think this is pretty wrong, and I consider it a moderately important example of philosophical progress in myself.
Punching Utilitarians in the Face

Under mainstream conceptions of physics (as I loosely understand them), the number of possible lives in the future is unfathomably large, but not actually infinite.

3Lukas_Finnveden1mo
I'm not saying it's infinite, just that (even assuming it's finite) I assign non-zero probability to different possible finite numbers in a fashion such that the expected value is infinite. (Just like the expected value of an infinite St. Petersburg challenge is infinite, although every outcome has finite size.)
3MichaelStJules1mo
Is the expected number finite, though? If you assign nonzero probability to a distribution with infinite EV, your overall EV will be infinite. If you can't give a hard upper bound, i.e. you can't prove that there exists some finite number N, such that the number of possible lives in the future is at most N with probability 1, it seems hard to rule out giving any weight to such distributions with infinite EV (although I am now just invoking Cromwell's rule).
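To make the structure of this exchange explicit (a standard St. Petersburg-style argument, with symbols of my own choosing):

$$\mathbb{E}[\text{St. Petersburg}] = \sum_{n=1}^{\infty} 2^{-n} \cdot 2^{n} = \sum_{n=1}^{\infty} 1 = \infty$$

and if a credence distribution puts any weight $p > 0$ on a distribution $D$ with $\mathbb{E}[D] = \infty$, then $\mathbb{E}[\text{overall}] \geq p \cdot \mathbb{E}[D] = \infty$, even though every individual outcome is finite.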
Punching Utilitarians in the Face

Longtermism does mess with intuitions, but it's also not basing its legitimacy on a case from intuition. In some ways, it's the exact opposite: it seems absurd to think that every single life we see today could be nearly insignificant when compared to the vast future, and yet this is what one line of reasoning tells us.

Punching Utilitarians in the Face

I originally wrote this post for my personal blog and was asked to cross-post here. I stand by the ideas, but I apologize that the tone is a bit out of step with how I would normally write for this forum.

300+ Flashcards to Tackle Pressing World Problems

I read the title and thought this was a really silly approach, but after reading through the list I am fairly surprised how sold I am on the concept. So thanks for putting this together!

Minor nit: One concern I still have is about drilling facts into my head that won't remain true in the future. For example, instead of:
> The average meat consumption per capita in China has grown 15-fold since 1961

 I would prefer:
> Average meat consumption per capita in China grew 15x in the 60 years after 1961

1AndreFerretti1mo
Great feedback on the longevity of flashcards, will apply it, thanks!
Will faster economic growth make us happier? The relevance of the Easterlin Paradox to Progress Studies

This is great, thanks Michael. I wasn't aware of the recent 2022 paper arguing against the Stevenson/Wolfers result. A couple questions:

In this talk (starting around 6:30), Peter Favaloro from Open Phil talks about how they use a utility function that grows logarithmically with income, and how this is informed by Stevenson and Wolfers (2008). If the scaling were substantially less favorable (even in poor countries), that would have some fairly serious implications for their cost-effectiveness analysis. Is this something you've talked to them about?

Se... (read more)

Questions to ask Will MacAskill about 'What We Owe The Future' for 80,000 Hours Podcast (possible new audio intro to longtermism)

For some classes of meta-ethical dilemmas, Moral Uncertainty recommends using variance voting, which requires you to know the mean and variance of each theory's valuations across the options under consideration.

How is this applied in practice? Say I give 95% weight to Total Utilitarianism and 5% weight to Average Utilitarianism, and I'm evaluating an intervention that's valued differently by each theory. Do I literally attempt to calculate values for variance? Or am I just reasoning abstractly about possible values?
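For what the literal calculation might look like, here is a minimal sketch of variance voting with made-up valuations (the numbers, and the menu of interventions over which variance is computed, are assumptions for illustration):

```python
import numpy as np

# Rows: moral theories; columns: interventions being compared.
# All raw values below are invented for illustration.
values = np.array([
    [100.0, 40.0, 5.0],  # Total Utilitarianism's valuations
    [2.0,   8.0,  6.0],  # Average Utilitarianism's valuations
])
credences = np.array([0.95, 0.05])

# Variance voting: rescale each theory's valuations to zero mean and unit
# variance across the option set, so no theory dominates merely by using
# bigger numbers, then take the credence-weighted sum.
z = (values - values.mean(axis=1, keepdims=True)) / values.std(axis=1, keepdims=True)
scores = credences @ z
print(scores)           # normalized, credence-weighted score per intervention
print(scores.argmax())  # index of the recommended intervention
```

Note that the variance is only defined relative to the option set you normalize over, so reasoning abstractly about possible values still requires pinning that menu down.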

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

> If this dynamic leads you to put less “trust” in our decisions, I think that’s a good thing!


I will push back a bit on this as well. I think it's very healthy for the community to be skeptical of Open Philanthropy's reasoning ability, and to be vigilant about trying to point out errors.

On the other hand, I don't think it's great if we have a dynamic where the community is skeptical of Open Philanthropy's intentions. Basically, there's a big difference between "OP made a mistake because they over/underrated X" and "OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants."

 

> Basically, there's a big difference between "OP made a mistake because they over/underrated X" and "OP made a mistake because they were politically or PR motivated and intentionally made sub-optimal grants."

The synthesis position might be something like "some subset of OP made a mistake because they were subconsciously politically or PR motivated and unintentionally made sub-optimal grants."

I think this is a reasonable candidate hypothesis, and should not be that much of a surprise, all things considered. We're all human.

I agree that there's a difference in the social dynamics of being vigilant about mistakes vs being vigilant about intentions. I agree with your point in the sense that worlds in which the community is skeptical of OP's intentions tend to have worse social dynamics than worlds in which it isn't.
But you seem to be implying something beyond that; that people should be less skeptical of OP's intentions given the evidence we see right now, and/or that people should be more hesitant to express that skepticism. Am I understanding you correctly, and what's your re... (read more)

2Guy Raveh2mo
I agree with you, but unfortunately I think it's inevitable that people doubt the intentions of any privately-managed organisation. This is perhaps an argument for more democratic funding (though one could counter-argue about the motivations of democratically chosen representatives).
A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

In general, WSJ reporting on SF crime has been quite bad. In another article they write:

> Much of this lawlessness can be linked to Proposition 47, a California ballot initiative passed in 2014, under which theft of less than $950 in goods is treated as a nonviolent misdemeanor and rarely prosecuted.

Which is just not true at all. Every state has some threshold, and California's is actually on the "tough on crime" side of the spectrum.

Shellenberger himself is an interesting guy, though not necessarily in a good way.

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

> Conversely, if sentences are reduced more than in the margin, common sense suggests that crime will increase, as observed in, for instance, San Francisco.

 

A bit of a nit since this is in your appendix, but there are serious issues with this reasoning and the linked evidence. Basically, this requires the claims that:
1. San Francisco reduced sentences
2. There was subsequently more crime

1. Shellenberger at the WSJ writes:

> the charging rate for theft by Mr. Boudin’s office declined from 62% in 2019 to 46% in 2021; for petty theft it fell from 58% to

... (read more)
9NunoSempere2mo
Added a note in that sentence of the appendix to point to this comment pending further investigation (which, realistically, I'm not going to do).
4NunoSempere2mo
Oh wow, I wasn't expecting the guy to just lie about this.
Announcing a contest: EA Criticism and Red Teaming

I can't speak for everyone, but will quickly offer my own thoughts as a panelist:
1. Short and/or informally written submissions are fine. I would happily award a tweet thread if it was good enough. But I'm hesitant to say "low effort is fine", because I'm not sure what else that implies.
2. It might sound trite, but I think the point of this contest (or at least the reason I'm excited about it) is to improve EA. So if a submission is totally illegible to EA people, it is unlikely to have that impact. On "style of argument" I'll just point to my own backlog ... (read more)

Announcing a contest: EA Criticism and Red Teaming

The crucial complementary question is "what percentage of people on the panel are neartermists?"

FWIW, I have previously written about animal ethics, interviewed Open Phil's neartermist co-CEO, and am personally donating to neartermist causes.

Open Philanthropy's Cause Exploration Prizes: $120k for written work on global health and wellbeing

Are there any limitations on the kinds of feedback we can/should get before submitting? For example, is it okay to:
- Get feedback from an OpenPhil staff member?
- Publish on the forum, get feedback, and make edits before submitting a final draft?
- Submit an unpublished piece of writing which has previously been reviewed?

If so, should reviewers be listed in order to provide clarity on input? Or omitted to avoid the impression of an "endorsement"?

3ChrisSmith2mo
All of those things are ok. Open Phil staff shouldn't be listed as co-authors since they are not eligible for the prizes. A brief acknowledgement section is welcome if you've had substantial input from others who are not co-authors. If you are submitting an unpublished piece of writing which you've already produced, please make sure it is answering a question that we've put forward and is geared towards the perspective of a funder (see our guidance page [https://www.causeexplorationprizes.com/guidance] for more detail).
Thought experiment: If you had 3 months to turn a stressed and unhappy person into a calm and happy one, what meta approach would you take?

Antidepressants do actually seem to work, and I think it's weird that people forget/neglect this. See Scott's review here and a more recent writeup. Those are both on SSRIs; there is also Wellbutrin (see Robert Wiblin's personal experience with it here) and at least a few other fairly promising pharmacological treatments.

I would also read the relevant Lorien Psych articles and classic SSC posts on depression treatments and anxiety treatments.

Since you asked for the meta-approach: I think the key is to stick with each thing long enough to see if it works, but... (read more)

COVID memorial: 1ppm

Ideas are like investments: you don't just want a well-diversified portfolio, you want to intentionally hedge against other assets. In this view, the best way to develop a scout's mindset for yourself is to read a wide variety of writers, many of whom will be quite dogmatic. The goal shouldn't be to only read other reasonable people, but to read totally unreasonable people across domains and synthesize their claims into something coherent.

As you correctly note, Graeber is a model thinker in a world of incoherent anarchist/Marxist ramblings. I think ou... (read more)

3Gavin4mo
Agree with all of this. I will read every one of his books and treasure his variance. But his mistakes and distortions are not incidental to his positions, and people take him more seriously than they should.
EA should taboo "EA should"

Strongly agree on this. It's been a pet peeve of mine to hear exactly these kinds of phrases. You're right that it's nearly a passive formulation, and frames things in a very low-agentiness way.

At the same time, I think we should recognize the phrasing as a symptom of some underlying feeling of powerlessness. Tabooing the phrase might help, but won't eradicate the condition. E.g.:
- If someone says "EA should consider funding North Korean refugees"
- You or I might respond "You should write up that analysis! You should make that case!"
- But the corresponding... (read more)

4Gus Docker5mo
I don't think we can say yet that Samo's forecasts have been "pretty wrong". As far as I'm aware, he's only made this forecast: "The Ukrainian forces are fighting valiantly and we are in the fog of war. Given my best understanding of the military situation, Russia looks to occupy Kyiv as well as at least 70% of internationally recognized Ukrainian territory by Day 50 of the invasion." https://twitter.com/SamoBurja/status/1499883211748433932 And this forecast has not been resolved yet. I hope he's wrong, of course. I read Scott Alexander as criticising Samo for not making more specific predictions, which is fair. And I think the reason Scott gives Samo a C is because of Samo's February tweets about the Russian military being strong. To my mind, this is bound up with the above forecast. I don't think we can say yet that Samo is wrong. Again, I hope he is. Anyways, I thought Samo provided an interesting perspective about the situation more broadly. Big fan of your blog, by the way :)
What is the new EA question?

I would add that we should be trying to increase the pool of resources. This includes broad outreach like Giving What We Can and the 80k podcast, as well as convincing EAs to be more ambitious, direct outreach to very wealthy people, and so on.

 

Some thoughts on vegetarianism and veganism

It sounds wild, but AFAIK, the cotton gin and maybe some other forms of automation actually made slavery more profitable! 

From Wikipedia:
> Whitney's gin made cotton farming more profitable, so plantation owners expanded their plantations and used more slaves to pick the cotton. Whitney never invented a machine to harvest cotton, it still had to be picked by hand. The invention has thus been identified as an inadvertent contributing factor to the outbreak of the American Civil War.

 

Future-proof ethics

> across the board the ethical trend has been an extension of rights, franchise, and dignity to widening circles of humans

 

I have two objections here.
1) If this is the historical backing for wanting to future-proof ethics, shouldn't we just do the extrapolation from there directly instead of thinking about systematizing ethics? In other words, just extend rights to all humans now and be done with it.
2) The idea that the ethical trend has been a monotonic widening is a bit self-fulfilling, since we no longer consider some agents to be morally importan... (read more)

2splinter6mo
I'm not totally sure what #1 means. But it doesn't seem like an argument against privileging future ethics over today's ethics. I view #2 as very much an argument in favor of privileging future ethics. We don't give moral weight to ghosts and ancestors anymore because we have improved our understanding of the world and no longer view these entities as having consciousness or agency. Insofar as we live in a world that requires tradeoffs, it would be actively immoral to give weight to a ghost's wellbeing when making a moral decision.
Idea: Red-teaming fellowships

One really useful way to execute this would be to bring in more outside non-EA experts in relevant disciplines. So have people in development econ evaluate GiveWell (great example of this here), engage people like Glen Weyl to see how EA could better incorporate market-based thinking and mechanism design, engage hardcore anti-natalist philosophers (if you can find a credible one), engage anti-capitalist theorists skeptical of welfare and billionaire philanthropy, etc.

One specific pet project I'd love to see funded is more EA history. There are plenty of go... (read more)

4cryptograthor4mo
Hi there. This thread was advertised to me by a friend of mine, and I thought I would put a comment somewhere. In the spirit of red teaming, I'm a cryptography engineer with work in multi-party computation, consensus algorithms, and, apologetically, NFTs. I've done previous work with the Near grants board, evaluating technical projects. I also co-maintain the https://zkmesh.substack.com/ monthly newsletter on cryptography. I could have time to contribute toward a red-teaming effort and idea shaping and evaluation, for projects along these lines, if the project comes into existence.
Idea: Red-teaming fellowships

I agree that it's important to ask the meta questions about which pieces of information even have high moral value to begin with. OP gives as an example the moral welfare of shrimp. But who cares? EA puts so little money and effort into this already, on the assumption that they probably are valuable. Even if you demonstrated that they weren't, or forced an update in that direction, the overall amount of funding shifted would be fairly small.

You might worry that all the important questions are already so heavily scrutinized as to bear little low-hanging fru... (read more)

Idea: Red-teaming fellowships

This is a good idea, but I think you might find that there's surprisingly little EA consensus. What's the likelihood that this is the most important century? Should we be funding near-term health treatments for the global poor, or does nothing really matter aside from AI Safety? Is the right ethics utilitarian? Person-affecting? Should you even be a moral realist?

As far as I can tell, EAs (meaning both the general population of uni club attendees and EA Forum readers, alongside the "EA elite" who hold positions of influence at top EA orgs) disagree substant... (read more)

Thanks, those are good points, especially when the focus is on making progress on issues that might be affected by group-think. Relatedly, I also like your idea of getting outside experts to scrutinize EA ideas. I've seen OpenPhil pay for expert feedback on at least one occasion, which seems pretty useful.

We were thinking about writing a question post along something like "Which ideas, assumptions, programmes, interventions, priorities, etc. would you like to see a red-teaming effort for?". What do you think about the idea, and would you add something to th... (read more)

Future-proof ethics

Do you have a stronger argument for why we should want to future-proof ethics? From the perspective of a conservative Christian born hundreds of years ago, maybe today's society is very sinful. What would compel them to adopt an attitude such that it isn't?

Similarly, say in the future we have moral norms that tolerate behavior we currently see as reprehensible. Why would we want to adopt those norms? Should we assume that morality will make monotonic progress, just because we're repulsed by some past moral norms? That doesn't seem to follow. In fact,... (read more)

3Holden Karnofsky4mo
I don't think we should assume future ethics are better than ours, and that's not the intent of the term. I discuss what I was trying to do more here [https://www.cold-takes.com/moral-progress-vs-the-simple-passage-of-time/].
3mic6mo
We don't have to argue about Christians born hundreds of years ago; I know that conservative Christians today also think today's society is very sinful. This example isn't compelling to me, because as an inherently theistic religion, conservative Christianity seems fundamentally flawed to me in its understanding of empirical facts. But we could easily replace conservative Christianity in this example with more secular ancient philosophies, such as Confucianism, Jainism, or Buddhism, abstracting away the components that involve belief in the supernatural. It seems to me that these people would still perceive our society's moral beliefs as in a state of severe moral decline. We see moral progress over time simply because over time, morals have shifted closer to our own. But conversely, people in the past would see morals declining over time. I think we should expect future evolutions in morality to likewise be viewed by present-day people as moral decline. This undercuts much of the intuitive appeal of future-proof ethics, though I believe it is still worthwhile to aspire to.
3splinter6mo
These are good questions, and I think the answer generally is yes, we should be disposed to treating the future's ethics as superior to our own, although we shouldn't be unquestioning about this. The place to start is simply to note the obvious fact that moral standards do shift all the time, often in quite radical ways. So at the very least we ought to assume a stance of skepticism toward any particular moral posture, as we have reason to believe that ethics in general are highly contingent, culture-bound, etc. Then the question becomes whether we have reasons to favor some period's moral stances over any others. There are a variety of reasons we might do so:
1. Knowledge has been increasing monotonically, and in recent years extremely rapidly. Much of this knowledge is scientific, technological, or involves other kinds of expertise, and such knowledge does have a moral valence. E.g., we do not believe in witches anymore.
2. Some of our increasing knowledge is historical and philosophical. The Catholic church did a lot of things in the middle ages that to me seem very bad but seemed to the church at the time morally justified. But I also have access to a lot of historical information about the middle ages, and I can situate the church's actions in a broader story about politics, empire, religious conflict, etc., that undercuts the church's moral claims. Other things being equal, we probably are wise to privilege later time periods over earlier time periods because later time periods saw how things turned out. Nazism seemed like a moral imperative to Nazis, but here in 2022, I know how WWII played out. (Spoiler alert: not well!)
3. The moral changes that have occurred over time are not random, and we can apply meta-ethics to them to try to understand how things have changed. We used to condone slavery and now we abhor it. Is that just happenstance, such that in some alternate history we used to abho
Future-proof ethics

One candidate you don't mention is:

- Extrapolate from past moral progress to make educated guesses about where moral norms will be in the future.

On a somewhat generous interpretation, this is the strategy social justice advocates have been using. You look historically, see that we were wrong about treating women, minorities, etc less worthy of moral consideration, and try to guess which currently subjugated groups will in the future be seen as worthy of equal treatment. This gets you to feeling more concern for trans people, people with different sexual pr... (read more)

2splinter6mo
I think this is well-taken, but we should be cautious about the conclusions we draw from it. It helps to look at a historical analogy. Most people today (I think) consider the 1960s-era civil rights movement to be on the right side of history. We see the racial apartheid system of Jim Crow America as morally repugnant. We see segregated schools and restaurants and buses as morally repugnant. We see flagrant voter suppression as morally repugnant (google "white primaries" if you want to see what flagrant means). And so we see the people who were at the forefront of the civil rights movement as courageous and noble people who took great personal risks to advance a morally righteous cause. Because many of them were.

If you dig deeply into the history of the civil rights movement, though, you will also find a lot of normal human stuff. Infighting. Ideological excess. Extremism. Personal rivalry. Some civil rights organizations of the time were organizationally paralyzed by a very 1960s streak of countercultural anti-authoritarianism that has not aged well. They were often heavily inflected with Marxist revolutionary politics that has not aged well. Many in the movement regarded now revered icons like MLK Jr. as overly cautious establishmentarian sellouts more concerned with their place in history than with social change.

My point is not that the civil rights movement was actually terrible. Nor is it that because the movement was right about school integration, it was also right about the virtues of Maoism. My point is that if you look closely enough, history is always a total goddamned mess. And yet, I still feel pretty comfortable saying that we have made progress on slavery. So yes, I absolutely agree that many contemporary arguments about moral progress and politics will age terribly, and I doubt it will even take very long. Probably in ten years' time, many of the debates of today will look quaint and misguided. But this doesn't mean we should lapse into a total
Research idea: Evaluate the IGM economic experts panel

If you read the expert comments, very often they complain that the question is poorly phrased. It's typically about wording like "would greatly increase" where there's not even an attempt to define "greatly". So if you want to improve the panel or replicate it, that is my #1 recommendation.

...My #2 recommendation is to create a Metaculus market for every IGM question and see how it compares.

8Larks7mo
Additionally, sometimes the question seems to ask about one specific cost or benefit of a policy, and respondents are unsure how to answer if they think that issue is unimportant but disagree/agree for other reasons.
Is EA over-invested in Crypto?

> At what level of payoff is that bet worth it? Let's say the bet is a 50/50 triple-or-nothing bet. So, either EA ends up with half its money, or ends up with double. I'd guess (based on not much) that right now losing 50% of EA's money is more negative than doubling EA's money is positive.


There is an actual correct answer, at least in the abstract. According to the Kelly criterion, on a 50/50 triple-or-nothing bet, you should put down 25% of your bankroll.

Say EA is now at around 50/50 Crypto/non-Crypto: what kind of returns would justify that allocation? At 50/... (read more)
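For the Kelly arithmetic (the standard formula, not anything from the post): with win probability $p$ and net odds $b$, the optimal fraction is

$$f^{*} = p - \frac{1-p}{b} = 0.5 - \frac{0.5}{2} = 0.25,$$

since triple-or-nothing means net odds of $b = 2$. Note that for any 50/50 bet, $f^{*} = 0.5 - 0.5/b$ stays below 50% for every finite $b$, so a 50% allocation can only be Kelly-optimal if the odds of winning are better than even.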

Bryan Caplan on EA groups

People like to hear nice things about themselves from prominent people, and Bryan is non-EA enough to make it feel not entirely self-congratulatory. 

Is there a market for products mixing plant-based and animal protein? Is advocating for "selective omnivores" / reducitarianism / mixed diets neglected - with regards to animal welfare?

A while back I looked into using lard and/or bacon in otherwise vegan cooking. The idea being that you could use a fairly small amount of animal product to great gastronomical effect. One way to think about this is to consider whether you would prefer:
A: Rice and lentils with a tablespoon of bacon
B: Rice with 0.25lb ground beef

I did the math on this, and it works out surprisingly poorly for lard. You're consuming 1/8th as much mass, which sounds good, except that by some measures, producing pig induces 4x as much suffering as producing beef per unit o... (read more)
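Spelling out the arithmetic from the two figures above (and assuming the truncated claim is suffering per unit of mass):

$$\frac{\text{suffering}_{A}}{\text{suffering}_{B}} \approx \frac{1}{8} \times 4 = \frac{1}{2},$$

i.e., option A still causes roughly half the suffering of option B rather than an eighth of it.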

Bayesian Mindset

The tension between overconfidence and rigorous thinking is overrated:

Swisher: Do you take criticism to heart correctly?

Elon: Yes.

Swisher: Give me an example of something if you could.

Elon: How do you think rockets get to orbit?

Swisher: That’s a fair point.

Elon: Not easily. Physics is very demanding. If you get it wrong, the rocket will blow up. 
Cars are very demanding. If you get it wrong, a car won’t work. Truth in engineering and science is extremely important.

Swisher: Right. And therefore?

Elon: I have a strong interest in the truth.

Source and prev... (read more)

World's First Octopus Farm - Linkpost

Okay sorry, maybe I'm having a stroke and don't understand. The original phrasing and new phrasing look identical to me.

1Lumpyproletariat8mo
Oh, I'm sorry for being unclear! The second phrasing emphasizes different words (*as* and *adult human*) in a way I thought made the meaning of the original post clearer.
World's First Octopus Farm - Linkpost

Oh wait, did you already edit the original comment? If not I might have misread it. 

1Lumpyproletariat8mo
I haven't edited the original comment.
World's First Octopus Farm - Linkpost

I agree that it's pretty likely octopi are morally relevant, though we should distinguish between "30% likelihood of moral relevance" and "moral weight relative to a human".

1Lumpyproletariat8mo
Do you think the initial post would have read better as: "I think that an octopus is ~30% likely to be as morally relevant as an adult human (with wide error bars, I don't know as much about the invertebrates as I'd like to), so this is pretty horrifying to me."?
World's First Octopus Farm - Linkpost

I don't have anything substantive to add, but this is really really sad to hear. Thanks for sharing.

Bayesian Mindset

> The wrong tool for many.... Some people accomplish a lot of good by being overconfident.

But Holden, rationalists should win. If you can do good by being overconfident, then Bayesian habits can and should endorse overconfidence.

Since "The Bayesian Mindset" broadly construed is all about calibrating confidence, that might sound like a contradiction, but it shouldn't. Overconfidence is an attitude, not an epistemic state.

2Holden Karnofsky7mo
It might be true that the right expected utility calculation would endorse being overconfident, but "Bayesian mindset" isn't about behaving like a theoretically ideal utility maximizer - it's about actually writing down probabilities and values and taking action based on those. I think trying to actually make decisions this way is a very awkward fit with an overconfident attitude: even if the equation you write down says you'll do best by feeling overconfident, that might be tough in practice.

> Bayesian habits can and should endorse overconfidence

I disagree; Bayesian habits would lead one to the self-fulfilling prophecy point.

A Case for Improving Global Equity as Radical Longtermism

~50% of Open Phil spending is on global health, animal welfare, criminal justice reform, and other "short-termist" and egalitarian causes.

This is their recent writeup on one piece of how they think about disbursing funds now vs later https://www.openphilanthropy.org/blog/2021-allocation-givewell-top-charities-why-we-re-giving-more-going-forward

1LiaH6mo
Thank you, I have read the Global Health and Wellbeing portfolio and listened to Alexander Berger's podcast, but I am still left with the question, are they doing enough? Are their causes sufficiently broad? Have they left stones unturned? What innovative cause has been missed? I can't help but think this is a too-easy dismissal of the circumstance, and risks missing opportunities to save lives in very effective cause areas.
EA megaprojects continued

This perspective strikes me as extremely low-agentiness.

Donors aren't this wildly unreachable class of people: they read the EA Forum, they have public emails, etc. Anyone, including you, can take one of these ideas, scope it out more rigorously, and write up a report. It's nobody's job right now, but it could be yours.

5John G. Halstead8mo
haha that's fair! There is of course a tragedy-of-the-commons risk here though - of people discussing these ideas and it not being anyone's job to make them happen.
What are some success stories of grantmakers beating the wider EA community?

Sure, but outside of OpenPhil, GiveWell is the vast majority of EA spending, right?

Not a grant-making organization, but as another example, the Rethink Priorities report on Charter Cities seemed like fairly "traditional EA"-style analysis.

2Neel Nanda8mo
Sure. But I think the story there was that Open Phil intentionally split off to pursue this much more aggressive approach, and GiveWell is more traditional charity focused/requires high standards of evidence. And I think having prominent orgs doing each strategy is actually pretty great? They just fit into different niches
What are some success stories of grantmakers beating the wider EA community?

There's a list of winners here, but I'm not sure how you would judge counterfactual impact. With a lot of these, it's difficult to demonstrate that the grantee would have been unable to do their work without the grant.

At the very least, I think Alexey was fairly poor when he received the grant and would have had to get a day job otherwise.

What are some success stories of grantmakers beating the wider EA community?

I think the framing of good grantmaking as "spotting great opportunities early" is precisely how EA gets beat.

Fast Grants seems to have been hugely impactful for a fairly small amount of money; the trick is that the grantees weren't even asking: there was no institution to give to, and no cost-effectiveness estimate to run. It's a somewhat more entrepreneurial approach to grantmaking. It's not that EA thought it wasn't very promising, it's that EA didn't even see the opportunity.

I think it's worth noting that a ton of OpenPhil's portfolio would score reall... (read more)

> I think it's worth noting that a ton of OpenPhil's portfolio would score really poorly along conventional EA metrics. They argue as much in this piece.

To be clear, to the extent your claim is true, giving money to things that ex ante have a lower cost-effectiveness than GiveWell top charities + have low information value is more of a strike against Open Phil than it is against the idea of using cost-effectiveness analysis.

> So of course the community collectively gets credit because OpenPhil identifies as EA, but it's worth noting that their "hits based giving" approach diverges substantially from more conventional EA-style (quantitative QALY/cost-effectiveness) analysis and asking what that should mean for the movement more generally.

My impression is that most major EA funding bodies, bar Givewell, are mostly following a hits based giving approach nowadays. Eg EA Funds are pretty explicit about this. I definitely agree with the underlying point about weaknesses of traditional EA methods, but I'm not sure this implies a deep question for the movement, vs a question that's already fairly internalised

The coronavirus Fast Grants were great, but their competitive advantage seems to have been that they were the first (and fastest) people to move in a crisis.

The overall Emergent Ventures idea is interesting and worth exploring (I say, while running a copy of it), but has it had proven cost-effective impact yet? I haven't been following the people involved but I don't remember MR formally following up.

I think Fast Grants may not be great on a longtermist worldview (though it might still be good in terms of capacity-building, hmm), and there are few competent EA grantmakers with a neartermist, human-centric worldview.

Liberty in North Korea, quick cost-effectiveness estimate

Saying "I'd rather die than live like that" is distinct from "this is worse than non-existence." Can you clarify?

Even the implication that moving a NK person to SK is better than saving 10 SK lives is sort of implausible - for both NKs and SKs alike. I don't know what they would find implausible. To me it seems plausible.

Liberty in North Korea, quick cost-effectiveness estimate

> I believe NK people would likely disagree with this conclusion, even if they were not being coerced to do so.

I don't have good intuitions on this; it doesn't seem absurd to me.

Unrelated to NK, many people suffer immensely from terminal illnesses, but we still deny them the right to assisted suicide. For very good reasons, we have extremely strong biases against actively killing people, even when their lives are clearly net negative.

So yes, I think it's plausible that many humans living in extreme poverty or under totalitarian regimes are experiencing e... (read more)

2Ramiro8mo
I tend to agree that there are lives (human or not) not worth living, but my point is that it's very difficult to consistently identify them by using only my own preference ordering. Saying "I'd rather die than live like that" is distinct from "this is worse than non-existence." (I'm assuming we're not taking into account externalities and opportunity costs. An adult male lion's life seems pretty comfortable and positive, but it entails huge costs for other animals.) It's even harder if you have to take into account the perspectives of the interested parties. For instance, in the example we're discussing, SK people could also complain that your utility function implied that preventing one NK birth is equal to saving 10 SK lives. Even the implication that moving a NK person to SK is better than saving 10 SK lives is sort of implausible - for both NKs and SKs alike.
Why hasn’t EA found agreement on patient and urgent longtermism yet?

EA has consensus on shockingly few big questions. I would argue that not coming to widespread agreement is the norm for this community.

Think about:

  • Neartermism vs. longtermism
  • GiveWell-style CEAs vs. Open Phil-style explicitly non-transparent hits-based giving
  • Total Utilitarianism vs. Suffering-focused Ethics
  • Priors on the hinge-of-history hypothesis
  • Moral Realism

These are all incredibly important and central to a lot of EA work, but as far as I've seen, there isn't strong consensus.

I would describe the working solution as some combination of:

  • Pursu
... (read more)
A Red-Team Against the Impact of Small Donations

I think I see the confusion.

No, I meant an intervention that could produce 10x ROI on $1M looked better than an intervention that could produce 5x ROI on $1B, and now the opposite is true (or should be).
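Spelled out with those numbers (a rough sketch):

$$\$1\text{M} \times 10 = \$10\text{M} \qquad \text{vs.} \qquad \$1\text{B} \times 5 = \$5\text{B}$$

When only ~$1M could be deployed, the 10x intervention dominated; with ~$1B available, the scalable 5x intervention produces far more total value despite the lower per-dollar return.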

A Red-Team Against the Impact of Small Donations

Uhh, I'm not sure if I'm misunderstanding or you are. My original point in the post was supposed to be that the current scenario is indeed better.

2Denkenberger8mo
Ok, so we agree that having $1 billion is better despite diminishing returns. So I still don't understand this statement: Are you saying that in 2011, we would have preferred $1M over $1B? Or does "look better" just refer to the benefit to cost ratio?
How should Effective Altruists think about Leftist Ethics?

I sort of expect the young college EAs to be more leftist, and expect them to be more prominent in the next few years. Though that could be wrong; maybe college EAs are heavily selected for not being already committed to leftist causes.

I don't think I'm the best person to ask haha. I basically expect EAs to be mostly Grey Tribe, pretty Democratic, but with some libertarian influences, and generally just not that interested in politics. There's probably better data on this somewhere, or at least the EA-related SlateStarCodex reader survey.

3Misha_Yagudin8mo
This is fairly aligned with my take but I think EAs are more blue than grey and more left than you might be implying. (Ah, by you I meant Stefan, he does/did a lot of empirical psychological/political research into relevant topics.)
How should Effective Altruists think about Leftist Ethics?

Okay, as I understand the discussion so far:

  • The RP authors said they were concerned about PR risk from a leftist critique
  • I wrote this post, explaining how I think those concerns could more productively be addressed
  • You asked why I'm focusing on Leftist Ethics in particular
  • I replied that I haven't seen authors cite concerns about PR risk stemming from other kinds of critique

That's all my comment was meant to illustrate, I think I pretty much agree with your initial comment.

5Stefan_Schubert8mo
Ah, I see. Thanks!