All of AndrewDoris's Comments + Replies

This year I gave 13% of my income (+ some carryover from last year, which I had postponed) to EA charities. Of this, I gave about half to global health and development (mostly to GiveWell Top Charities, some to GiveDirectly) and the other half to animal welfare (mostly to the EA Funds Animal Welfare Fund, some to The Humane League). I also gave $1,250 to various political candidates I felt were EA-aligned. In prior years I've given overwhelmingly to global health and development and I still think that's very important: it's what initially drew me to EA an... (read more)

Great comment. I think "people who sacrifice significantly higher salaries to do EA work" is a plausible minimum definition of who those calling for democratic reforms feel deserve a greater say in funding allocation. It doesn't capture all of those people, nor does it solve the harder question of "what is EA work/an EA organization?" But it's a start.

Your 70/30 example made me wonder whether redesigning EA employee compensation packages to include large matching contributions might help as a democratizing force. Many employers outside EA/in the private sector off... (read more)

Another option would be to just directly give EA org employees regranting funds, with no need for them to donate their own money to regrant. However, requiring some donation, matched at a high rate (e.g. 5:1), gets them to take on at least some personal cost to direct funding.

Also, the EA org doesn't need to touch the money. The org can just confirm employment, and the employee can regrant through a system Open Phil (or GWWC) sets up or report a donation for matching to Open Phil (or GWWC, with matching funds provided by Open Phil).

I think this is a proposal worth exploring. Open Phil could earmark additional funding to orgs for employee donation matching.

It is often the explicit job of a journalist to uncover and publicly release important information from sources who would not consent to its release.

"Moral authority" and "intellectual legitimacy" are such fuzzy terms that I'm not really sure what this post is arguing.

Insofar as they just denote public perceptions, sure: this is obviously bad PR for the movement. It shows we're not immune from big mistakes, and raises fair questions about the judgment of individual EAs, or certain problematic norms/mindsets among the living breathing community of humans associated with the label. We'll probably get mocked a bit more and be greeted with more skepticism in elite circles. There are meta-EA problems that n... (read more)

Sarah Levin (1y):
See Samo's essay series here for the definition of "intellectual legitimacy" as it's being used in the OP: ...

Thanks Thomas - appreciate the updated research. And that wasn't a typo, just a poorly expressed idea. I meant to say, "Only 17% of respondents reported less than 90% confidence that HLMI will eventually exist."

If you are a consequentialist, then incorporating the consequences of reputation into your cost-benefit assessment is "actually behaving with integrity." Why is it more honest - or even perceived as more honest - for SBF to exclude reputational consequences from his assessment of what is most helpful?

Insofar as SBF's reputation and EA's reputation are linked, I agree with you (and disagree with OP) that it could be seen as cynical and hypocritical for SBF to suddenly focus on American beneficiaries in particular. These have never otherwise been EA priorities, so he... (read more)

Stefan_Schubert (2y):
Thank you, this is helpful. I do agree with you that there is a difference between supporting GiveWell-recommended charities and supporting American beneficiaries. More generally, my argument wasn't directly about what donations Sam Bankman-Fried or other effective altruists should make, but rather about what arguments are brought to bear on that issue. Insofar as an analysis of direct impact suggests that certain charities should be funded, I obviously have no objection to that. My comment rather concerned the fact that the OP, in my view, put too much emphasis on reputational considerations relative to direct impact. (And I think this has been a broader pattern on the forum lately, which is part of the reason I thought it was worth pointing out.)

I disagree with this for two reasons. First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational." There is overlap in both cases, but if we must categorize, I think it's closer to the opposite. Short-term, RCT-backed spending is the most direct impact EA knows how to confidently make. And is not the entire project of engaging with electoral politics one of managing reputations?

To fund a political campaign is to attempt to popularize a candidate and their ideas; that is, ... (read more)

First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational."

The OP focused on PR/reputation, which is what I reacted to.

If you accept that reputation matters, why is optimizing for an impression of greater integrity better than optimizing for an impression of greater altruism? In both cases, we're just trying to anticipate and strategically preempt a misconception people may have about our true motivations.

I think there's a difference between creating a reputation for integrit... (read more)

I do the same, but I think we should be transparent about what those harmful ideas are. Maintain posted rules about which words or topics are beyond the pale, which a moderator can enforce unilaterally with an announcement, much as they do on private Facebook groups or Reddit threads. Where a harmful comment doesn't explicitly violate a rule, users can still downvote it into oblivion - but it shouldn't be up to one or two people's unilateral discretion.

*(Note: This neighbor threatened me with a kitchen knife when we were both eight years old, and seemed generally prone to violence and antisocial behavior. So I don't think his apparent indifference to mosquito suffering should be taken as a counter-example suggesting that most people are also indifferent.)

TL;DR - Thanks for an interesting and accessible post! With the caveat that I've done no research and have only anecdotes to back this up, I wonder if you may underestimate people's intuitive ability to feel empathy for insects. Perhaps the more daunting obstacle to social concern for insect welfare overlaps with our indifference toward wild animal welfare in general?

***

When I was about 7, one of my young neighbors used to pin large mosquitoes against his playset slide and slowly tear off one limb at a time.* My siblings, parents, and I universally found t... (read more)

JamieGittins (2y):
Thanks for a really interesting comment Andrew! I think you're definitely correct that we shouldn't underestimate people's moral concern for insects. I recently saw this poll by Rethink Priorities which shows that around half to two thirds of Americans believe that insects can feel pain, which isn't too far off the kind of responses you get when you ask about fish. I think ultimately insect welfare is currently so overlooked for a mixture of reasons, not just the lack of empathy that I address in my post. And I think you're spot on in identifying that the wild/farmed distinction is probably a key part of this.

I suspect it would be easier to convince people who HAVE been bitten by a snake to go to the hospital than to convince people who have not yet been bitten to constantly wear some kind of protective wraparound shinguards every time they're on the farm. The daily inconvenience level seems high for such a rare event. Even malaria nets are often not used for their intended purpose once distributed, and they seem to me like less of an inconvenience.

Pat Myron (1y):
@Peter S. Park @MathiasKB @AndrewDoris Rather than armoring where you're bitten, it's less costly and less inconvenient to prevent bites in the first place by emitting odor/noise/light to ward off predators. Odor seems most promising for snakes according to https://www.callnorthwest.com/2019/04/home-remedies-to-keep-snakes-away/ - maybe it's more than superstition to carry around smelly garlic/onions :)

Makes sense, and I'm not surprised to hear Allison may overestimate the risk. By coincidence, I just finished a rough cost/benefit analysis of U.S. counterterrorism efforts in Afghanistan for my studies, and his book on Nuclear Terrorism also seemed to exaggerate that risk. (I do give him credit for making an explicit prediction, though, a few years before most of us were into that sort of thing).

In any case, I look forward to a more detailed read of your Founders Pledge report once my exams end next week. The Evaluating Interventions section seems like precisely what I've been looking for in trying to plan my own foreign policy career.

I took that from a Kelsey Piper writeup here, assuming she was summarizing some study:

"Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now."

Th... (read more)

Imagine if ants figured out a way to invent human beings. Because they spend all day looking for food, they might program us to "go make lots of food!" And maybe they'd even be cautious, and anticipate certain problems. So they also program us not to use any anteaters as we do it.  Those things are dangerous!

What would we do? Probably, we'd make a farm that grows many times more food than the ants have ever seen. And then we'd water the crops - flooding the ant colony and killing all the ants. Of course, we didn't TRY to kill the ants; they were just... (read more)

Artificial Intelligence is very difficult to control. Even in relatively simple applications, the top AI experts struggle to make it behave. This becomes increasingly dangerous as AI gets more powerful. In fact, many experts fear that if a sufficiently advanced AI were to escape our control, it could actually extinguish all life on Earth. Because AI pursues whatever goals we give it with no mind to other consequences, it would stop at nothing – even human extinction – to maximize its reward.

We can't know exactly how this would happen - but to make it less ... (read more)

mic (2y):
I'm not sure this is true, unless you use a very restrictive definition of "AI expert". I would be surprised if most AI researchers saw AI as a greater threat than climate change.

One-liner for policymakers:

"Most experts in the AI field think it poses a much larger risk of human extinction than climate change." - Kelsey Piper, here

One-liner: Artificial Intelligence may kill you and everyone you know. (from Scott Alexander, here)
 

I think the Thucydides Trap thing is a significant omission that might increase the probabilities listed here. If we forget the historical data for a minute and just intuitively look at how this could most foreseeably happen, the world's two greatest powers are at a moment of peak tensions and saber-rattling, with a very plausible conflict spark over Taiwan. Being "tough on China" is also one of the only issues that both Republicans and Democrats can agree on in an increasingly polarized society. All of which fits Allison's hypothesis that incumbent hegemo... (read more)

Stephen Clare (2y):
I agree with this. I think there are multiple ways to generate predictions and I couldn't cover everything in one post. So while here I used broad historical trends, I think that considerations specific to US-China, US-Russia, and China-India relations should also influence our predictions. I discuss a few of those considerations on pp. 59-62 of my full report for Founders Pledge and hope to at least get a post on US-China relations out within the next 2-3 months.

One quick hot take: I think Allison greatly overestimates the proportion of power transitions that end in conflict. It's not actually true that "incumbent hegemons rarely let others catch up to them without a fight" (emphasis mine). So, while I haven't run the numbers yet, I'll be somewhat surprised if my forecast of a US-China war ends up being higher than ~1 in 3 this century, and very surprised if it's >50%. (Metaculus has it at 15% by 2035.)

There are many reasons why appeasement and Neville Chamberlain make a poor comparison for most modern conflicts and are wildly overused as a historical analogy. One of the biggest is that Hitler was dead set on taking over the entire world due to idiosyncrasies of his worldview, era, and national history, and arguably capable of it given the size of his armies and absence of nuclear weapons at the time. There are strong reasons to believe that neither Xi Jinping nor Vladimir Putin has such ambitions or capabilities. Even if the United States had "appeased"... (read more)

Peter (2y):
It's important to distinguish between different kinds of concessions. Removing missiles that can strike a country in exchange for removing missiles that can strike another country is VERY different from helping a dictator take over parts of a country when they threaten violence. Once you start, where are you going to stop? The problem also isn't just Putin; the problem is every tyrant watching the response here. Most of the world is not part of NATO. If Russia succeeds, China's government may be emboldened to go after Taiwan, for example.

I am consistently bewildered by how many people seem to think Ukraine's membership in NATO is something that should be decided by Ukraine alone. NATO is a mutual defense compact: an international commitment involving multiple parties.  So it inherently affects the United States and other NATO member nations. For the United States (or France, or anyone) to decline a military alliance with Ukraine - that is, to "shut the door" on Ukrainian membership - would not be a denial of Ukrainian sovereignty, but an EXERCISE of American/French sovereignty.  ... (read more)

DPiepgrass (2y):
Either side (Ukraine or NATO) can make the decision unilaterally, but if in fact "it would be in Ukraine's interests" then Ukraine could rationally make that call. If NATO had said "we're permanently shutting the door on you and it's for your own good!", the world would rightly question the "for your own good" part.

As someone pursuing a career in U.S. foreign policy, I strongly agree with you. I have watched this debate unfold for the past three months, only to see the fears and predictions of others who agreed with you largely come true. And mostly I feel that the whole ordeal validates my decision to pursue foreign policy - with a focus on reducing the risk of great power conflict - as an EA cause area.

The existence of disagreement in the comments might hint that this area is relatively under-researched and under-discussed in the EA community, given what a cross-cutting... (read more)

Charles He (2y):
Although the EA forum seems to be literally raising funds for war materiel, I doubt the uneven reception to David's post is driven by emotion or ignorance (although I agree it didn't help that the post initially seemed to favor the Russian narrative).

I think "realism" or "consequentialism" is dominant (whatever that really means). From that perspective, I don't think anyone believes NATO had an option to close the door on Ukraine. Also, the characterization of Russian behavior and irredentism seems incomplete, and the value of the current situation is unclear.

In your other comments, you knocked down some bad takes. While I think you are right, you've only knocked down bad takes.

Great power conflict is important and neglected in EA, especially how to better bridge and communicate peacefully. I'm less sure what that has to do with Russia. There is a vast apparatus to study Russia already; it would be good to hear clearly what EA's contribution would be and why it should be increased by these events.
DavidZhang (2y):
Thanks Andrew - I'm glad you agree. I also agree that consequentialism encourages a high level of realism. That said, I was expecting a higher level of agreement from the EA community on this post, so it's interesting that not everyone shares my view.

Realizing I'm coming in late and that many of my points have doubtless been addressed by other commenters, here are five thoughts:

  1. This reminds me of Eliezer Yudkowsky's 2015 criticism of using vague flow-through effects, with animal welfare cited specifically.  He noted that at the extremes, it seems like the sort of warm-glow reasoning someone might use to justify donating to your local performing arts center or running the 5k Susan G. Komen Race for the Cure - both of which are perfectly fine things to do, but not traditionally seen as EA, so much as i
... (read more)

Good catch - I confess I did not click through to Paulsen's page, and agree it was a reach for the page (and therefore, me) to explicitly link consequentialism with Mao's ideological roots.

As a fan of utilitarianism myself, I do think Mao's general approach to social reform remains a fair cautionary tale about moral views which would entirely discard a presumption in favor of act-based side constraints, or place too much faith in their own ability to predict cause and effect, both of which are criticisms of utilitarianism's usefulness. But the same could b... (read more)

Linch (2y):
Thanks for the reply! I think the question that's relevant to me is something like ... or ...

I think Mao's example is hardly evidence against a), and if anything is straightforwardly evidence for it. I think it is weak evidence against b), but overall quite weak. I think plenty of horrendous actions were committed by people without deep training in (systematized) ethics. Genghis Khan comes to mind, for example.

When I think about ways utilitarianism can be self-effacing, two obvious mechanisms come to mind:

1. We may be bad at forecasting the future consequences of our actions, and cause lots of harm ex post due to genuine miscalculation, motivated reasoning, or bad luck.
   - Evidence for this: If we see many examples of dictators who studied Bentham and Mill diligently but (mis)applied their ethics and did horrendous things, for example by dropping a minus sign in a utility calculation somewhere, then this is evidence in favor of greater epistemic humility about our predictive power and our ability to predictably cause positive outcomes.
2. Utilitarian reasoning may create moral licensing/"create cover" for selfish actors to do horrendous actions in the name of good.
   - Evidence for this: If we see many examples of dictators who call themselves utilitarians, Benthamites, etc., and do many evil actions in the name of utilitarianism, then this will be evidence to me that utilitarianism-in-practice has horrifying consequences even if in the abstract "true" utilitarians may be immune to these issues.
   - (I think this is what you were getting at with the "no true Scotsman" argument?)

But in fact, while both mechanisms are abstractly reasonable, in practice I don't observe much of either (whereas there are clear and compelling harms from competing philosophers, or at least from false Scotsmen following philosophers like Nietzsche, Marx, Kant, etc.). So overall I think it's a stretch to believe that utilitarianism is abstractly reasonable but the evidence is against...

I share your doubts, partly for reasons you described. But also because the track record becomes even murkier when you look even slightly beyond "early utilitarians" (and especially, beyond philosophers themselves) to broadly utilitarian sentiments. "The ends justify the means" is perhaps most closely associated with Machiavelli, who called for ruthless violence and cruelty from leaders in order to stay in power. Mao Zedong's Wikipedia page notes he was drawn to a consequentialist worldview from an early age, believing "strong individuals were not bound by... (read more)

I was surprised to read this line:

Mao Zedong's Wikipedia page notes he was drawn to a consequentialist worldview from an early age,

Clicking through the wikipedia page, I see:

He was inspired by Friedrich Paulsen, whose liberal emphasis on individualism led Mao to believe that strong individuals were not bound by moral codes but should strive for the greater good, and that the "end justifies the means" conclusion of Consequentialism

And clicking further, I get:

Friedrich Paulsen (German: [ˈpaʊlzən]; July 16, 1846 – August 14, 1908) was a German Neo-Kantian phi

... (read more)

That's all good, intuitive advice. I'd considered something like moral luck before but hadn't heard the official term, so thanks for the link.

I imagine it could also help, psychologically, to donate somewhere safe if your work is particularly risky. That way you build a safety net. In the best case, your work saves the world; in the worst case, you're earning to give and saving lives anyway, which is nothing to sneeze at.

My human capital may best position me to focus my work on one cause to the exclusion of others. But my money is equally deliverable to any of them. So it shouldn't be inefficient to hedge bets in this way if the causes are equally good.

Thanks for the encouragement!  The framing's for my own benefit, too. I've found it helps me navigate big decisions to write out the best case I can think of for both sides, and then reread sometime later to see which best convinces me.

Rowan_Stanley (3y):
Yeah, I can see how that would be helpful - I'm thinking of having a go at it as a decision-making tool myself. The approach kind of reminds me of internal family systems therapy, actually: trying to reconcile different parts of yourself by imagining them as different people. The main difference being that there's no trauma in this kind of scenario (hopefully, anyway!), and a lot less psychotherapy jargon :)