Great comment. I think "people who sacrifice significantly higher salaries to do EA work" is a plausible minimum definition of the group that those calling for democratic reforms feel deserves a greater say in funding allocation. It doesn't capture all of those people, nor solve the harder question of "what is EA work/an EA organization?" But it's a start.
Your 70/30 example made me wonder whether redesigning EA employee compensation packages to include large matching contributions might help as a democratizing force. Many employers outside EA/in the private sector off...
Another option would be to simply give EA org employees regranting funds directly, with no need for them to donate their own money to regrant. However, requiring some donation, matched at a high rate (e.g., 5:1), gets them to take on at least some personal cost to direct funding.
Also, the EA org doesn't need to touch the money. The org can just confirm employment, and the employee can regrant through a system Open Phil (or GWWC) sets up or report a donation for matching to Open Phil (or GWWC, with matching funds provided by Open Phil).
I think this is a proposal worth exploring. Open Phil could earmark additional funding to orgs for employee donation matching.
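To make the matching arithmetic above concrete, here's a minimal sketch. The 5:1 ratio is just the example rate mentioned earlier, and the dollar amounts and function name are hypothetical, not part of any actual proposal:

```python
# Hypothetical illustration of high-rate donation matching for EA org employees.
# Assumes a 5:1 match: for every $1 an employee donates, the funder adds $5.

MATCH_RATE = 5  # funder dollars contributed per employee dollar (example rate)

def matched_grant(employee_donation: float, match_rate: float = MATCH_RATE) -> dict:
    """Return the employee's personal cost, the funder's matching
    contribution, and the total amount the employee gets to direct."""
    match = employee_donation * match_rate
    return {
        "employee_cost": employee_donation,
        "funder_match": match,
        "total_directed": employee_donation + match,
    }

# An employee donating $1,000 would direct $6,000 in total,
# while bearing only $1,000 of personal cost.
print(matched_grant(1_000))
```

The point of the high ratio is visible in the output: the employee's say over funding is amplified six-fold relative to their personal sacrifice, while the sacrifice stays nonzero.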
It is often the explicit job of a journalist to uncover and release publicly important information from sources who would not consent to its release.
"Moral authority" and "intellectual legitimacy" are such fuzzy terms that I'm not really sure what this post is arguing.
Insofar as they just denote public perceptions, sure: this is obviously bad PR for the movement. It shows we're not immune from big mistakes, and raises fair questions about the judgment of individual EAs, or certain problematic norms/mindsets among the living breathing community of humans associated with the label. We'll probably get mocked a bit more and be greeted with more skepticism in elite circles. There are meta-EA problems that n...
Thanks Thomas - appreciate the updated research. And that wasn't a typo, just a poorly expressed idea. I meant to say, "Only 17% of respondents reported less than 90% confidence that HLMI will eventually exist."
If you are a consequentialist, then incorporating the consequences of reputation into your cost-benefit assessment is "actually behaving with integrity." Why is it more honest - or even perceived as more honest - for SBF to exempt reputational consequences from what he thinks is most helpful?
Insofar as SBF's reputation and EA's reputation are linked, I agree with you (and disagree with OP) that it could be seen as cynical and hypocritical for SBF to suddenly focus on American beneficiaries in particular. These have never otherwise been EA priorities, so he...
I disagree with this for two reasons. First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational." There is overlap in both cases; but if we must categorize I think it's closer to the opposite. Short-term, RCT-backed spending is the most direct impact EA knows how to confidently make. And is not the entire project of engaging with electoral politics one of managing reputations?
To fund a political campaign is to attempt to popularize a candidate and their ideas; that is, ...
First, it's odd to me to categorize political advertising as "direct impact" but short-term spending on poverty or disease as "reputational."
The OP focused on PR/reputation, which is what I reacted to.
If you accept that reputation matters, why is optimizing for an impression of greater integrity better than optimizing for an impression of greater altruism? In both cases, we're just trying to anticipate and strategically preempt a misconception people may have about our true motivations.
I think there's a difference between creating a reputation for integrit...
I do the same, but I think we should be transparent about what those harmful ideas are. Have posted rules about what words or topics are beyond the pale, which a moderator can enforce unilaterally with an announcement, much like they do on private Facebook groups or Reddit threads. Where a harmful comment doesn't explicitly violate a rule, users can still downvote it into oblivion - but it shouldn't be up to one or two people's unilateral discretion.
*(Note: This neighbor threatened me with a kitchen knife when we were both eight years old, and seemed generally prone to violence and antisocial behavior. So I don't think his apparent indifference to mosquito suffering should be taken as a counter-example suggesting that most people are also indifferent.)
TL;DR - Thanks for an interesting and accessible post! With the caveat that I've done no research and have only anecdotes to back this up, I wonder if you may underestimate people's intuitive ability to feel empathy for insects. Perhaps the more daunting obstacle to social concern for insect welfare overlaps with our indifference toward wild animal welfare in general?
***
When I was about 7, one of my young neighbors used to pin large mosquitoes against his playset slide and slowly tear off one limb at a time.* My siblings, parents, and I universally found t...
I suspect it would be easier to convince people who HAVE been bitten by a snake to go to the hospital than it will be to convince people who have not yet been bitten by a snake to constantly wear some kind of protective wraparound shinguards every time they're on the farm. The daily inconvenience level seems high for such a rare event. Even malaria nets are often not used for their intended purpose once distributed, and they seem to me like less of an inconvenience.
Makes sense, and I'm not surprised to hear Allison may overestimate the risk. By coincidence, I just finished a rough cost/benefit analysis of U.S. counterterrorism efforts in Afghanistan for my studies, and his book on Nuclear Terrorism also seemed to exaggerate that risk. (I do give him credit for making an explicit prediction, though, a few years before most of us were into that sort of thing).
In any case, I look forward to a more detailed read of your Founders Pledge report once my exams end next week. The Evaluating Interventions section seems like precisely what I've been looking for in trying to plan my own foreign policy career.
I took that from a Kelsey Piper writeup here, assuming she was summarizing some study:
"Most experts in the AI field think it poses a much larger risk of total human extinction than climate change, since analysts of existential risks to humanity think that climate change, while catastrophic, is unlikely to lead to human extinction. But many others primarily emphasize our uncertainty — and emphasize that when we’re working rapidly toward powerful technology about which there are still many unanswered questions, the smart step is to start the research now."
Th...
Imagine if ants figured out a way to invent human beings. Because they spend all day looking for food, they might program us to "go make lots of food!" And maybe they'd even be cautious, and anticipate certain problems. So they'd also program us not to use any anteaters as we do it. Those things are dangerous!
What would we do? Probably, we'd make a farm that grows many times more food than the ants have ever seen. And then we'd water the crops - flooding the ant colony and killing all the ants. Of course, we didn't TRY to kill the ants; they were just...
Artificial Intelligence is very difficult to control. Even in relatively simple applications, the top AI experts struggle to make it behave. This becomes increasingly dangerous as AI gets more powerful. In fact, many experts fear that if a sufficiently advanced AI were to escape our control, it could actually extinguish all life on Earth. Because AI pursues whatever goals we give it with no mind to other consequences, it would stop at nothing – even human extinction – to maximize its reward.
We can't know exactly how this would happen - but to make it less ...
I think the Thucydides Trap dynamic is a significant omission, and accounting for it might increase the probabilities listed here. If we forget the historical data for a minute and just intuitively look at how this could most foreseeably happen, the world's two greatest powers are at a moment of peak tensions and saber-rattling, with a very plausible conflict spark over Taiwan. Being "tough on China" is also one of the only issues that both Republicans and Democrats can agree on in an increasingly polarized society. All of which fits Allison's hypothesis that incumbent hegemo...
There are many reasons why appeasement and Neville Chamberlain make a poor comparison for most modern conflicts and are wildly overused as a historical analogy. One of the biggest is that Hitler was dead set on taking over the entire world due to idiosyncrasies of his worldview, era, and national history, and arguably capable of it given the size of his armies and absence of nuclear weapons at the time. There are strong reasons to believe that neither Xi Jinping nor Vladimir Putin has such ambitions or capabilities. Even if the United States had "appeased"...
I am consistently bewildered by how many people seem to think Ukraine's membership in NATO is something that should be decided by Ukraine alone. NATO is a mutual defense compact: an international commitment involving multiple parties. So it inherently affects the United States and other NATO member nations. For the United States (or France, or anyone) to decline a military alliance with Ukraine - that is, to "shut the door" on Ukrainian membership - would not be a denial of Ukrainian sovereignty, but an EXERCISE of American/French sovereignty. ...
As someone working in U.S. foreign policy as a career, I strongly agree with you. I have watched this debate unfold for the past three months, only to see the fears and predictions of others who agreed with you largely come true. And mostly I feel that the whole ordeal validates my decision to pursue foreign policy - with a focus on reducing the risk of great power conflict - as an EA cause area.
The existence of disagreement in the comments might hint that this area is relatively under-researched and under-discussed by the EA community, given what a cross-cutting...
I realize I'm coming in late and many of my points have doubtless been addressed by other commenters, but here are five thoughts:
Good catch - I confess I did not click through to Paulsen's page, and agree it was a reach for the page (and therefore, me) to explicitly link consequentialism with Mao's ideological roots.
As a fan of utilitarianism myself, I do think Mao's general approach to social reform remains a fair cautionary tale about moral views that would entirely discard a presumption in favor of act-based side constraints, or that place too much faith in their own ability to predict cause and effect - both common criticisms of utilitarianism's usefulness. But the same could b...
I share your doubts, partly for reasons you described. But also because the track record becomes even murkier when you look even slightly beyond "early utilitarians" (and especially, beyond philosophers themselves) to broadly utilitarian sentiments. "The ends justify the means" is perhaps most closely associated with Machiavelli, who called for ruthless violence and cruelty from leaders in order to stay in power. Mao Zedong's Wikipedia page notes he was drawn to a consequentialist worldview from an early age, believing "strong individuals were not bound by...
I was surprised to read this line:
Mao Zedong's Wikipedia page notes he was drawn to a consequentialist worldview from an early age,
Clicking through the wikipedia page, I see:
He was inspired by Friedrich Paulsen, whose liberal emphasis on individualism led Mao to believe that strong individuals were not bound by moral codes but should strive for the greater good, and that the "end justifies the means" conclusion of Consequentialism
And clicking further, I get:
...Friedrich Paulsen (German: [ˈpaʊlzən]; July 16, 1846 – August 14, 1908) was a German Neo-Kantian phi
Just following up with a link to the debate recording: Debate on Alternative Voting Systems - YouTube.
That's all good, intuitive advice. I'd considered something like moral luck before but hadn't heard the official term, so thanks for the link.
I imagine it could also help, psychologically, to donate somewhere safe if your work is particularly risky. That way you build a safety net. In the best case, your work saves the world; in the worst case, you're earning to give and saving lives anyway, which is nothing to sneeze at.
My human capital may best position me to focus my work on one cause to the exclusion of others. But my money is equally deliverable to any of them. So it shouldn't be inefficient to hedge bets in this way if the causes are equally good.
Thanks for the encouragement! The framing's for my own benefit, too. I've found it helps me navigate big decisions to write out the best case I can think of for both sides, and then reread sometime later to see which best convinces me.
This year I gave 13% of my income (+ some carryover from last year, which I had postponed) to EA charities. Of this, I gave about half to global health and development (mostly to GiveWell Top Charities, some to Give Directly) and the other half to animal welfare (mostly to the EA Funds Animal Welfare Fund, some to The Humane League). I also gave $1,250 to various political candidates I felt were EA-aligned. In prior years I've given overwhelmingly to global health and development and I still think that's very important: it's what initially drew me to EA an...