Garrison


Choosing causes re Flynn for Oregon

Another thing to consider is the enormous amount of information value we got out of this campaign. It looks like large amounts of money are not sufficient for victory, but if Carrick hadn't been able to raise the hard money needed to make the campaign happen, we would've learned a lot less. 

Choosing causes re Flynn for Oregon

Epistemic status: very tired.

As others mentioned, this feels like too much of an update based on one data point. 

One of the largest advantages EAs running for office will have is their ability to fundraise from other EAs. I worry that skepticism of EAs in politics and/or slowness to act on time-sensitive donation opportunities will kneecap the success of future candidates. 

Big picture, I think the impact case was pretty solid. The US govt is enormously influential. It moves a lot of money, regulates important industries, has the largest military, and can uniquely affect x risk. Members of congress exert significant control over the govt. Senators more, president most. 

Having an extremely committed EA in govt seems worth A LOT to me. 

Raising some amount of money is essential to winning, no matter how much outside money is committed to a race. Campaigns need to hire staff, get on the ballot, and do other things that super PACs can't do. They also get much more favorable rates on TV ad buys, can make better ads, etc. "Hard money", i.e. money raised by campaigns from retail donors and governed by donor caps, is way more valuable than "soft money", i.e. independent expenditures made by super PACs. 

It seems clear to me that marginal hard dollars increase the odds of success, and it doesn't have to be that big of an increase for it to be a good bet in expected value terms. 

I would guess that almost no EAs donating to GiveWell charities really understand the evidence base and models going into the recommendation, but we outsource our thinking to people/orgs we trust. Obviously, there's way less of a track record with running EAs for office and a lot of uncertainty baked into politics. But the most experienced, aligned people in the political data science world were supportive of this particular race happening, and A LOT of thinking went into this decision. 

Being Open and Honest

I've definitely noticed this in the EA NYC community (and I wouldn't be surprised if it were true elsewhere). I think it might come from a place of trying to pre-empt common criticisms/characterizations of EA, but it comes off as weird, especially when the person has no preconceptions about EA. EA has a strong culture that's pretty different from every other community I've ever been a part of, but it doesn't exert control over my life. Obviously, ideas and people from EA influence me in big ways, but that's because I believe those ideas and respect those people.

Free-spending EA might be a big problem for optics and epistemics

A few thoughts on how we could mitigate some of these risks:

  1. Have generous reimbursement policies at EA orgs but don't pay exorbitant salaries. 
    1. I think most EAs should value their time more highly and be willing to trade money for time, and in these cases, I think you can justify a business expense. I think this will help clarify which spending choices are meant to actually boost productivity and which are just for fun. To be clear, I think spending some fraction of your income on just "fun" things like vacations, concerts, and eating out is fine in moderation. But to me at least, the shallow pond thought experiment is still basically true, and there is plenty of need left in the world, even with the current funding situation. 
    2. I think we systematically overestimate how much spending more on personal consumption will make us happy/productive. I know plenty of people in finance/consulting/tech who have convinced themselves that they "need" to spend hundreds of thousands on personal consumption every year. I've lived in NYC on <$50K after taxes and donating for 4 years and feel like I've been able to do basically everything I want to do.
  2. Emphasize costly signals of altruism. 
    1. We should encourage people to take the GWWC pledge and go vegetarian/vegan because they're probably good things to do on their merits and because they signal a commitment to making a sacrifice to help others. 
Free-spending EA might be a big problem for optics and epistemics

This is a great post, and I'm glad these points are being raised. I share a lot of the same concerns (basically, what happens to EA long term when it's just a good deal to join it?).

One big and one small personal win from these changes in funding:

  1. I decided to launch a magazine reporting on what matters over the long term, in large part because of the change in funding situation and related calls for more ambition. I had the idea for doing this more than 3 years ago, but didn't pursue it. (We're aiming to launch in March 2023.)
  2. In August, I quit my job at GiveDirectly to pursue freelance journalism full time, planning to make basically no money for possibly 1-2 years. I cut a lot of costs to maximize my runway. A few months later, I got a job with an EA org that paid better than any job I'd had in the past. Now my time was scarce and money was not. I bought a free-standing dishwasher for ~$1,000, which bought back ~45 minutes a day. I think this decision, and other smaller ones like it, were very good. 

But it's easy to get into self-serving territory where you value your time so highly that you can justify almost any expense (or don't think of cheaper ways to meet the same goals). This can also move us into territory where, to do ostensibly altruistic work, we don't give anything up, and, in fact, argue that others should give things to us. 

This feels fundamentally different from the movement that attracted me 5 years ago (though the reasoning is very consistent, and may well be right). 

What is the strongest case for nuclear weapons?

Unilateral disarmament by the US seems bad, but if the US and USSR had eliminated all nukes, as they almost did in 1986, that seems good to me. No other country had anywhere close to that number, and we could have been much more convincing in getting other countries to follow suit. 

Is there a good summary of EA's impact to date?

Great, thank you! This is definitely out of date, at least for GiveDirectly, where I used to work. GD has moved over $500M to people in poverty, though some substantial fraction of that (>$200M if my memory serves) was to people in the US. The Impact site says $100M. 

Announcing What We Owe The Future

Pre-ordered a hardcover copy! 

Curious for more specifics on the hardcover vs. Kindle thing. Are Kindle pre-orders counted as some fraction of a hardcover order? If so, what is that fraction?

Experimental longtermism: theory needs data

I'm excited for this series! I'm a big believer in EAs doing more things out in the world, both for the direct impacts but probably even more for the information value.

For example, I'm thrilled that Longview is getting into nuclear security grantmaking. I think this is:

  1. good on its own terms
  2. likely to teach us more about how international relations, coordination, and treaties work, which seems essential to ensuring AI and synthetic bio advances go well
  3. something concrete to point to that almost everyone can agree is valuable

(Disclosure: I contract for Longview on something totally different and learned about this when everyone else did.)

I think the sociology of EA will make us overly biased towards research and away from action, even when action would be more effective, in the near and long term. For example, I think there are major limitations to developing AI governance strategies in the absence of working with and talking to governments.

To be clear, research is extremely important, and I'm glad the community is so focused on asking and answering important questions, but I'd be really happy to see more people "get after it" the way you have. 

EA covered on "Stuff You Should Know" Podcast

Thanks for this writeup!

Josh Clark also did a podcast series on x-risk called The End of the World. It's very good! Almost everyone he quotes is from FHI, and it's very aligned with EA thinking on x-risk. 
