Stephen Clare

Comments

New 80k problem profile - Climate change

I think you'll find answers to those questions in section 1 of John and Johannes's recent post on climate projections. IIRC the answers are yes, and those numbers correspond to RCP4.5.

New 80k problem profile - Climate change

I think this comment demonstrates the importance of quantifying probabilities. For example, you write:

Could agriculture cope with projected warming? Possibly, maybe probably. Can it do so while supply chains, global power relations and financial systems are disrupted or in crisis? That's a much harder prospect.

I can imagine either kinda agreeing with this comment, or completely disagreeing, depending on how we're each defining "possibly", "probably", and "much harder".

For what it's worth, I also think it's probable that agriculture will cope with projected warming. In fact, I think it's extremely likely that, even conditional on geopolitical disruptions, the effects of technological change will swamp any negative effects of warming. To operationalize, I'd say something like: there's a 90% chance that global agricultural productivity will be higher in 50 years than it is today.[1]

Note that this is a claim about the global level. I do expect regional food crises due to droughts. On the whole, though, I believe with high confidence (again, like 90%) that the famine death rate in the 21st century will be lower than it was in the 20th century. But of course it won't be zero. I'd support initiatives like hugely increasing ODA and reforming the World Food Program (which is literally the worst).

  1. ^

    I haven't modelled this out, and I'd expect that probability to shift by ±10 percentage points if I spent another 15 minutes thinking about it.

Where are the cool places to live where there is still *no* EA community? Bonus points if there is unlikely to be one in the future

That's true, good point. Depending on what they're looking for, I can actually see myself encouraging more people to try this out.

Where are the cool places to live where there is still *no* EA community? Bonus points if there is unlikely to be one in the future

If you like the location you're currently in, it seems pretty worth it to try to hang out with other people in your current community first. Join a sports team or games club or something. If you're worried about incentives, then ask a friend for accountability. Say you'll pay them $20 if you don't actually go to the event and ask them to follow up on it.

I'm a bit worried you're underestimating how difficult it would be to move to an entirely different continent on your own. Life as an expat can be expensive and alienating.

EA and the current funding situation

Can you give an example of communication that you feel suggests "only AI safety matters"?

Can we agree on a better name than 'near-termist'? "Not-longermist"? "Not-full-longtermist"?

I don't think a good name for this exists, and I don't think we need one. It's usually better to talk about the specific cause areas than to try to lump them all together as not-longtermism.

As you mention, there are lots of different reasons one might choose not to identify as a longtermist, including both moral and practical considerations.

But more importantly, I just don't think the longtermist vs. not-longtermist distinction is important enough to justify lumping all the other causes into a single group.

Trying to find a word for all the clusters other than longtermism is like trying to find a word that describes all cats that aren't black, but isn't "not-black cats".

One way of thinking about these EA schools of thought is as clusters of causes in a multi-dimensional space. One of the dimensions along which these causes vary is longtermism vs. not-longtermism. But there are many other dimensions, including animal-focused vs. people-focused, high-certainty vs. low-certainty, etc. Not-longtermist causes all vary along these dimensions, too. Finding a simple label for a category that includes animal welfare, poverty alleviation, metascience, YIMBYism, mental health, and community building is going to be weird and hard.

"Not-longtermism" would just be everything outside of some small circle in this space. Not a natural category.

It's because there are so many other dimensions that we can end up with people working on AI safety and people working on chicken welfare in the same movement. I think that's cool. I really like that the EA space has enough dimensions that a really diverse set of causes can all count as EA. Focusing so much on the longtermism vs. not-longtermism dimension under-emphasizes this.

The Vultures Are Circling

I downvoted this post because it doesn't present any evidence to back up its claims. Frankly, I also found the tone off-putting ("vultures"? really?) and the structure confusing.

I also think it underestimates the extent to which the following things are noticeable to grant evaluators. I reckon they'll usually be able to tell when applicants (1) don't really understand or care about x-risks, (2) don't really understand or care about EA, (3) are lying about what they'll spend the money on, or (4) have a theory of change that doesn't make sense. Of course grant applicants tailor their application to what they think the funder cares about. But it's hard to fake it, especially when questioned.

Also, something like the Atlas Fellowship is not "easy money". Applicants will be competing against extremely talented and impressive people from all over the world. I don't think the "bar" for getting funding for EA projects has fallen as much as this post, and some of the comments on it, seem to assume.

How likely is World War III?

I agree with this. I think there are multiple ways to generate predictions, and I couldn't cover everything in one post. So while here I used broad historical trends, I think that considerations specific to US-China, US-Russia, and China-India relations should also influence our predictions. I discuss a few of those considerations on pp. 59-62 of my full report for Founders Pledge and hope to at least get a post on US-China relations out within the next 2-3 months.

One quick hot take: I think Allison greatly overestimates the proportion of power transitions that end in conflict. It's not actually true that "incumbent hegemons rarely let others catch up to them without a fight" (emphasis mine). So, while I haven't run the numbers yet, I'll be somewhat surprised if my forecast of a US-China war ends up being higher than ~1 in 3 this century, and very surprised if it's >50%. (Metaculus has it at 15% by 2035).

Unsurprising things about the EA movement that surprised me

Now it’s more like any and all causes are assumed effective or potentially effective off the get-go and then are supported by some marginal amount of evidence.

This doesn't seem true to me, but I'm not an "old guard EA". I'd be curious to know what examples of this you have in mind.
