-

I'm in a pretty similar situation (I'm a CS senior in the US who took a SWE job offer from a unicorn). If I had the choice, I would go into direct work. Ben Todd said the following earlier this year:

I’d typically prefer someone in [non-leadership] roles to an additional person donating $400,000–$4 million per year (again, with huge variance depending on fit).

To me, those numbers sound high enough to swamp the downsides you mentioned of working on community building, assuming you want to choose whichever option is higher-impact.

(I'm posting this as a comment because it's not as well-thought-out as the answers others have given.)

-

[...] you would have to argue that:

  1. There are longtermist organizations that are currently funding-constrained,
  2. Such that more funding would enable them to do more or better work,
  3. And this funding can't be met by existing large EA philanthropists.

It's not clear to me that any of these points are true.

It seems to me that those points might currently be true of Rethink Priorities. See these relevant paragraphs from this recent EA Forum post on their 2021 Impact and 2022 Strategy:

If better funded, we would be able to do more high-quality work and employ more talented researchers than we otherwise would.

Currently, our goal is to raise $5,435,000 by the end of 2022 [...]. However, we believe that if we were maximally ambitious and expanded as much as is feasible, we could effectively spend the funds if we raised up to $12,900,000 in 2022.

Not all of this would go to their longtermist work, but it seems they plan to spend at least 26% of the additional funding on longtermism in 2022-2023 (if they succeed at raising at least $5,435,000), and up to about 41% if they raise $12,900,000.
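As a rough back-of-the-envelope (a sketch only; it assumes those percentages apply to the full fundraising totals quoted above, which is my reading of the post), that works out to roughly:

```python
# Back-of-the-envelope: implied longtermist spending, assuming the 26% / 41%
# shares apply to the full fundraising totals quoted from the RP post.
baseline_goal = 5_435_000      # 2022 fundraising goal
stretch_goal = 12_900_000      # "maximally ambitious" stretch figure

share_at_baseline = 0.26       # at least 26% toward longtermism at the baseline goal
share_at_stretch = 0.41        # up to ~41% toward longtermism at the stretch goal

print(f"Baseline: ~${baseline_goal * share_at_baseline:,.0f} toward longtermism")
print(f"Stretch:  ~${stretch_goal * share_at_stretch:,.0f} toward longtermism")
# Baseline: ~$1,413,100 toward longtermism
# Stretch:  ~$5,289,000 toward longtermism
```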

It also seems that large donors aren't funding them as much as they'd like. In the comments of that post, RP's Director of Development said that there have been several instances in which major grantmakers gave them only 25%-50% of the funding they requested. Also, Linch asked EAs on Facebook who are considering donating this year to donate to Rethink Priorities. So I think there's good evidence that all of the points you mentioned are currently true.

That said, great funding opportunities like this can disappear very quickly if a major grantmaker changes its mind.

-

I would have counted it as a big company and not a startup in thinking about this post, but maybe that's not how the author intended it?

I thought Stripe would be in the reference class of startups (since it still raises money from VCs) until I read Michael Dickens's reply to this comment. I agree that it was a supremely bad example, though. The other companies I mentioned probably still count?

There are also a lot of smaller/newer companies that pay about as much as Google/Meta that I didn't mention in my first comment. Most (though not all) of them are unicorns, but I think they make up a substantial fraction of the companies that people actively trying to work at startups end up at: they're large compared to the average startup, whereas the average startup is less likely to have the infrastructure to absorb more people or recruit in a predictable way, and may hire exclusively from personal networks.

-

Face value compensation at the top tech companies is generally much higher than what you would get at a startup. Have a look at https://levels.fyi

It's more complicated than that. Some top startups (e.g. Stripe, Airtable, Databricks, Scale AI, ByteDance, Benchling, and several others) pay at least as much as, or a lot more than, e.g. Google/Meta. Some of those (Stripe, Airtable, Scale AI) seem to offer new grads close to $100k more than Google does on average in the first year (counting signing bonuses, and assuming those companies' valuations don't change). Also, Levels.fyi's 2020 report showed that a lot of the top-paying companies were startups.

But it is probably the case that startups with more room for growth pay much less.

-

FWIW my completely personal and highly speculative view is that EA orgs and EA leaders tend to talk too much about x-risk and not enough about s-risk, mostly because the former is more palatable, and is currently sufficient for advocating for s-risk relevant causes anyway. Or more concretely: It's pretty easy to imagine an asteroid hitting the planet, killing everyone, and eliminating the possibility of future humans. It's a lot wackier, more alienating and more bizarre to imagine an AI that not only destroys humanity, but permanently enslaves it in some kind of extended intergalactic torture chamber.

I’m pretty sure that risks of scenarios much broader and less specific than extended intergalactic torture chambers count as s-risks. S-risks are defined simply as “risks of astronomical suffering.” So, for example, the risk of a sufficiently large future with a small but nonzero density of suffering would count as an s-risk. See this post from Tobias Baumann for examples.