"To see the world as it is, rather than as I wish it to be."
Currently I work for EA Funds. My job title is still TBD, but I'm responsible for a lot of the communications on behalf of EA Funds and its constituent funds. I also work on grantmaking, fundraising, hiring, and some strategy-setting.
I used to be a Senior Researcher on the General Longtermism team at Rethink Priorities. Concurrently, I also volunteered* as a fund manager for EA Funds' Long-Term Future Fund.
*Volunteering was by choice. The LTFF offers payment to fund managers, but I was unsure whether it made sense to be paid for a second job while I was a salaried employee of RP with a lot of in-practice independence to do what I thought was best for the world.
I believe we changed the text a bunch in August/early September. I think there were a few places we didn't catch the first time, and we made more updates in ~the following month (September). AFAIK we no longer have any (implicit or explicit) commitments to response times anywhere; we only mention predictions and aspirations.
E.g., here's the text near the beginning of the application form:
The Animal Welfare Fund, Long-Term Future Fund and EA Infrastructure Fund aim to respond to all applications in 2 months and most applications in 3 weeks. However, due to an unprecedentedly high load, we are currently unable to achieve our desired speedy turnarounds. If you need to hear back sooner (e.g., within a few weeks), you can let us know in the application form, and we will see what we can do. Please note that: EA Funds is low on capacity and may not be able to get back to you by either your stated deadline or the above aims -- we encourage you to apply to other funders as well if you have a time-sensitive ask.
I'd be a bit surprised if you could find people on this forum who (still) work at Cohere. Hard to see a stronger signal to interview elsewhere than your CEO explaining in a public memo why they hate you.
but making an internal statement about it to your company seems really odd to me? Like why do your engineers and project managers need to know about your anti-EA opinions to build their products?
I agree it's odd in the sense that most companies don't do it. I see it as an attempt to enforce a certain kind of culture (promoting conformity, discouraging dissent, "just build now" at the expense of ethics, etc.) that I don't much care for. But the CEO also made it abundantly clear he doesn't like people who think like me either, so ¯\_(ツ)_/¯.
Thank you for your detailed, well-informed, and clearly written post.
America has about five times more vegetarians than farmers — and many more omnivores who care about farm animals. Yet the farmers wield much more political power.
This probably doesn't address your core points, but the most plausible explanation to me is that vegetarians on average just care a lot less about animal welfare than farmers care about their livelihoods. Most people have many moral goals that compete both with each other and with more mundane concerns (which, by revealed preference, they usually care about more), while someone's job is plausibly in the top 1-3 of their priorities.
Sure, there are some animal advocates (including on this forum!) who care more about animals being tortured than even farmers care about their jobs. But they're the exception rather than the rule; I'd be very, very surprised if they were anywhere close to 20% of vegetarians.
Minor, but: searching on the EA Forum, your post and Quentin Pope's post are the only posts with the exact phrase "no evidence" (EDIT: in the title, which weakens my point significantly, but it still holds). The closest other match on the first page is "There is little (good) evidence that aid systematically harms political institutions", which to my eyes seems substantially more caveated.
Over on LessWrong, the phrase is more common, but the top hits are multiple posts that specifically argue against the phrase in the abstract. So overall I would not consider it an isolated demand for rigor if someone were to argue against the phrase "no evidence" on either forum.
The point is not that 1.5x is a large number -- in terms of single variables, it is -- the point is that 2.7x is a ridiculous number.
2.7x is almost exactly how much world GDP per capita has grown over the last 30 years. Obviously some individual countries (e.g. China) have had bigger increases in that window.
30 years isn't that long in the grand scheme of things; it's far shorter than most lifetimes.
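For scale, a quick back-of-the-envelope (assuming smooth compounding; note the 2.7x figure itself is walked back in the EDIT below): a 2.7x increase over 30 years corresponds to an average annual growth rate of

$$2.7^{1/30} \approx 1.034,$$

i.e. roughly 3.4% per year.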
(EDIT: nvm, this is false: the chart said "current dollars", which I thought meant inflation-adjusted, but it's actually not inflation-adjusted.)
Makes sense! I agree that fast takeoff + short timelines makes my position outlined above much weaker.
e.g. decisions and character traits of the CEO of an AI lab will explain more of the variance in outcomes than decisions and character traits of the US President.
I want to flag that if an AI lab and the US gov't are equally responsible for something, the comparison will still favor the AI lab CEO, since lab CEOs have much greater control over their companies than the president has over the USG.
I'm not convinced that he has "true beliefs" in the sense you or I mean it, fwiw. A fairly likely hypothesis is that he just "believes" things that are instrumentally convenient for him.
Thanks! I don't have much expertise or deep analysis here, just sharing/presenting my own intuitions. Definitely think this is an important question that analysis may shed some light on. If somebody with relevant experience (eg DC insider knowledge, or academic study of US political history) wants to cowork with me to analyze things more deeply, I'd be happy to collab.
I can try, though I haven't pinned down the core cruxes between my default story and others' stories. I think the basic idea is that AI risk and AI capabilities are both really big deals -- arguably the biggest deals around, by a wide variety of values. If the standard x-risk story is broadly true (and attention is maintained, experts continue to call it an extinction risk, etc.), this isn't difficult for nation-state actors to recognize over time. And states are usually fairly good at recognizing power and threats, so it's hard to imagine they'd just sit on the sidelines and let businessmen and techies take actions that reshape the world.
I haven't thought very deeply about or analyzed exactly what states are likely to do (e.g., does it look more like much heavier regulation, international treaties with civil observers, or almost-unprecedented nationalization of AI as an industry). And note that my claims above are descriptive, not normative. It's far from clear that state actions are good by default.
Disagreements with my assumptions above can weaken some of this hypothesis:
I don't have a good sense of what it means for someone to agree with my 3 assumptions above but still expect state interference to be moderate to minimal. Some possibilities:
Interested in hearing alternative takes, perspectives, and other proposed cruxes.
At the risk of stating the obvious: literally every single person in Alameda and FTX's inner circle worked at a large for-profit corporation out of college and before Alameda/FTX (SBF: Jane Street; Gary Wang: Google; Caroline Ellison: Jane Street; Nishad Singh: Facebook/Meta). It wasn't 5 whole years though, so maybe that made a difference? But they also joined a for-profit company pretty quickly, rather than working at EA nonprofits.
(though you said you were less sure about this claim, and I don't want to harp on it)