Thanks for your thoughtful engagement! Chalmers made a similar point during our interview (that socialist societies would also experience strong pressures to build AGI).
I tried to describe the landscape as it exists right now, without making many claims about what would likely be true under a totally different economic/political system. That being said, I do think it's interesting that the leading labs are all corporations.
If you look at firms in a market economy as profit-maximizing agents, and governments as agents trying to balance many interests (stability, economic growth, geopolitical/military advantage, popular support, international respect, etc.), then I think it's easier to see why firms are pursuing AGI far more aggressively: by decreasing the cost of labor via automation, you can dramatically increase profitability. For a government, AGI may boost economic growth and geopolitical/military advantage at the expense of stability and popular support.
And if you look at existential risk from AI as an externality, governments are more likely to take on the costs of mitigating that kind of risk whereas firms are more likely to pass them on to the broader society.
I've seen some claims that the CCP is less interested in AGI and more interested in narrow applications, like machine vision, facial recognition, and natural language processing, which can all help shore up its power long term. I haven't gone deep into this yet. I'll dig into the China links you sent later.
I don't know of any online communities explicitly focused on this intersection, but I'd be interested in participating in one! Facebook groups historically have been good for this sort of thing (especially because of the mod approval questions you could include), but I've basically stopped using FB entirely, as have lots of others I know. A Slack channel within the larger EA Slack may work (eagreconnect.slack.com), but I just experimented with this and there doesn't seem to be a native feature like the FB mod approval questions. You could have channel admins add people manually, but that seems work-intensive.
One problem I can envision is that people may be wary of having candid conversations in public-ish spaces because of the possibility of journalists or others quoting them now that EA is more high profile.
One thing I will note is that there are way more leftist EAs than is commonly assumed. As one of the more public ones, I have a biased sample I'm sure (people will reach out to me). But one anecdote: at the last EAG Bay Area, I was sitting at a random table of ~6 other people in the main food area and 4 of them were leftists.
Fin Moorhouse asked something along these lines on Twitter. Pasting his question and my response below:
Fin: "Great article. I'm curious: are there estimates for how many extra fish deaths are caused by fishing wild-caught fish, especially high on the food chain (like tuna and salmon)? Seems complicated if fishing diminishes fish stocks and ∴ reduces predation in the long run?"
Me: "I didn't come across any. I think this is an interesting line of reasoning, and it makes me a bit more uncertain about the ethics of wild-fishing, but ultimately, it doesn't move me much.
1. If killing predators in the wild is good, why stop at fish? Why not systematically hunt tigers and lions to extinction? Some people bite this bullet, but I feel like we don't know nearly enough to know what the welfare effects of such a large ecosystem change would be.
2. Given how clueless we are, I think that having clear signals that we care about the wellbeing of others is more robust than coming up with a byzantine diet where eating wild-caught predator fish is good, but eating other kinds of fish is bad.
As our knowledge of the world gets better, I think diets like vegetarianism and veganism are more likely to lead to good welfare outcomes, both because they're easier memes to spread and because someone who eats wild-caught fish on the grounds that they are predators may have motivated reasoning to keep eating them even when our understanding of the welfare effects changes.
Wow, thanks so much – very cool to hear!
Totally agreed RE the central nervous system!
Unfortunately, I wasn't able to find good data on something that specific. Obviously, someone going from an omnivorous diet who replaces all land animals with plants and eats the same number of fish is going to consume fewer animals. But at least in my case, and for others I know, fish consumption increased as a result of going pescetarian.
There are also lots of recommendations to swap out land animals for fish for climate and health reasons, so I wanted to focus more on the animal welfare implications of doing that.
Interesting, will check these out.
Given that many fish we eat come from farms (and that number is increasing), do you think these arguments still hold?
Congratulations Clara! I think this is a really valuable project and am excited to see it come to fruition.
Another thing to consider is the enormous amount of info value we got out of this campaign. It looks like large amounts of money are not a sufficient condition for victory, but if Carrick hadn't been able to raise the amount of hard money needed to make the campaign happen, we would've learned a lot less.
Epistemic status: very tired.
As others mentioned, this feels like too much of an update based on one data point.
One of the largest advantages EAs running for office will have is their ability to fundraise from other EAs. I worry that skepticism of EAs in politics and/or slowness to act on time-sensitive donation opportunities will kneecap the success of future candidates.
Big picture, I think the impact case was pretty solid. The US government is enormously influential. It moves a lot of money, regulates important industries, has the largest military, and can uniquely affect x-risk. Members of Congress exert significant control over the government. Senators more, the president most.
Having an extremely committed EA in govt seems worth A LOT to me.
Raising some amount of money is essential to winning, no matter how much outside money is committed to a race. Campaigns need to hire staff, get on the ballot, and do other things that super PACs can't do. They also get much more favorable rates on TV ad buys, can make better ads, etc. "Hard money", i.e. money raised by campaigns from retail donors and governed by donor caps, is way more valuable than "soft money", i.e. independent expenditures made by super PACs.
It seems clear to me that marginal hard dollars increase the odds of success, and it doesn't have to be that big of an increase for it to be a good bet in expected value terms.
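To make that expected-value point concrete, here's a toy calculation. All of the numbers (the bump in win probability from a marginal donation and the dollar value of a win) are purely illustrative assumptions I'm making up for the sketch, not estimates from this campaign:

```python
# Toy EV sketch with hypothetical numbers: does a marginal hard-money
# donation pencil out if it only slightly raises the odds of winning?
donation = 100_000          # marginal hard dollars (assumed)
delta_p = 0.005             # assumed bump in win probability (0.5 points)
value_of_win = 50_000_000   # assumed value of having an aligned member in Congress

expected_value = delta_p * value_of_win
print(expected_value)              # 250000.0
print(expected_value > donation)   # True under these assumptions
```

The point of the sketch is just that even a small probability shift can dominate the donation's cost when the value of a win is large; the conclusion is only as good as the assumed inputs.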
I would guess that almost no EAs donating to GiveWell charities really understand the evidence base and models going into the recommendation, but we outsource our thinking to people/orgs we trust. Obviously, there's way less of a track record with running EAs for office and a lot of uncertainty baked into politics. But the most experienced, aligned people in the political data science world were supportive of this particular race happening, and A LOT of thinking went into this decision.