Not an expert on this, and I haven't read all the prior discourse, but AIs acting like they're human seems to be a major cause of "LLM psychosis" right now. It's also implicated in a lot of future concerns: AI takeovers get easier if it seems like people should trust AIs more than they really should, AIs might "turn evil" by imitating the scheming AIs depicted in their training text, and we risk handing over the future to AIs that aren't actually moral patients. This work might make AIs act more human, or at least be useful for people who want to do that.
This post also criticizes AI 2027 (https://forum.effectivealtruism.org/posts/KgejNns3ojrvCfFbi/a-deep-critique-of-ai-2027-s-bad-timeline-models), and that post's critiques seem much more concerning? Including a bunch of links to papers that don't really back up your points isn't great practice, but we also don't have AI papers from 2027 yet, so from what you've said I'd presume the authors were clumsily going for something like "we'll have a better version of this paper in 2027".
Are there any organizations out there that would describe their niche as advising small and medium-sized donors? I can't think of any, and I'm wondering why not. I'm not exactly sure what organizations that claim to advise large donors actually do, but it seems plausible that some of those services would also be effective for smaller donors, simply because there are so many more of them. I'm thinking of, for instance:
Among the self-described vegans I've met in real life, about a third were actually some form of reducetarian already. One ate dairy and eggs that carried some form of ethical certification, one ate fish (only certain wild-caught species, I believe) and honey, and another was a strict vegan for a while (I think?) but then shifted to identifying as plant-based and eating chicken. Some of them were vegan more for health reasons than for animal welfare, and for some I know health concerns were why they weren't strictly vegan. So I think this is mostly a debate among highly online/highly engaged vegans, while a lot of people have already gone ahead and adopted looser standards for veganism.
To be fair, I do think there's also a significant chance of a larger bubble that affects the big AI companies. But my instinct is that a sudden fall in investment in small startups, with many of them going bankrupt, would get called a bubble popping in the media, and that the investment they lose wouldn't necessarily flow to the big companies instead.
> What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?
I'm not exactly sure about the operationalization of this question, but it seems like there's a bubble among small AI startups at the very least; the big players might be unaffected, however. My evidence for this is some mix of: not seeing a revenue pathway for many of these companies that wouldn't require a major pivot, there being few barriers to entry for larger players if a startup's product becomes successful, and having met a few people who work at AI startups who claim to be optimistic about earnings but can't really back that up.
This is too tangential to the forecasting discussion to justify being a comment there, so I'm putting it here:
Forecasting makes no sense as a cause area, because cause areas are problems: "people lack resources/basic healthcare/etc.", or "we might be building superintelligent AI and we have no idea what we're doing". Forecasting is more like a tool. People use it to address AI, global poverty, and all sorts of other problems, including ones that aren't major EA focuses.
For instance, we could treat vaccines as a cause area: funding for AI-x-biosecurity work, GAVI campaigns for existing vaccines, and work on bird flu vaccines would all be treated as if they were doing the same thing, and we could then argue about whether "vaccines" meet the funding bar. But that would be a pretty pointless argument, since all those projects are really trying to do different things with similar tools.
So I'd rather judge the AI forecasting by AI standards, the general-purpose forecasting by metascience standards, and the global development forecasting by global development standards, rather than lumping them together as a single entity. That said, I do side with the view that there's too much money and enthusiasm going into forecasting, but it's a weakly held view, and it doesn't mean that no forecasting project is worth funding, or even that they're all equally overfunded.