Are there any organizations out there that would describe their niche as advising small/medium-sized donors? I can't think of any, and I'm wondering why not. I'm not exactly sure what organizations that claim to advise large donors actually do, but it seems plausible that some of those services would also be effective for smaller donors, simply because there are so many more of them. I'm thinking of, for instance:
Among the people I've met in real life who call themselves vegans, about a third were actually some form of reducetarian already. One ate dairy and eggs that carried some form of ethical certification, one ate fish (I believe only certain wild-caught species) and honey, and another was a strict vegan for a while (I think?) but then shifted to identifying as plant-based and eating chicken. Some of them were vegan more for health reasons than for animal welfare reasons, and for some I know health concerns were why they weren't strictly vegan. So I think this is mostly a debate for highly online/enfranchised vegans, while a lot of people have already gone ahead and adopted looser standards for veganism.
I do think there's also a significant chance of a larger bubble, to be fair, one affecting the big AI companies. But my instinct is that a sudden fall in investment in small startups, with many of them going bankrupt, would get called a bubble popping in the media, and that that investment wouldn't necessarily just flow to the big companies instead.
What is the probability that the U.S. AI industry (including OpenAI, Anthropic, Microsoft, Google, and others) is in a financial bubble — as determined by multiple reliable sources such as The Wall Street Journal, the Financial Times, or The Economist — that will pop before January 1, 2031?
I'm not exactly sure about the operationalization of this question, but it seems like there's a bubble among small AI startups at the very least. The big players might be unaffected, however. My evidence for this is some mix of: not seeing a revenue pathway for a lot of these companies that wouldn't require a major pivot, few barriers to entry for larger players if a startup's product becomes successful, and having met a few people who work at AI startups who claim to be optimistic about earnings but can't really back that up.
I think you're right that frugality is good, but I'm not sure where you're getting the idea that it isn't discussed at all, though it could perhaps use a bit more discussion on the margin. I also think the main con is that emphasizing it would alienate people who aren't willing to be particularly frugal but will donate some anyway. The personal finance tag has some posts you might be interested in.
This might not fit the idea of a prioritization question, but it seems like there are a lot of "sure bets" in global development, where you can feel highly confident an intervention will be useful, and not that many in AI-related causes (where there's a high chance an intervention either ends up doing nothing or being harmful), with animal welfare somewhere in between. It would be interesting to find projects in global development that look good for risk-tolerant donors, and ones in AI (and maybe animal welfare or other "longtermist" causes) that look good for less risk-tolerant donors.
Not really a criticism of this post specifically, but I've seen a lot of enthusiasm about the idea of some sort of "AI safety+" group of causes, and not much recognition that AI ethicists and others not affiliated with EA have already been thinking about and prioritizing some of these issues (I'm particularly thinking of AI and democracy, but I assume it applies to others). The EA emphases and perspectives within these topics have their differences, but EA didn't invent these ideas from scratch.
I was being purposely kind of vague, but let's say people donating less than $100k a year? Whatever's too small for the organizations that advise large donors.