Jonas loves his wife, being in nature, and exploring interesting worlds both fictional and real. He uses his bamboo bike daily to get around Munich. He's currently a freelance software engineer and previously worked at the Against Malaria Foundation and Google. Jonas enjoys playing Ultimate and dancing.
How to read the economic data seems to depend on one's point of view. I'm no economist, and I certainly can't prove to you that AI is having an economic impact. Its use is growing quickly, though: Statistics on AI market size
It's also important, I think, to distinguish between AI capabilities and AI use. The AI-2027 text argues that a select few AI capabilities matter most, namely those related to software and AI engineering. These will drive the recursive improvements; changes to other parts of the industry are downstream of that. Both our viewpoints seem consistent with this model: I see rapidly increasing capabilities in software, and you see that other fields have not been affected as much yet.
I'll finish with yet another anecdote, because it happened just yesterday. I was on a mountain hike with my nephew (11 years old). He proudly told me about a difficult math problem they had in school: "I was one of the few that could solve it without ChatGPT".
It's an anecdote, of course. At the same time, the effects of AI on education seem to be large, and changes in education will probably lead to changes in the industry.
Why do you think that? Personally, I've lost several bets. For example, I bet NO on "Will an AI win a gold medal on the IOI (competitive programming contest) before 2027?" and already lost it, 20 months before the start of 2027.
As a former IOI participant, that achievement feels amazing. As a software engineer, I absolutely find AI tools useful, practical, and economically valuable.
This was a valuable read for me. Thanks!
I share some of your skepticism. At the same time, I think the argument relies on reasons that are quite speculative, such as:
I can't shake the feeling that this type of argument has often aged poorly when it comes to AI. I've certainly been baffled many times by AI solving tasks that I had predicted would be very hard.
In contrast, texts like "Preparing for the Intelligence Explosion" and ai-2027.com essentially assume that some trends continue for a few more years. While that also relies on many assumptions, and the results sound like Sci-Fi, it seems to carry a lower burden of proof. Or at least, I find it more compelling.
Reflecting on this, I think a big difference between me and others is that I work as a software engineer. In this field, AI progress feels very visceral. I'm one of the people who shifted from writing 95% of my code myself to having >50% of it AI-generated, within a year or so. I'm doing a lot more code reviews these days, and I spend time thinking about how tasks could be split up and partially solved by agents. There's a keen sense that this is only the beginning. New tools and significantly better workflows arrive every few weeks.
I agree that it will take time for advances to reach other fields. Many of my non-software-engineer friends hardly use AI today. That said, advances in AI-powered software engineering might be all that's needed for AIs to improve at breakneck pace.
Thanks for the response!
I understand that you are worried about chicken and fish consumption. I have no knowledge of why these charts look the way they do, or why people in the UK consume twice as much chicken as those in Germany. It's also difficult to guess the impact of Veganuary on these trends. As such, I find the charts a bit distracting.
What I intended to say with my comment is that Veganuary has clearly visible impacts around me: when I go shopping, when I see ads, when I eat out. This seems to correlate with a general trend of seeing more vegan products, brands, and menu choices. Maybe the general trend I identified is just as distracting as your chicken and fish charts... yet it does seem to be something that Veganuary directly works on and influences.
I suspect that you brought up the chicken and fish charts because you worry about shifts in consumption from larger animals to higher numbers of small animals. This is a real possibility, but I would be wary of accusing Veganuary of causing such a shift without good evidence. I grant that Veganuary tries to appeal to a broad range of people with various reasons for reducing meat consumption, including climate reasons, which might cause a shift away from ruminants. But I recall there was a lot of Veganuary content around animal welfare. Personally, Veganuary shifted my views to care more about animals.
Animal welfare seems to be the main participant motivation. Here's a figure from the 2023 survey report:
Taking a step back, it's a little sad that this article feels so hostile towards Veganuary, and shows Veganuary in a bad light primarily because of discounts and back-of-the-envelope numbers that seem quite arbitrary. I see a lot less competition than you do between Veganuary and work on shrimp welfare or cage-free campaigns. On the contrary, people who have participated in Veganuary are likely more receptive to that type of work, and this is a benefit that we won't find in CEAs ;-)
It's great to try and analyze the cost-effectiveness of Veganuary. I'm thankful for this post and also for the responses by @Toni Vernelli and others.
While I appreciate the effort, I find it hard to agree with Vasco's conclusions. Many of the discounts in the analysis feel pretty arbitrary to me. Toni has responded to this much better than I could; I'd just like to share a few personal impressions. These are of course biased, but they might explain why I'm suspicious of the many downward adjustments (and lack of upward adjustments) in Vasco's analysis:
Overall, there seems to be a clear trend in Germany toward more vegan products. Oat milk shelves are larger than cow milk shelves in many retailers nowadays; there are many meat alternatives; and vegan products are also becoming popular in other areas such as chocolate and baked goods. It's difficult to isolate the effect Veganuary has had in all this... but I'd be surprised if it was as small as Vasco estimates.
EA charities can also combine education and global health, like https://healthlearn.org/blog/updated-impact-model
HealthLearn builds a mobile app for health workers (nurses, midwives, doctors, community health workers) in Nigeria and Uganda. Health workers use it to learn clinical best practices. This leads to better outcomes for patients.
I'm personally very excited by this. Health workers in developing countries often have few training resources available. There are several clinical practices that can improve patient outcomes while being easy to implement (such as initiating breastfeeding immediately after birth). These are not as widely used as we would like.
HealthLearn uses technology to faithfully scale the intervention to thousands of health workers. At this point, AI does not yet play a significant role in the learning process; courses are designed manually. This was important for getting started quickly, and also for getting approval from government health agencies and professional organizations such as nursing councils.
The impact model that I've linked to above estimates that the approach has been cost-effective so far, and could become even more so with scale.
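The scale effect presumably comes down to fixed versus marginal costs: course development and approvals are paid once, while serving one more health worker through the app costs very little. Here's a toy sketch with made-up numbers (the actual figures are in the linked impact model):

```python
# Toy sketch: why a software intervention can get cheaper per user
# with scale. All numbers here are made up for illustration.

fixed_costs = 500_000   # hypothetical one-time costs: course design, approvals
marginal_cost = 2       # hypothetical cost per additional health worker served

for users in (1_000, 10_000, 100_000):
    cost_per_user = (fixed_costs + marginal_cost * users) / users
    print(f"{users:>7,} users -> {cost_per_user:>8,.2f} per health worker")
```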
(disclaimer: I'm one of the software engineers building the app)
Personally, I'm not using the forum as much as I could and as much as I used to, because it is a time-sink. I'm the kind of person who can easily get lost on the Internet; clicking a link here and opening another tab there, and... look where those two hours went. Because of this, I'm wary of spending too much time here.
I don't know whether my declining forum use is due to changes in my behavior or changes to the forum. Probably it's a combination. On the forum side, the home page feels a bit more cluttered than it used to be. The forum feels slightly more gamified (e.g., emoji reactions).
I don't have concrete suggestions, other than thinking about what the ideal amount of time for users to spend on the forum would be: one that takes both the forum's quality and its users' productivity into account.
OP here :) Thanks for the interesting discussion that the two of you have had!
Lukas_Gloor, I think we agree on most points. Your example of estimating a low probability of a medical emergency is great! And I reckon that you are communicating appropriately about it. You're probably telling your doctor something like "we came because we couldn't rule out complication X" and not "we came because X has a probability of 2%" ;-)
You also seem to be well aware of the uncertainty. Your situation does not feel like one where you went to the ER 50 times, were sent home 49 times, and from this developed good calibration. It looks more like a situation where you know about danger signs that could be caused by emergencies, and have some rules like "if we see A and B and not C, we need to go to the ER".[1]
Your situation and my post both involve low probabilities in high-stakes situations. That said, the goal of my post is to remind people that this type of probability is often uncertain, and that they should communicate this with the appropriate humility.
Richard Chappell writes something similar here, better than I could. Thanks Lizka for linking to that post!
Pascalian probabilities are instead (I propose) ones that lack robust epistemic support. They're more or less made up, and could easily be "off" by many, many orders of magnitude. Per Holden Karnofsky's argument in 'Why we can't take explicit expected value estimates literally', Bayesian adjustments would plausibly mandate massively discounting these non-robust initial estimates (roughly in proportion to their claims to massive impact), leading to low adjusted expected value after all.
Maybe I should have titled this post differently, for example "Beware of non-robust probability estimates multiplied by large numbers".
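To make concrete how much a non-robust estimate can swing a conclusion, here is a toy calculation. Every number in it is invented for illustration:

```python
# Toy illustration of a non-robust probability multiplied by a large number.
# Every figure below is invented for illustration.

payoff = 1e12          # assumed value of the outcome, arbitrary units
p_point = 1e-9         # a made-up point estimate of the probability

print(f"naive expected value: {p_point * payoff:,.0f}")

# If the estimate could plausibly be off by three orders of magnitude
# in either direction, the expected value spans a factor of a million:
for p in (1e-12, 1e-9, 1e-6):
    print(f"p = {p:.0e} -> EV = {p * payoff:,.0f}")
```

Which end of that range the true probability sits at determines the conclusion almost entirely, which is exactly why a single multiplied-out number deserves humility.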
Thanks! This sounds like good advice.
I have two related thoughts that I would love to hear your opinion on:
There seems to be quite a large opportunity cost. Instead of investing, you could spend the money on effective causes now, or take a lower-paying job now rather than wait until you've reached some investment goal. Presumably, many effective organizations would benefit from getting money and talent earlier? If you want to maximize your life's impact, would that be a good strategy? (A toy sketch of this trade-off follows after the next point.)
Depending on your AI timelines, money that is locked until retirement is... maybe not lost, but it carries a high risk. I'm personally more motivated to invest money in a way that lets me use or reallocate it if AI massively changes the economy. Do you think that makes sense, or are the tax benefits of pensions more important?
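On the first question, here's a toy give-now-vs-give-later comparison. Both growth rates are placeholders I made up, not claims about actual market or charity returns:

```python
# Toy comparison of "donate now" vs. "invest, then donate later".
# Both growth rates are hypothetical placeholders, not forecasts.

donation = 10_000        # amount available today
years = 10
market_return = 0.05     # assumed annual return if invested
charity_return = 0.08    # assumed annual "social return" if a charity
                         # can put the money to work earlier

invest_then_donate = donation * (1 + market_return) ** years
donate_now_value = donation * (1 + charity_return) ** years

print(f"invest, then donate: {invest_then_donate:,.0f}")
print(f"donate now (compounded social value): {donate_now_value:,.0f}")
# With these made-up rates, donating now wins; swap the rates and
# the conclusion flips. The crux is which rate you believe is higher.
```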