All of CalebMaresca's Comments + Replies

Hi Jason, thanks for this. I was not aware of this review article. There is a new review article that came out this year, which concludes that there is insufficient evidentiary basis for harm from high protein intake. In particular, it seems like some of the results of previous studies may have been confounded with calorie intake.

I'm not a nutritionist or an exercise scientist, so I could be interpreting this incorrectly, but I think you are overly dismissive of the idea that people should be eating more protein.

The guideline's recommendation of 0.7 to 1 gram per kilogram daily represents the minimum intake needed to prevent malnutrition and maintain nitrogen balance; it is insufficient for optimal muscle growth when combined with strength training.[1] For people who are trying to increase their muscle mass, Huberman's suggestion is accurate and helpful.[2] Since muscle m... (read more)

7
Jason
This analysis doesn't account for the potential downsides of excess protein consumption -- cf., based on a quick non-AI search, the discussion here for a specific risk, and this older review article for a broader discussion. I don't claim to be qualified to balance those potential tradeoffs against the potential advantages, but I think they should be acknowledged.

Now, this isn’t the best way to maximize expected value. If you were an expected value maximizing robot, you would not pursue this strategy. You would say “bleep bloop, this brings about 93.5 fewer expected utils than the other strategy.” But I assume you are not an EV maximizing robot.

 

Hmmm. This is interesting, as diversification is expected utility maximizing in the finance context. The fact that it is not EV maximizing in the utilitarian framework makes me wonder if there is something wrong with the framing.

The obvious difference is that EV is ris... (read more)

2
Bentham's Bulldog
Interesting point, though I disagree--I think there are strong arguments for thinking that you should just maximize utility https://joecarlsmith.com/2022/03/16/on-expected-utility-part-1-skyscrapers-and-madmen/
4
David T
Feels like the most straightforwardly rational argument for portfolio diversification is the assumption that your EV and probability estimates almost certainly aren't the accurate (or at least unbiased) estimators they need to be for the optimal strategy to be sticking everything on the highest-EV outcome. Even more so when the probability that a given EV estimate is accurate is unlikely to be uncorrelated with whether it scores particularly highly (the good old optimiser's curse, with a dose of wishful thinking thrown in).

Financiers don't trust themselves to be perfectly impartial about stuff like commodity prices in central Asia or binary bets on the value of the yen on Thursday, and it seems unlikely that people who are extremely passionate about the causes they and their friends participate in, ahead of a vast range of other causes that nominally claim to do good, achieve a greater level of impartiality.

Pascalian odds seem particularly unlikely to be representative of the true best option (in plain English, a 0.0001% subjective probability assessment of a one-shot event is roughly "I don't really know what the outcome of this will be, and it seems like there could be many, many things more likely to achieve the same end"). You can make the assumption that if they appear to be robustly positive and neglected they might deserve funding anyway, but that is a portfolio argument...
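The optimiser's curse mentioned here can be illustrated with a quick simulation (all numbers hypothetical): give several options identical true expected value, add estimation noise, and always pick the option with the highest estimate. The chosen option's estimate is systematically biased upward even though it is, in truth, no better than the rest.

```python
import random

random.seed(0)

TRUE_EV = 1.0      # every option actually has the same expected value
NOISE = 0.5        # standard deviation of the estimation error
N_OPTIONS = 10
N_TRIALS = 10_000

overestimate = 0.0
for _ in range(N_TRIALS):
    # Noisy EV estimates, one per option
    estimates = [random.gauss(TRUE_EV, NOISE) for _ in range(N_OPTIONS)]
    # The optimiser's curse: the option we pick (highest estimate)
    # has true EV 1.0, yet its estimate exceeds that on average
    overestimate += max(estimates) - TRUE_EV

print(f"average overestimate of the chosen option: {overestimate / N_TRIALS:.2f}")
```

With ten options and this noise level, the winning estimate overstates its true value by roughly three-quarters of a standard deviation on average, which is the sense in which a portfolio hedges against one's own estimation error.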

Thanks for this Phil,

A couple of questions regarding SWE:

“Second-wave endogenous” (I’ll write “SWE”) growth models posit instead that technology grows exponentially with a constant or with a growing population. The idea is that process efficiency—the quantity of a given good producible with given labor and/or capital inputs—grows exponentially with constant research effort, as in a first-wave endogenous model; but when population grows, we develop more goods, leaving research effort per good fixed.

 

Only one SWE model avoids a conclusion along these li

... (read more)
5
trammell
In Young's case the exponent on ideas is one, and progress looks like log(log(researchers)). (You need to pay a fixed cost to make the good at all in a given period, so only if you go above that do you make positive progress.) See Section 2.2. Peretto (2018) and Massari and Peretto (2025) have SWE models that I think do successfully avoid the knife-edge issue (or "linearity critique"), but at the cost of, in some sense, digging the hole deeper when it comes to the excess variety issue.

I'm imagining something that is Cobb-Douglas between capital and land. Growth should be exponential (not super exponential) when A_auto is growing at a constant rate, same as a regular Cobb-Douglas production function between capital and labor. Specifically, I was thinking something like this:

Y = X_old^beta (A_old K_old^alpha L^(1-alpha))^(1-beta) + X_auto^beta (A_auto K_auto)^(1-beta)

s.t. X_old + X_auto = X_total (allocating land between the two production technologies)

 

As to your second point, yes, you are correct, as long as A_old is constant wages would not increase.
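To sanity-check the claim that growth is exponential (not super-exponential) when A_auto grows at a constant rate, here is a minimal numeric sketch of the production function above; all parameter values are illustrative, not taken from any particular calibration.

```python
def output(x_old, x_auto, *, a_old=1.0, a_auto=1.0,
           k_old=1.0, k_auto=1.0, labor=1.0,
           alpha=0.3, beta=0.2):
    """Total output with land split between the two technologies.

    Y = X_old^beta * (A_old * K_old^alpha * L^(1-alpha))^(1-beta)
      + X_auto^beta * (A_auto * K_auto)^(1-beta)
    """
    old = x_old ** beta * (a_old * k_old ** alpha * labor ** (1 - alpha)) ** (1 - beta)
    auto = x_auto ** beta * (a_auto * k_auto) ** (1 - beta)
    return old + auto

# Hold the land allocation fixed and let A_auto grow 5% per period:
# output from the automated sector grows at the constant rate
# (1 - beta) * 5%, i.e. exponentially, because land is a fixed factor.
for t in (0, 10, 20):
    print(t, round(output(0.5, 0.5, a_auto=1.05 ** t), 3))
```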

1
Casey Barkan
Ah yes that makes sense that growth will be exponential if A_auto has a fixed growth rate. Thanks!

Diverging utilities can be an issue. You can also get infinite output in finite time. The larger issue is that the economy has no steady state. In economic growth models, a steady state (or balanced growth path (BGP)) represents a long-run equilibrium where key economic variables per capita (like capital per worker, output per worker, consumption per worker) grow at constant rates. This greatly simplifies the analysis.

For example, I have a paper in which I analyze how households would behave if they expected TAI to transform the economy. To do this, I calc... (read more)
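The diverging-utilities issue mentioned above can be seen with a quick numerical check (illustrative parameters, CRRA utility with gamma < 1): discounted utility stays finite when consumption grows exponentially at a rate below the effective discount rate, but the partial sums keep blowing up when consumption grows super-exponentially.

```python
from math import exp

def discounted_utility(consumption_path, rho=0.03, gamma=0.5):
    """Sum of e^{-rho * t} * c_t^{1-gamma} / (1-gamma) over the path."""
    return sum(exp(-rho * t) * c ** (1 - gamma) / (1 - gamma)
               for t, c in enumerate(consumption_path))

T = 300
exponential = [1.01 ** t for t in range(T)]        # steady 1% growth
superexp = [1.01 ** (t ** 1.5) for t in range(T)]  # super-exponential growth

print(discounted_utility(exponential))  # converges to a finite value
print(discounted_utility(superexp))     # partial sums keep growing with T
```

In the first case the per-period terms shrink geometrically, so the sum converges; in the second, late terms dominate and the truncated sum grows without bound as the horizon lengthens, which is the tractability problem with no steady state.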

1
Casey Barkan
Ah I didn't realize that a balanced growth path is important for analytical tractability, thanks for that insight. And yes I'm reading your paper (haven't gone thoroughly through the model yet though), really interesting! That's a great idea to add land to the production function to bring back diminishing returns to capital. I am hoping to extend the model to include capital accumulation, and I'll think about including land when I do that. I suppose this will lead to a balanced growth path if A_auto is constant, but if A_auto is growing then growth could still be super exponential. (Regarding whether wages will eventually rise, I actually think wages would remain at zero in this model with capital accumulation and without land, because the marginal product of labor with the old tech will remain below the reservation wage).

Thanks for this. If I understand correctly, the result is primarily driven by the elastic labor supply, which is a function of W and not of R, and the constant supply of capital. This seems most relevant for very fast takeoff scenarios.

My intuition is that as people realize that their jobs are being automated away, they will want to work more to bolster their savings before we move into the new regime where their labor is worthless and capital is all that matters. This would require fully modeling the household's intertemporal utility and endogenizing the ... (read more)

3
Casey Barkan
Hi Caleb, thanks for your comment! That’s a great point that workers’ desire for savings prior to their jobs being automated may drive people to work more (similarly, people may work more as wages drop in order to maintain a subsistence level of consumption). On the other hand, if wages drop so low that workers can’t subsist, then labor supplied will drop regardless. When you say it would be tricky to model savings with my production function, is the reason that superexponential growth with exponential time discounting in the utility function will lead to diverging utilities? That’s an interesting point. Do you know if there has been work on more realistic utility functions to address this issue?

Thanks for this excellent primer and case study! I learned a lot about causal analysis from your explanation. The section on using three waves to control for confounders while avoiding controlling for potential mediators was particularly helpful. I would be interested in hearing more about how the sensitivity analysis for unmeasured confounders works.

The positive effect of activism on meat consumption that you found is especially concerning and important. I hope that we can gain more insight into this soon. If this finding replicates, then a lot of organizations might have to reevaluate their methods.

3
MMathur🔸
Thanks, Caleb – here's the technical paper detailing the methods Jared used in the re-analysis, and here's a more accessible paper introducing some closely related sensitivity analyses. The second is paywalled, but if you'd like, I can send you a copy if you message me with your email address. You can play with the latter method with this online calculator.
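For readers curious how a sensitivity analysis for unmeasured confounding works in outline, one widely used quantity is the E-value (this is an illustration of the general idea, not necessarily the exact method used in the re-analysis): the minimum strength of association an unmeasured confounder would need with both exposure and outcome to fully explain away an observed risk ratio.

```python
from math import sqrt

def e_value(rr):
    """E-value for an observed risk ratio (VanderWeele & Ding, 2017).

    Returns the minimum risk ratio an unmeasured confounder would need
    to have with both the exposure and the outcome to fully explain
    away the observed association.
    """
    if rr < 1:          # for protective effects, invert the ratio first
        rr = 1 / rr
    return rr + sqrt(rr * (rr - 1))

print(e_value(1.5))  # a modest observed effect is easier to explain away
print(e_value(3.0))  # a strong effect needs a much stronger confounder
```

An observed RR of 1.5 yields an E-value of about 2.37, while an RR of 3.0 yields about 5.45, so the larger the observed effect, the more implausible it is that confounding alone produced it.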

Hi Matthew,

Thank you for your comment. I think this is a reasonable criticism! There is definitely an endogenous link between investment and AI timelines that this model misses. I think that this might be hard to model in a realistic way, but I encourage people to try!

On the other hand, I think the strategic motivation is important as well. For example, here is Satya Nadella on the Dwarkesh Podcast:

And by the way, one of the things is that there will be overbuild. To your point about what happened in the dotcom era, the memo has gone out that, hey, you kno

... (read more)

I don't think that the possible outcomes of AGI/superintelligence are necessarily so binary. For example, I am concerned that AI could displace almost all human labor, making traditional capital more important as human capital becomes almost worthless. This could exacerbate wealth inequality and significantly decrease economic mobility, making post-AGI wealth mostly a function of how much wealth you had pre-AGI.

In this scenario, saving more now would enable you to have more capital while returns to capital are increasing. At the same time, there could be b... (read more)

Why would Knightian uncertainty be an argument against AI as an existential risk? If anything, our deep uncertainty about the possible outcomes of AI should lead us to be even more careful.

1
astaroth
Not sure what the author's argument is, but here's my interpretation: AI risk being a Knightian uncertainty is an argument against assigning P(doom) to it.

Similar campaigns have worked really well for animal advocacy, so I’m excited to see what you can accomplish.

I’m wondering, what kinds of tasks can volunteers help with? If I have no social media accounts or experience trying to promote causes on social media is there anything I can do?

3
Tyler Johnston
Thank you! You’re right that the main tasks are digital advocacy - but even if you’re not on social media, there are some direct outreach tasks that involve emailing and calling specific stakeholders. We have one task like that live on our action hub now, and will be adding more soon. Outside of that, we could use all sorts of general volunteer support - anything from campaign recruitment to writing content. Also always eager to hear advice on strategy. Would love to chat more if you’re interested.
  • However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyways)

My future profits aren't very relevant if I'm dead, but I might still care about them even if I'm super rich. Sure, my marginal utility will be very low, but on the other hand the profit from my investments will be very large. Even if everyone is stupendously rich by today's terms, there might be a tangible difference between having a trillion dollars in your bank account and having a quadrillion dollars in your bank account. May... (read more)

2
Marcel2
Are you assuming this holds true even in some scenario where a single company or government has total, decisive control over the future of civilization? Will the entity in power really still prioritize such exchanges if they could plausibly just take it directly from you (since they are not accountable to a higher power)? Or are you assuming that such a Singleton is unlikely to exist? (Or is the focus on the possibility that such a Singleton does not exist?)
1
Jakob
I agree that the marginal value of money won't be literally zero after TAI (in the growth scenario; if we're all dead, then it is exactly zero). But (if we still assume those two TAI scenarios are the only possible ones), on a per-dollar basis it will be much lower than today, which will massively skew the incentives for traders - in the face of uncertainty, they would need overwhelming evidence before making trades that pay off only after TAI.

And importantly, if you disagree with this and believe the marginal utility of money won't change radically, then that further undermines the point made in the original post, since their entire argument relies on the change in marginal utility - you can't have it both ways! (Why would you posit that consumers change their savings rate when there are still benefits from being richer?)

Still, I see your point that even in such a world, there's a difference between being a trillionaire and a quadrillionaire. If there are quadrillion-dollar profits to be made, then yes, you will get those chains of backwards induction up and working again. But I find that scenario very implausible, so in reality I don't think this is an important consideration.
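The trillionaire-versus-quadrillionaire disagreement turns on the curvature of utility, which a small CRRA sketch makes concrete (illustrative parameters): with log utility the jump from $1T to $1Q is worth as much as the jump from $1 to $1,000, while with stronger curvature the same jump is worth almost nothing.

```python
from math import log

def crra_utility(wealth, gamma):
    """CRRA utility of wealth; gamma = 1 is the log-utility case."""
    if gamma == 1:
        return log(wealth)
    return wealth ** (1 - gamma) / (1 - gamma)

trillion, quadrillion = 1e12, 1e15

# Log utility: the gap is log(1000), the same as between $1 and $1,000
log_gap = crra_utility(quadrillion, 1) - crra_utility(trillion, 1)

# gamma = 2: utility is -1/w, so the gap is roughly 1e-12 -- negligible
crra2_gap = crra_utility(quadrillion, 2) - crra_utility(trillion, 2)

print(log_gap)    # about 6.9
print(crra2_gap)  # about 1e-12
```

So whether post-TAI profits still motivate trades depends heavily on the assumed risk-aversion parameter, which is exactly the "you can't have it both ways" tension above.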

I am not aware of any international treaties which sanction the use of force against a non-signatory nation except for those circumstances under which one of the signatory nations is first attacked by a non-signatory nation (e.g. collective defense agreements such as NATO). Your counterexample of the Israeli airstrike on the Osirak reactor is not a precedent as it was not a lawful use of force according to international law and was not sanctioned by any treaty. I agree that the Israeli government made the right decision in orchestrating the attack, but it ... (read more)

My argument doesn't hang on whether an X-risk occurs during my PhD. If AGI is 10 years away, it's questionable whether investing half of that remaining time into completing a PhD is optimal.

I think that when discussing career longtermism we should keep the possibility of short AGI timelines in consideration (or the possibility of some non-AI-related existential catastrophe occurring in the short term). By the time we transition from learning and building career capital to trying to impact the world, it might be too late to make a difference. Maybe an existential catastrophe will already have occurred, or AGI will so outclass us that all of that time building career capital was wasted.

For example, I am in my first year of an economics ... (read more)

1
Tilly P
Thanks for your comment, and that's a fair point/critique - I agree about impact through academia being slow. However, at this stage it's pretty difficult to plan for what jobs you should be training for if AI replaces your current role, so it still makes sense to do something that broadly expands your career capital as you state, whether this is a PhD or something else.

I would have thought the likelihood of an X-risk happening within the time you do your PhD is probably quite small, but I'll leave the quantification to the experts! AI is probably least likely to impact some more practical and non-academic roles, so this could be an argument for gaining career capital outside of the knowledge sector (e.g. see this Times article: bit.ly/3M8Utpr).

I didn't know the Bing AI had been rolled out yet - I'll have to give that a try. I'm curious how it will develop over time, and how quickly - and whether it will make my new job quicker and/or ultimately replace me or some of the workforce.

I don't have an answer to which countries would be more receptive to the idea, definitely don't try here in Israel!


I am however interested in the claimed effectiveness of open borders. Do these estimates take into account potential backlash or political instability that a large number of immigrants could cause? I understand that theoretically, closed borders are economically inefficient and solidify inequality, but I fear that open borders could cause significant political problems and backlash. Even if we were to consider this backlash to be unj... (read more)

3
Eevee🔹
I think these concerns are valid. The website Open Borders: The Case addresses many of the main arguments against open borders, including the possibility of nativist backlash to increased immigration. "Nativist backlash" refers to the hypothesis that a country opening its borders to all immigration would cause a significant portion of current residents to subsequently turn against immigration. The problem with this claim is that the probability of backlash depends on how a country adopts open borders in the first place. Nathan Smith writes:

I agree that the urban/rural divide, as opposed to clear-cut boundaries, is not a significant reason to discredit the possibility of civil war; however, there are other reasons to think that civil war is unlikely.


This highly cited article provides evidence that the main causal factors of civil wars are what the authors call conditions that favor insurgency, rather than ethnic factors, discrimination, and grievances (such as economic inequality). The argument is that even in the face of grievances that cause people to start a civil war if the right condition... (read more)

Thanks for your input. Option value struck me as a subject that is not only relevant to EA, but also has not disseminated effectively from the academic literature to a larger audience. It’s very hard to find concrete information on option value outside of the literature. For example, the Wikipedia article on the subject is a garbled mess.

Hi Viadehi, I'm part of the new research group at EA Israel. For me personal fit and building career capital are the main reasons why I want to take part. I don't think that research I do now will save the world, but hopefully it will help me build relevant skills and knowledge and develop a passion for research.