Caleb_Maresca

PhD student in economics @ New York University
61 karma · Joined Dec 2019 · Pursuing a doctoral degree (e.g. PhD)


  • However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyways)

My future profits aren't very relevant if I'm dead, but I might still care about them even if I'm super rich. Sure, my marginal utility will be very low, but on the other hand the profit from my investments will be very large. Even if everyone is stupendously rich by today's standards, there might be a tangible difference between having a trillion dollars in your bank account and having a quadrillion dollars in your bank account. Maybe I want my own galaxy in which I alone have the rights to build Dyson spheres, and that is out of the price range of the average Joe with a trillion-dollar net worth. Or maybe (and this might be more salient to the typical investor, who isn't actively thinking about far-out sci-fi scenarios) I want the prestige, political control, etc., that come with being wealthy relative to everyone else.
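A toy illustration of why satiation need not kill this motive (my own example, assuming log utility, which nothing in the original post specifies):

$$u(w) = \ln w \;\Rightarrow\; u(10^{15}) - u(10^{12}) = \ln\frac{10^{15}}{10^{12}} = 3\ln 10 \approx 6.9$$

Under log utility, multiplying wealth by 1,000 yields the same utility gain at every wealth level, so the step from a trillion to a quadrillion is worth as much as the step from $1,000 to $1,000,000 today.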

 

A bet that interest rates will rise is not a bet on short AI timelines. Rather, it is a bet that:

  1. Most consumers will correctly perceive that AI timelines are short, and
  2. Most consumers will realize this long enough before TAI that there is enough time to benefit from profitable bets made now, and
  3. Most consumers will believe that transformative AI will significantly reduce the marginal utility they get from their savings - and not, say, increase the marginal value of saving (because they could lose their jobs without sharing in the newfound prosperity from AI).

I believe this is almost correct. My objection is to the second point: that interest rates will rise long enough before TAI for bets made now to pay off. This is possible, but we have no particular reason to believe it will happen unless very many people decide to reduce their savings rates well in advance. At that point, this is no longer a bet on short AI timelines, but rather a bet on whether the typical consumer will realize that AI timelines are short sufficiently long before TAI that you have time to enjoy your profits.

If there are benefits to being even richer after TAI, interest rates could rise before consumers begin adjusting their savings rates, through backward induction. Suppose I know that consumers will adjust their savings rate one day before TAI (assuming, for simplicity's sake, a deterministic timeline in which TAI arrives as one discontinuous jump, and unrealistically fast adjustment of savings rates). Then I should bet on the interest rate rising (e.g. by shorting government bonds) two days before TAI. If enough investors take this action, interest rates will rise two days before TAI. Knowing this, I should short government bonds three days before TAI, and so on. This is similar to how, if the government promises to print a lot of money in one month, inflation begins to rise immediately.
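A minimal sketch of this unraveling logic (a toy model of my own; the TAI date and the one-day anticipation step are arbitrary assumptions):

```python
# Toy backward-induction model of bond repricing ahead of a known TAI date.
# Assumptions (mine, for illustration): TAI arrives on day T with certainty,
# consumers cut their savings rate on day T - 1, and each round of investors
# anticipates the previous round by exactly one day.

T = 100  # hypothetical TAI date

repricing_day = T - 1  # consumers adjust savings the day before TAI
rounds = 0
while repricing_day > 0:
    # Investors who foresee rates rising on `repricing_day` short bonds the
    # day before, which moves the rate rise back by one day.
    repricing_day -= 1
    rounds += 1

print(f"After {rounds} rounds of anticipation, rates rise on day {repricing_day}.")
# -> rates rise on day 0, i.e. immediately
```

The loop collapses to "rates rise today", which is the same unraveling that makes a pre-announced future increase in the money supply show up in prices immediately.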

I am not aware of any international treaty that sanctions the use of force against a non-signatory nation, except in circumstances where a signatory nation is first attacked by a non-signatory (e.g. collective defense agreements such as NATO). Your counterexample of the Israeli airstrike on the Osirak reactor is not a precedent: it was not a lawful use of force under international law and was not sanctioned by any treaty. I agree that the Israeli government made the right decision in carrying out the attack, but it is important to point out the differences between that and what you are suggesting.

Ultimately, quibbling about whether your suggestion is an "act of violence" misses the point. What you suggest would be an unprecedented sanctioning of force. I believe the introduction of such an agreement would be highly incendiary and would set a bad precedent. Note that no such agreement was signed to prevent nuclear proliferation. Many experts were very worried that nuclear weapons would proliferate much further than they ultimately did. Force was sometimes used, but always with a lighter hand than "let's sign a treaty to bomb anyone we think has a reactor."

My argument doesn't hang on whether an X-risk occurs during my PhD. If AGI is 10 years away, it's questionable whether investing half of that remaining time in completing a PhD is optimal.

I think that when discussing career longtermism we should keep the possibility of short AGI timelines in mind (or the possibility of some non-AI existential catastrophe occurring in the short term). By the time we transition from learning and building career capital to trying to impact the world, it might be too late to make a difference: an existential catastrophe may already have occurred, or AGI may so outclass us that all the time spent building career capital was wasted.

For example, I am in my first year of an economics PhD. Social impact through academia is very slow, and I worry that by the time I am able to create any impact through my research, it might be too late. I chose this path because I believe it will give me valuable and broadly robust skills that I can apply to impactful research. But now I wonder if I should have pursued a more direct and urgent way of contributing to the long-term future.

Many EAs, like me, have chosen paths in academia, which has a particularly long impact trajectory and is therefore especially vulnerable to short timelines.

PS: I recently switched to the Microsoft Edge web browser and was curious to see whether the Bing AI could help me write this comment. The final product is a heavily edited version of the output it gave after multiple prompt attempts. Was it faster or better than just writing the entire comment myself? Probably not.

I don't have an answer as to which countries would be more receptive to the idea; definitely don't try here in Israel!

I am, however, interested in the claimed effectiveness of open borders. Do these estimates take into account the potential backlash or political instability that a large number of immigrants could cause? I understand that, in theory, closed borders are economically inefficient and entrench inequality, but I fear that open borders could cause significant political problems and backlash. Even if we consider this backlash unjustified or immoral, we need to take it into account when thinking about the effects of this policy. Am I unjustified in thinking that significant negative political effects are possible?

I agree that the urban/rural divide, as opposed to clear-cut geographic boundaries, is not a strong reason to discount the possibility of civil war. However, there are other reasons to think that civil war is unlikely.

This highly cited article provides evidence that the main causal factors behind civil wars are what the authors call conditions that favor insurgency, rather than ethnic factors, discrimination, and grievances (such as economic inequality). The argument is that even when grievances give people reason to start a civil war, the war cannot get off the ground unless the right conditions are in place. A major caveat is that the article does not measure political polarization, so it does not rule that out as a significant factor.

The conditions in America do not favor insurgency. America has enormous military, intelligence, and surveillance resources it can use to counter an insurgency, and there are few underdeveloped regions where insurgents could hide.

Thanks for your input. Option value struck me as a subject that is not only relevant to EA but also has not disseminated effectively from the academic literature to a wider audience. It's very hard to find concrete information on option value outside the literature; the Wikipedia article on the subject, for example, is a garbled mess.
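Since concrete examples are hard to find, here is a minimal two-period sketch of quasi-option value in the Arrow–Fisher spirit (the numbers are my own and purely illustrative):

```python
# Toy two-period illustration of (quasi-)option value, with made-up numbers.
# Period-2 payoff to preservation is uncertain: 30 or 0 with equal probability.
# Development is irreversible and always pays 10 in period 2.

p_high = 0.5
preserve = {"high": 30.0, "low": 0.0}
develop = 10.0

# Committing now: choose the action with the higher *expected* payoff.
expected_preserve = p_high * preserve["high"] + (1 - p_high) * preserve["low"]
commit_value = max(expected_preserve, develop)  # max of expectations

# Waiting: observe the state first, then choose the better action in each state.
flexible_value = (
    p_high * max(preserve["high"], develop)
    + (1 - p_high) * max(preserve["low"], develop)
)  # expectation of the max

print(f"commit: {commit_value}, flexible: {flexible_value}, "
      f"option value: {flexible_value - commit_value}")
# -> commit: 15.0, flexible: 20.0, option value: 5.0
```

The gap between the "expectation of the max" and the "max of the expectations" is exactly the value of keeping the irreversible choice open until the uncertainty resolves.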

Hi Viadehi, I'm part of the new research group at EA Israel. For me, personal fit and building career capital are the main reasons I want to take part. I don't think the research I do now will save the world, but hopefully it will help me build relevant skills and knowledge and develop a passion for research.