All of Jakob's Comments + Replies

Also, @Joel Becker, at this point you have called my thinking "pretty tortured" twice (in comments to the original post) and "4D-chess" here. Especially the first phrase seems - at least to me - more like soldier mindset than scout mindset, in that I don't see how you'd make a discussion more truth-seeking, or enlighten anyone, when using words like that.

I try to ask both "what does Joel know that I don't" and "what do I know that Joel doesn't, and how can I help him understand that". This post is my attempt at engaging in that way. In contrast, I don't see...

5
Joel Becker
6mo
Jakob, I sincerely apologize for my unhelpful (or at the very least unenlightening) phrases that have come across as soldier mindset/rude. I was commenting as I would on the unshared google doc of a friend asking for feedback. But perhaps this way of going about things is too curt for a public forum. Again, I'm sorry. (I will probably reply on the substance later; currently too busy. I think there's a decent chance that I will agree with you that, in addition to being rude and craply communicated and coming across as soldier mindset, my previous comments reflected sloppy thinking.)

It seems to me you don’t get the point. The point of the post is that the equilibrium you’re hypothesizing doesn’t really exist. Individuals can only amp up their own consumption by so much, so you need a ton of people partying like it’s the end of the world to move capital markets. And that’s what you’d be betting on - not if the end is near but if everyone will believe it to the degree that they materially shift their saving behavior.

At least, if you only consider the capital supply side argument in the original post, this would be why it would fail. IIR...

Thanks Harrison! Indeed, the "holding the bag" problem is what removes the incentive to "short the world", compared to any other short positions you may wish to take in the market (which also have a timing problem - the market can stay irrational even if you're right - but where there is at least a market mechanism creating incentives for the market to self-correct). The "holding the bag" problem removes this self-correction incentive, so the only way to beat the market is to consume more, and so a few investors won't unilaterally change the market price.

See my response to Carl further up. This follows from accepting the assumptions of the former post. I wanted to show that even with said assumptions, their conclusions don’t follow. But I don’t think the assumptions are realistic either.

1
Jakob
1y
I have updated the post to reflect this

Yes, in isolation I see how that seems to clash with what Carl is saying. But that’s after I’ve granted the limited definition of TAI (x-risk or explosive, shared growth) from the former post. When you allow for scenarios with powerful AI where savings still matter, the picture changes (and I think that’s a more accurate description of the real world). I see that I could’ve been more clear that this post was a case of “even if blindly accepting the (somewhat unrealistic) assumptions of another post, their conclusions don’t follow”, and not an attempt at describing reality as accurately as possible

6
Jakob
1y
I have now updated the post to reflect this

I agree that the marginal value of money won't be literally zero after TAI (in the growth scenario; if we're all dead, then it is exactly equal to zero). But (if we still assume those two TAI scenarios are the only possible ones), on a per-dollar basis it will be much lower than today, which will massively skew the incentives for traders - in the face of uncertainty, they would need overwhelming evidence before making trades that pay off only after TAI. And importantly, if you disagree with this and believe the marginal utility of money won't change radica...

I don't think this. Where do you think I say that?

These are the scenarios defined in the former post. I just run with the assumptions of the argument they present, and show that their conclusion doesn't follow from those assumptions. That doesn't mean I think all the assumptions are accurate reflections of reality. The fact that TAI can play out in many ways, and investors may have very differing beliefs about what it means for their optimal saving rate today, is just another argument for why we shouldn't use interest rates as a measure of AI timelines, which is what I argue in this post.

1
[anonymous]
1y
The wording you used in the post was about "savvy" investors, but my naive understanding of markets is that savviness or not doesn't particularly matter here. If there are non-negligible portions of investors who believe in near-term TAI and also value future profits, doesn't that put a hole through the argument?

Carl, I agree with everything you're saying, so I'm a bit confused about why you think you disagree with this post.

This post is a response to the very specific case made in an earlier forum post, where they use a limited scenario to define transformative AI, and then argue that we should see interest rates rising if traders believe that scenario to be near.

I argue that we can't use interest rates to judge if said specific scenario is near or not. That doesn't mean there are no ways to bet on AI (in a broader sense). Yes, when tech firms are tradi...

It seems to me like you disagree with Carl because you write:

  • The reason for an investor to make a bet, is that they believe they will profit later
  • However, if they believe in near-term TAI, savvy investors won't value future profits (since they'll be dead or super rich anyways)
  • Therefore, there is no way for them to win by betting on near-term TAI

So you're saying that investors can't win from betting on near-term TAI. But Carl thinks they can win.

I think I'll try and type up my objections in a post rather than a comment - it seems to me that this post is so close to being right that it takes effort to pinpoint the exact place where I disagree, and so I want to take the time to formalize it a bit more.

 

But in short, I think it's possible to have 1) rational traders, 2) markets that largely function well, and 3) still no 5+ year advance signal of AGI in the markets, without making very weird assumptions. (note: I choose the 5+ year timeline because I think once you get really close to AGI, say, ...

I see that I wasn't being super clear above. Others in the comments have pointed to what I was trying to say here:

 - The window between when "enough" traders realize that AI is near and when it arrives may be very short, meaning that even in the best case you'll only increase your wealth for a very short time by making this bet

 - It is not clear how markets would respond if most traders started thinking that AI was near. They may focus on other opportunities that they believe are stronger than going short on interest rates (e.g., they may deci...

8
Joel Becker
1y
I don't think that you were being unclear above. The underlying reasoning still feels a little tortured to me. I mean, sure, it could be, but wouldn't it be weird to believe this confidently? The artists are storming parliament, the accountants are on the dole, foom just around the corner -- but a small number of traders have not yet clocked that an important change is coming? Traders are not dumb. At least, the small number of traders necessary to move the market are not dumb. They will understand the logic of this post. A mass ignoring of interest rates in favor of tech equity investing is not a stable equilibrium. In order to get the benefits of the best case of anything, you need to take on risk. You could make the same directional bet with less risk. If you weaken this statement to "exposure to a good chunk of the benefits of the implications of their beliefs, by taking on reasonable risk" then the interest rate conclusion still goes through.


Jakob
1y

While this is a very valuable post, I don't think the core argument quite holds, for the following reasons:

  1. Markets work well as information aggregation algorithms when it is possible to profit a lot from being the first to realize something (e.g., as portrayed in "The Big Short" about the Financial Crisis).
  2. In this case, there is no way for the first movers to profit big. Sure, you can take your capital out of the market and spend it before the world ends (or everyone becomes super-rich post-singularity), but that's not the same as making a billion bucks.
...

You can argue that one could take a short position on interest rates (e.g., in the form of a loan) if you believe that they will rise at some point, but that is a different bet from short timelines - what you're betting on then, is when the world will realize that timelines are short, since that's what it will take before many people choose to pull out of the market, and thus drive interest rates up. It is entirely possible to believe both that timelines are short, and that the world won't realize AI is near for a while yet, in which case you wouldn't do t

...
7
Jakob
1y
(a short additional note here: yes some of this is addressed more at length in the post, e.g., in section X re my point 3, but IMO the authors are somewhat too strongly stating their case in those sections. You do not need a Yudkowskian "foom" scenario to happen overnight for the following point to be plausible: "timelines may be short-ish, say ~10 years, but the world will not realize until quite soon before, say 1-3 years, and in the meantime it won't make sense to bet on interest rate movements for most people")

Agree with many of the considerations above - the bar should probably rise somewhat after such a funding shortfall. One way to solve it in practice could be to sit down in the room with the old FTX FF team and ask "which XX% of your grants are you most enthusiastic about and why", and then (at least as an initial hypothesis; possibly requiring some further vetting) plan to fund that. The generalized point I'm trying to make is twofold: 1) that quite a bit of judgement already went into assessing these projects and it should be possible to use that to decid...

Jakob
1y

Thank you for a good and swift response, and in particular, for stating so clearly that fraud cannot be justified on altruistic grounds.

I have only one quibble with the post: IMO you should probably increase your longtermist spending quite significantly over the next ~year or so, for the following reasons (which I'm sure you've already considered, but I'm stating them so others can also weigh in)

  • IIRC Open Philanthropy has historically argued that a lack of high-quality, shovel-ready projects has been limiting the growth in your longtermist portfolio. This
...

I want to push back on this a tiny bit. Just because some projects got funding from FTX, that doesn't necessarily mean Open Phil should fund them. There are a few reasons for this:

There will be projects that 1) have significant funding gaps, 2) have been vetted by people you trust for both their value alignment and competence, 3) are not only shovel-ready, but already started.

  1. When FTX Future Fund was functioning, there was lots more money available in the ecosystem, hence (I think) the bar for receiving a longtermist grant was lower. This money is now gone,
...

Thank you for your good work over the last months, and thank you for your commitment to integrity in these hard times. I'm sure this must also be hard for you on a personal level, so I hope you're able to find consolation in all the good that will be created from the projects you helped off the ground, and that you still find a home in the EA community. 

Hi Adam! Thanks for the detailed reply. From a brief look at your model, it seems you've understood my reasoning in this post correctly. I had indeed overlooked that their numbers were already discounted.

However, since they use a 3% discount rate and you use a 4% discount rate, you would still need to adjust for the difference. If we still assume that the economic impacts hit throughout your entire career, from 15 to 60 years into the future (note: 15 years into the future is not the average, but the initial year of impacts!), then you get to around $0.7 o...
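For what it's worth, the adjustment I'm describing can be sketched in a few lines (this assumes a constant annual impact stream over years 15-60, which is a simplification on my part):

```python
# Sketch: how much does moving from a 3% to a 4% annual discount rate
# shrink the present value of a constant stream of impacts in years 15-60?
# (Illustrative assumption: equal impact in each year of that window.)

def present_value(rate, start=15, end=60):
    """Present value of 1 unit of impact per year, years start..end."""
    return sum(1 / (1 + rate) ** t for t in range(start, end + 1))

ratio = present_value(0.04) / present_value(0.03)
print(round(ratio, 2))  # ~0.74
```

Under these assumptions, the switch from 3% to 4% scales the present value by roughly 0.74, which lines up with the ~$0.7 figure mentioned above.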

5
GiveWell
2y
Thanks for the flag, Jakob!

Thank you for this post, this is excellent work! Are you aware of ongoing efforts for any of your proposed topics? I'm asking because I'd consider starting a project on some of the above.

2
Rémi T
2y
Thank you for your comment :) It looks like Lennart Stern has been working on a project related to "international cooperation", "prize and market design" and "preparedness in developing countries". I don't know about anything else, but I haven't looked much.

I agree that it is a decision to be made on a project-by-project basis, but you can still have some prior about what’s roughly the right thing to do in aggregate, and use that prior to assess if you’re clearly missing the mark. This may feel like an artificial or useless exercise, but in general it is how high-level strategy decisions are made. Perhaps we’re just talking past each other because we are on different abstraction levels - you’re perhaps imagining a grant maker asking “how should I achieve this outcome” while I’m imagining “what’s the right s...

The flip side of this is that people with less existing “reputation stock” may see the potential status upside as the main compensation from a prize contest, and not the monetary benefit

I think the “get lots of input in a short time from a crowd with different semi-informed opinions” feature of prizes is hard to replace through other mechanisms. Some companies have built up extensive expert networks that they can call on-demand to do this, but that still doesn’t have quite the same agility. However, in those cases you may often want to compensate more than just the best entry (in line with the OP)

One interesting debate would be: what’s the optimal % of funding that should go to prizes? Which parameters would allow us to determine this? One can imagine that the % should be higher in communities that are struggling more to hire enough, or where research agendas are unclear so more coordination is needed, but should be lower in communities with people with low savings, or where the funders have capacity to diversify risks.

One additional consideration is that the coordination benefits from prizes (in raising the salience of memes or the status of the w...

0
JoshuaBlake
2y
Is % of funding the right framing? I think it should be evaluated on if a prize is the best mechanism for a particular desired outcome. So work out your outcome, then decide on prize or other alternative.

Thank you for writing this up - I’ve wanted to do the same for a while! I think the only thing I see missing is that prizes can raise the salience of some concept or nuance, and therefore serve as a coordination mechanism in more ways than you list (e.g., say that we want more assessments of long-term interventions using the framework from WWOTF of significance - durability - contingency, then a prize for those assessments would also help signal boost the framework)

2
JulianHazell
2y
+1 I also think another similar bonus is that prizes can sometimes get people to do EA things who otherwise wouldn’t have done EA things counterfactually. E.g., some prize on alignment work could plausibly be done by computer scientists who otherwise would be doing other things. This could signal boost EA/the cause area more generally, which is good.
5
Peter Wildeford
2y
Cool! I added that

Thank you Max, and good point! While we did try to use the state-of-the-art evidence in this piece I think I’ll defer to Will’s research team on that one - his take is probably closer to the current consensus among the relevant experts

Hi Toby, thanks for the good insight and also relevant links - and apologies for the extremely delayed response! I thought I had already responded to this.

Agree that such a map would be valuable, though 1) I'm not sure if the data is rich enough to create a general map that works across all policy areas (due to substantial confounding factors throughout history), and 2) there may also be conceptual challenges (e.g., the strength of each arrow may differ by policy domain). Still, I think this is an important crux for the value of policy work in smaller countries, so agree that developing a better understanding would be valuable!

You can see their rationale in their public model: https://docs.google.com/spreadsheets/d/1tytvmV_32H8XGGRJlUzRDTKTHrdevPIYmb_uc6aLeas/edit#gid=1362437801

 

It's the sum of 1.7% "improving circumstances over time", 0.9% "compounding non-monetary benefits" and 1.4% "temporal uncertainty" - 4.0% in total. They have 0.0% "pure time preference"

I think it is likely that increased attention will lead to increased funding, but the question is on what timescales, and by what magnitude. Relatively recent numbers showed that the clear majority of people, even among US college students, had not heard of EA, which means it's very unlikely that the potential funder pool is already saturated https://forum.effectivealtruism.org/posts/qQMLGqe4z95i6kJPE/how-many-people-have-heard-of-effective-altruism 

Good point! Indeed, the key funding sources for EA (tech billionaires) have notoriously volatile fortunes, though I'm not sure how tight the link is between their wealth in a given year, and the flow of money to EA.

 

Also, others seem to predict that the number of major funders will grow over the next years, which can increase both the average level of funding, and the stability https://forum.effectivealtruism.org/posts/Ze2Je5GCLBDj3nDzK/how-many-ea-billionaires-five-years-from-now 

Have you spoken to Jona Glade about it? He’s also working on setting up a consultancy. I’m also happy to chat about this.

2
Weaver
2y
I have not. If you could connect the two of us, I would appreciate it. I'll message you this week when I get a chance to talk shop and I hope it will be a productive discussion.

Would this be another organization like Rethink Priorities, or is it different from what they are doing? (Note: I don't think this space is crowded yet, so even if it is another organization doing the same things, it could still be very helpful!)

See one version of this here: https://forum.effectivealtruism.org/posts/KigFfo4TN7jZTcqNH/the-future-fund-s-project-ideas-competition?commentId=rTHGFbfr8DXwqnA2B

One potential niche could be betting markets around outcomes of political events (e.g., betting on outcome metrics such as GDP growth, expected lifespan, GINI coefficient, or carbon emissions; linked to events such as a national election, new regulatory proposals, or the passing of government budgets). Depending on legal restrictions, this market could even ask policy makers or political parties to place bets in these markets, to help the public assess which policy makers have the best epistemics, to hold policy makers accountable, and to incentivize polic...

A potential complementary strategy to this one could be research into putting out large-scale wildfires (though I'm not sure about the feasibility of this - is anyone aware of existing research on it?)

Thanks Jona, agree! Also, many EA orgs seem to be experiencing growth pains at the moment, I think the case for helping them scale (in ops/mgt roles) is stronger than ever. Some consulting firms also allow their employees to do temporary (paid or unpaid) secondments with selected non-profits, which could be one way of exploring if this path is a fit.

Thanks! I think we meant to refer to https://total-portfolio.org/

Perhaps also some of Ellen Quigley's work on Universal Ownership https://www.cser.ac.uk/team/ellen-quigley/ 

Hi Ryan, thanks for your comment!

1) "The title should clarify that it's "national scale" rather than scale generally that's overrated."
We did not use “national scale” because we cover policy making on the national, subnational, and multinational scales. However, we agree that "scale" is very useful as a parameter in cause prioritization frameworks. You're right that our claim is narrower - only that scale is overrated in this specific setting.


2) "US and China are probably more likely to copy their own respective states & provinces than copy the Nordics...

Hi Peter,

Thanks for the link - I was not aware of this but have added my name to it. 

To your question, I don't know if it would be helpful. I haven't tried to do consulting for EA orgs yet, and I know that some who have tried to do this have found it hard because of lack of demand. To the first point in your comment: Maybe a document like this and a forum post could unlock some demand, but I'm not sure. The best way to learn would be to simply test it! 

1
PeterSlattery
3y
Thanks. I agree! On the point of not knowing about the link, I'll mention that I think it is all too often the case that useful EA resources remain relatively unknown. I occasionally find out about a resource after it would have been very helpful for me. Even when I know resources are out there, I often can't remember where I found them. With that in mind, I think we could do better/more awareness raising for good resources. I think that EA forum posts are good for that because the forum is well indexed and easily searchable. Posts can also be found via a google search. Hence the suggestion for posting about it in the forum. I'd also recommend mentioning it wherever it is relevant (e.g., as a comment in any new posts by consultants new to EA).

+1 to all Jona writes here - with the caveat that consulting firms like McKinsey or BCG can also help you scope the project and prioritize what’s most important to work on. This of course requires some level of trust (like in all professional services where the client may not know their exact needs), which strengthens the case for using EA consultants at least for pilot projects until norms around using consultants are well-established.

Posting as an individual who is a consultant, not on behalf of my employer

Hi, I’m one of the co-organizers of EACN, running the McKinsey EA community and currently co-authoring a forum post about having an impact as a management consultant (to add some nuance and insider perspectives to what 80k is writing on the topic: https://80000hours.org/articles/alternatives-to-consulting/).

First let me voice a +1 to everything Jeremy has said here already - with the possible exception that I know several McKinsey partners are interfacing with the EA movement on part...

Jona
3y

Love the idea of having a call and a pilot project (if this is what is most useful). We might even explore the options for pro bono work in the EACN as I know that some partners in BCG are looking for strong partnerships in their regions. I imagine that might also be the case for McKinsey, Accenture, Bain, ... .

I also agree that almost all consultancies already do EA-aligned work (not to the extent we would like them to, of course) and have expertise in many relevant fields. E.g., my last project was to do an impact assessment (incl. counterfactual impact ...