MichaelDickens

The psychology of population ethics

Most disagreements between professional philosophers on population ethics come down to disagreements about intuition:

  • Alice supports the total view because she has an intuition that the Repugnant Conclusion is not actually repugnant
  • Bob adopts a person-affecting view and rejects the independence of irrelevant alternatives (IIA) because his intuition is that IIA doesn't matter
  • Carol rejects transitivity of preferences because her intuition is that that's the least important premise

But none of them ultimately have any justification beyond their intuition. So I think it's totally fair and relevant to survey non-philosophers' intuitions.

Mission Hedgers Want to Hedge Quantity, Not Price

Also, climate change would be more related to cumulative oil production, rather than annual.

True. I tested the correlation between S&P 500 price and cumulative oil production and got r=0.81 (p < 1e-29).
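For anyone who wants to reproduce this kind of test, it's just a Pearson correlation on two aligned annual series. A minimal sketch (the file names and column labels are placeholders, not my actual data sources):

import pandas as pd
from scipy import stats

# Annual S&P 500 price level and annual world oil production, indexed by year.
# (Placeholder file names and column labels.)
sp500 = pd.read_csv("sp500_annual.csv", index_col="year")["price"]
oil = pd.read_csv("oil_production_annual.csv", index_col="year")["barrels"]

# Cumulative production up to and including each year.
cumulative_oil = oil.cumsum()

# Keep only the years both series cover.
sp500, cumulative_oil = sp500.align(cumulative_oil, join="inner")

r, p = stats.pearsonr(sp500, cumulative_oil)
print(f"r = {r:.2f}, p = {p:.0e}")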

Investing in companies with large food storage would be a particularly good hedge against abrupt food catastrophes.

That's a neat idea. It behaves more like insurance—most of the time it doesn't do much, but when it matters, it will give you a lot of money.

How Do AI Timelines Affect Giving Now vs. Later?

The reason I made the model only have one thing to spend on pre-AGI is not because it's realistic (which it isn't), but because it makes the model more tractable. I was primarily interested in answering a simple question: do AI timelines affect giving now vs. later?

How Do AI Timelines Affect Giving Now vs. Later?

I don't have any well-formed opinions about what the post-AGI world will look like, so I don't think it's obvious that logarithmic utility of capital is more appropriate than simply trying to maximize the probability of a good outcome. The way you describe it is how my model worked originally, but I changed it because I believe the new model gives a stronger result even if the model is not necessarily more accurate. I wrote in a paragraph buried in Appendix B:

In an earlier draft of this essay, my model did not assign value to any capital left over after AGI emerges. It simply tried to minimize the probability of extinction. This older model came to the same basic conclusion—namely, shorter timelines mean we should spend faster. (The difference was that it spent a much larger percentage of the budget each decade, and under some conditions it would spend 100% of the budget at a certain point.[5]) But I was concerned that the older model trivialized the question by assuming we could not spend our money on anything but AI safety research—obviously if that's the only thing we can spend money on, then we should spend lots of money on it. The new model allows for spending money on other things but still reaches the same qualitative conclusion, which is a stronger result.
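To make the contrast concrete, the two objectives look schematically like this (the functional forms and numbers below are illustrative stand-ins, not the actual model from the post):

import math

def p_good_outcome(safety_spending):
    # Stand-in mapping from cumulative safety spending to P(good outcome);
    # the real model's mapping is different.
    return 1 - math.exp(-0.1 * safety_spending)

def old_objective(safety_spending):
    # Earlier draft: just maximize P(good outcome), i.e., minimize extinction risk.
    return p_good_outcome(safety_spending)

def new_objective(safety_spending, leftover_capital):
    # Current model (schematically): capital left over after AGI retains value,
    # so spending everything on safety is no longer trivially optimal.
    return p_good_outcome(safety_spending) * math.log(1 + leftover_capital)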

What is the role of public discussion for hits-based Open Philanthropy causes?

It seems to me that the problem isn't just with Open Phil-funded speculative orgs, but with all speculative orgs.

To give some more specific examples, it's unclear to me how someone outside of Open Philanthropy could go about advocating for the importance of an organization like New Science or Qualia Research Institute.

I think it's just as unclear how someone inside Open Phil could advocate for those. Open Phil might have access to some private information, but that won't help much with something like estimating the EV of a highly speculative nonprofit.

Is effective altruism growing? An update on the stock of funding vs. people

Some evidence in this direction: Eliezer Yudkowsky recently wrote on a Facebook post:

This is your regular reminder that, if I believe there is any hope whatsoever in your work for AGI alignment, I think I can make sure you get funded.

This implies that all the really good funding opportunities Eliezer is aware of have already been funded, and any that appear can get funded quickly. Eliezer is not Nick Bostrom, but they're in similar positions.

(Note: Eliezer's Facebook post is publicly viewable, so I think reposting this quote here is ok from a privacy standpoint.)

How Do AI Timelines Affect Giving Now vs. Later?

I think we are falling for the double illusion of transparency: I misunderstood you, and the thing I thought you were saying was even further off than what you thought I thought you were saying. I wasn't even thinking about capacity-building labor as analogous to investment. But now I think I see what you're saying, and the question of laboring on capacity vs. direct value does seem analogous to spending vs. investing money.

At a high level, you can probably model labor in the same way as I describe in OP: you spend some amount of labor on direct research, and the rest on capacity-building efforts that increase the capacity for doing labor in the future. So you can take the model as is and just change some numbers.

Example: If you take the model in OP and assume we currently have an expected (median) 1% of required labor capacity, a rate of return on capacity-building of 20%, and a median AGI date of 2050, then the model recommends exclusively capacity-building until 2050, then spending about 30% of each decade's labor on direct research.

One complication is that this super-easy model treats labor as something that only exists in the present. But in reality, if you have one laborer, that person can work now and can also continue working for some number of decades. The super-easy model assumes that any labor spent on research immediately disappears, when it would be more accurate to say that research labor earns a 0% return (or let's say a -3% return, to account for people retiring or quitting) while capacity-building labor earns a 20% return (or whatever the number is).

This complication is kind of hard to wrap my head around, but I think I can model it with a small change to my program, changing the line in run_agi_spending that reads

capital *= (1 - spending_schedule[y]) * (1 + self.investment_return)**10

to

research_return = -0.03
capital *= spending_schedule[y] * (1 + research_return)**10 + (1 - spending_schedule[y]) * (1 + self.investment_return)**10

In that case, the model recommends spending 100% on capacity-building for the next three decades, then about 30% per decade on research from 2050 through 2080, and then spending almost entirely on capacity-building for the rest of time.

But I'm not sure I'm modeling this concept correctly.
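For context, here's roughly how that modified update sits inside a decade-by-decade loop. This is a simplified standalone sketch, not the actual run_agi_spending code, and the numbers are illustrative:

investment_return = 0.20  # annual return on capacity-building labor
research_return = -0.03   # annual "return" on research labor (attrition/retirement)
capacity = 0.01           # current labor capacity as a fraction of what's required

# Fraction of each decade's labor spent on direct research (made-up schedule).
spending_schedule = [0.0, 0.0, 0.0, 0.3, 0.3, 0.3]

for fraction_on_research in spending_schedule:
    # Research labor depreciates slowly instead of vanishing, while
    # capacity-building labor compounds.
    capacity *= (
        fraction_on_research * (1 + research_return)**10
        + (1 - fraction_on_research) * (1 + investment_return)**10
    )
    print(f"capacity after this decade: {capacity:.3f}")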

How Do AI Timelines Affect Giving Now vs. Later?

That's an interesting question, and I agree with your reasoning on why it's important. My off-the-cuff thoughts:

Labor tradeoffs don't work in the same way as capital tradeoffs because there's no temporal element. With capital, you can spend it now or later, and if you spend later, you get to spend more of it. But there's no way to save up labor to be used later, except in the sense that you can convert labor into capital and then back into labor (although these conversions might not be efficient, e.g., if you can't find enough talented people to do the work you want). So the tradeoff with labor is that you have to choose what to prioritize. This question is more about traditional cause prioritization than about giving now vs. later. This is something EAs have already written a lot about, and it's probably worth more attention overall than the question of giving (money) now vs. later, but I believe the latter question is more neglected and has more low-hanging fruit.

The question of optimal giving rate might be irrelevant if, say, we're confident that the optimal rate is somewhere above 1%, we don't know where, but it's impossible to spend more than 1% due to a lack of funding opportunities. But I don't think we can be that confident that the optimal spending rate is that high. And even if we are, knowing the optimal rate still matters if you expect that we can scale up work capacity in the future.

I'd guess >50% chance that the optimal spending rate is faster than the longtermist community[1] is currently spending, but I also expect the longtermist spending rate to increase a lot in the future due to increasing work capacity plus capital becoming more liquid—according to Ben Todd's estimate, about half of EA capital is currently too illiquid to spend.

[1] I'm talking about longtermism specifically and not all EA because the optimal spending rate for neartermist causes could be pretty different.

A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good

As of yesterday, my position on mission hedging was that it was probably crowded out by other investments with better characteristics[1], and therefore not worth doing. But I didn't have any good justification for this; it was just my intuition. After messing around with the spreadsheet in the parent comment, I am inclined to believe that the optimal altruistic portfolio contains at least a little bit of mission hedging.

Some credences off the top of my head:

  • 70% chance that the optimal portfolio contains some mission hedging
  • 50% chance that the optimal portfolio allocates at least 10% to mission hedging
  • 20% chance that the optimal portfolio allocates 100% to mission hedging

[1] See here for more on what investments I think have good characteristics. More precisely, my intuition was that the global market portfolio (GMP) + mission hedging was probably a better investment than pure GMP, but a more sophisticated portfolio that included GMP plus long/short value and momentum had good enough expected return/risk to outweigh the benefits of mission hedging.

EDIT: I should add that I think it's less likely that AI mission hedging is worth it on the margin, given that (at least in my anecdotal experience) EAs already tend to overweight AI-related companies. But the overweight is mostly incidental—my impression is EAs tend to overweight tech companies in general, not just AI companies. So a strategic mission hedger might want to focus on companies that are likely to benefit from AI, but that don't look like traditional tech companies. As a basic example, I'd probably favor Nvidia over Google or Tesla. Nvidia is still a tech company so maybe it's not an ideal example, but it's not as popular as Google/Tesla.

A generalized strategy of ‘mission hedging’: investing in 'evil' to do more good

As an extension to this model, I wrote a solver that finds the optimal allocation between the AI portfolio and the global market portfolio. I don't think Google Sheets has a solver, so I wrote it in LibreOffice. Link to download

I don't know if the spreadsheet will work in Excel, but if you don't have LibreOffice, it's free to download. I don't see any way to save the solver parameters that I set, so you have to re-create the solver manually. Here's how to do it in LibreOffice:

  1. Go to "Tools" -> "Solver..."
  2. Click "Options" and change Solver Engine to "LibreOffice Swarm Non-Linear Solver"
  3. Set "Target cell" to D32 (the green-colored cell)
  4. Set "By changing cells" to E7 (the blue-colored cell)
  5. Set two limiting conditions: E7 >= 0 and E7 <= 1
  6. Click "Solve"

Given the parameters I set, the optimal allocation is 91.8% to the global market portfolio and 8.2% to the AI portfolio. The parameters were fairly arbitrary, and it's easy to get allocations higher or lower than this.
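If you'd rather not fight with spreadsheet solvers, the same one-variable optimization takes a few lines of Python. The objective below is a made-up placeholder, not the formula in cell D32, so it won't reproduce the 8.2% figure; the point is just the bounded search over the allocation:

import math
from scipy.optimize import minimize_scalar

# Placeholder scenarios: (probability, AI portfolio return, GMP return, importance of money).
# These numbers are invented for illustration only.
scenarios = [
    (0.3, 3.0, 0.5, 3.0),   # AI boom: AI stocks soar, money matters more
    (0.7, -0.8, 0.5, 1.0),  # no boom: AI stocks crash, ordinary importance
]

def expected_utility(ai_allocation):
    total = 0.0
    for prob, ai_ret, gmp_ret, importance in scenarios:
        wealth = ai_allocation * (1 + ai_ret) + (1 - ai_allocation) * (1 + gmp_ret)
        total += prob * importance * math.log(wealth)
    return total

# Maximize utility by minimizing its negative over allocations in [0, 1].
result = minimize_scalar(lambda w: -expected_utility(w), bounds=(0, 1), method="bounded")
print(f"optimal AI allocation: {result.x:.1%}")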
