peterhartree

Working (6-15 years of experience)

Bio

Now: Independent study; Radio Bostrom; Parfit Archive.

New: ✨ Comment Helper for Google Docs. ✨

Previously: 80,000 Hours (2014-15; 2017-2021). Worked on web development, product management, strategy, internal systems, IT security, etc. Read my CV.

Also: Inbox When Ready; The Valmy.


As Byrne points out, and some notable examples testify, some people manage to:

  1. "Go to the monastery" to explore ideas as a hardcore believer.

  2. After a while, "return to the world", and successfully thread the needle between innovation, moderation, and crazy town.

This is not an easy path. Many get stuck in the monastery, failing gracefully (i.e. harmlessly wasting their lives). Some return to the world, and achieve little. Others return to the world, accumulate great power, and then cause serious harm.

Concern about this sort of thing, presumably, is a major motivation for the esotericism of figures like Tyler Cowen, Peter Thiel, Plato, and most of the other Straussian thinkers.

One thing this reminds me of is a segment of Holden Karnofsky's interview with Ezra Klein.


HOLDEN KARNOFSKY: At Open Philanthropy, we like to consider very hard-core theoretical arguments, try to pull the insight from them, and then do our compromising after that.

And so, there is a case to be made that if you’re trying to do something to help people and you’re choosing between different things you might spend money on to help people, you need to be able to give a consistent conversion ratio between any two things.

So let’s say you might spend money distributing bed nets to fight malaria. You might spend money [on deworming, i.e.] getting children treated for intestinal parasites. And you might think that the bed nets are twice as valuable as the dewormings. Or you might think they’re five times as valuable or half as valuable or ⅕ or 100 times as valuable or 1/100. But there has to be some consistent number for valuing the two.

And there is an argument that if you’re not doing it that way, it’s kind of a tell that you’re being a feel-good donor, that you’re making yourself feel good by doing a little bit of everything, instead of focusing your giving on being other-centered, on the impact of your actions on others, [where in theory it seems] that you should have these consistent ratios.

So with that backdrop in mind, we’re sitting here trying to spend money to do as much good as possible. And someone will come to us with an argument that says, hey, there are so many animals being horribly mistreated on factory farms and you can help them so cheaply that even if you value animals at 1 percent as valuable as humans to help, that implies you should put all your money into helping animals.

On the other hand, if you value [animals] less than that, let’s say you value them a millionth as much, you should put none of your money into helping animals and just completely ignore what’s going on on factory farms, even though a small amount of your budget could be transformative.

So that’s a weird state to be in. And then, there’s an argument that goes […] if you can do things that can help all of the future generations, for example, by reducing the odds that humanity goes extinct, then you’re helping even more people. And that could be some ridiculous cosmic number, like a trillion, trillion, trillion, trillion, trillion lives or something like that. And it leaves you in this really weird conundrum, where you’re kind of choosing between being all in on one thing and all in on another thing.

And Open Philanthropy just doesn’t want to be the kind of organization that does that, that lands there. And so we divide our giving into different buckets. And each bucket will kind of take a different worldview or will act on a different ethical framework. So there is a bucket of money that is kind of deliberately acting as though it takes the farm animal point really seriously, as though it believes what a lot of animal advocates believe, which is that we’ll look back someday and say, this was a huge moral error. We should have cared much more about animals than we do. Suffering is suffering. And this whole way we treat this enormous amount of animals on factory farms is an enormously bigger deal than anyone today is acting like it is. And then there’ll be another bucket of money that says: “Animals? That’s not what we’re doing. We’re trying to help humans.”

And so you have these two buckets of money that have different philosophies and are following them down different paths. And that just stops us from being the kind of organization that is stuck with one framework, stuck with one kind of activity.

[…]

If you start to try to put numbers side by side, you do get to this point where you say, yeah, if you value a chicken 1 percent as much as a human, you really are doing a lot more good by funding these corporate campaigns than even by funding the [anti-malarial] bed nets. And [bed nets are] better than most things you can do to help humans. Well, then, the question is, OK, but do I value chickens 1 percent as much as humans? 0.1 percent? 0.01 percent? How do you know that?

And one answer is we don’t. We have absolutely no idea. The entire question of what is it that we’re going to think 100,000 years from now about how we should have been treating chickens in this time, that’s just a hard thing to know. I sometimes call this the problem of applied ethics, where I’m sitting here, trying to decide how to spend money or how to spend scarce resources. And if I follow the moral norms of my time, based on history, it looks like a really good chance that future people will look back on me as a moral monster.

But one way of thinking about it is just to say, well, if we have no idea, maybe there’s a decent chance that we’ll actually decide we had this all wrong, and we should care about chickens just as much as humans. Or maybe we should care about them more because humans have more psychological defense mechanisms for dealing with pain. We may have slower internal clocks. A minute to us might feel like several minutes to a chicken.

So if you have no idea where things are going, then you may want to account for that uncertainty, and you may want to hedge your bets and say, if we have a chance to help absurd numbers of chickens, maybe we will look back and say, actually, that was an incredibly important thing to be doing.

EZRA KLEIN: […] So I’m vegan. Except for some lab-grown chicken meat, I’ve not eaten chicken in 10, 15 years now — quite a long time. And yet, even I sit here, when you’re saying, should we value a chicken 1 percent as much as a human, I’m like: “ooh, I don’t like that”.

To your point about what our ethical frameworks of the time do and that possibly an Open Philanthropy comparative advantage is being willing to consider things that we are taught even to feel a little bit repulsive considering—how do you think about those moments? How do you think about the backlash that can come? How do you think about when maybe the mores of a time have something to tell you within them, that maybe you shouldn’t be worrying about chicken when there are this many people starving across the world? How do you think about that set of questions?

HOLDEN KARNOFSKY: I think it’s a tough balancing act because on one hand, I believe there are approaches to ethics that do have a decent chance of getting you a more principled answer that’s more likely to hold up a long time from now. But at the same time, I agree with you that even though following the norms of your time is certainly not a safe thing to do and has led to a lot of horrible things in the past, I’m definitely nervous to do things that are too out of line with what the rest of the world is doing and thinking.

And so we compromise. And that comes back to the idea of worldview diversification. So I think if Open Philanthropy were to declare, here’s the value on chickens versus humans, and therefore, all the money is going to farm animal welfare, I would not like that. That would make me uncomfortable. And we haven’t done that. And on the other hand, let’s say you can spend 10 percent of your budget and be the largest funder of farm animal welfare in the world and be completely transformative.

And in that world where we look back, that potential hypothetical future world where we look back and said, gosh, we had this all wrong — we should have really cared about chickens — you were the biggest funder, are you going to leave that opportunity on the table? And that’s where worldview diversification comes in, where it says, we should take opportunities to do enormous amounts of good, according to a plausible ethical framework. And that’s not the same thing as being a fanatic and saying, I figured it all out. I’ve done the math. I know what’s up. Because that’s not something I think.

[…]

There can be this vibe coming out of when you read stuff in the effective altruist circles that kind of feels like […] it’s trying to be as weird as possible. It’s being completely hard-core, uncompromising, wanting to use one consistent ethical framework wherever the heck it takes you. That’s not really something I believe in. It’s not something that Open Philanthropy or most of the people that I interact with as effective altruists tend to believe in.

And so, what I believe in doing and what I like to do is to really deeply understand theoretical frameworks that can offer insight, that can open my mind, that I think give me the best shot I’m ever going to have at being ahead of the curve on ethics, at being someone whose decisions look good in hindsight instead of just following the norms of my time, which might look horrible and monstrous in hindsight. But I have limits to everything. Most of the people I know have limits to everything, and I do think that is how effective altruists usually behave in practice and certainly how I think they should.

[…]

I also just want to endorse the meta principle of just saying, it’s OK to have a limit. It’s OK to stop. It’s a reflective equilibrium game. So what I try to do is I try to entertain these rigorous philosophical frameworks. And sometimes it leads to me really changing my mind about something by really reflecting on, hey, if I did have to have a number on caring about animals versus caring about humans, what would it be?

And just thinking about that, I’ve just kind of come around to thinking, I don’t know what the number is, but I know that the way animals are treated on factory farms is just inexcusable. And it’s just brought my attention to that. So I land on a lot of things that I end up being glad I thought about. And I think it helps widen my thinking, open my mind, make me more able to have unconventional thoughts. But it’s also OK to just draw a line […] and say, that’s too much. I’m not convinced. I’m not going there. And that’s something I do every day.
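The all-or-nothing flip Karnofsky describes is, at bottom, arithmetic on a single conversion ratio. A minimal sketch, with made-up numbers (none of these are Open Philanthropy's actual estimates):

```python
# Hypothetical cost-effectiveness figures, purely illustrative.
HUMANS_HELPED_PER_DOLLAR = 1.0       # e.g. bed nets (normalised units)
CHICKENS_HELPED_PER_DOLLAR = 1000.0  # e.g. corporate campaigns (normalised units)

def best_option(chicken_weight: float) -> str:
    """Which bucket a single-framework optimiser funds, given how much
    a chicken counts relative to a human (chicken_weight in [0, 1])."""
    animal_value_per_dollar = CHICKENS_HELPED_PER_DOLLAR * chicken_weight
    return "animals" if animal_value_per_dollar > HUMANS_HELPED_PER_DOLLAR else "humans"

for weight in (0.01, 0.0001, 0.000001):
    print(weight, "->", best_option(weight))
```

At a 1 percent weight, every dollar goes to animals; at one-in-a-million, none does. The entire allocation pivots on a parameter nobody claims to know, which is the pressure worldview diversification is designed to relieve.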

Thank you for writing this.

I have thought about writing a critical post making broadly similar arguments, but with a greater focus on how the FTX disaster played out in November.

I don't plan to do this right now. At least some of the people who are working on this have a reasonable read on my views, and there are other things I want to focus on for now.

Again—thanks for writing this. I will follow the discussion with interest—and so will many journalists!

Very helpful, thanks Tyler!

If I understand you correctly, you'd agree that carefully saying true things, with extra attention to clarity (to avoid easy misreadings), would go a long way to reducing risk of (1).

+1 to looking forward to print copies.

My ideal format would be PDF eBook of all the articles. Willing to pay.

Tyler: thanks for the post.

A couple of questions I'd love to hear your thoughts on:

  1. Would it be right to say that >90% of the legal risk to public figures and institutions was incurred prior to the FTX blow-up on November 8?

  2. Do you expect that >30% of prominent EA figures will end up involved in the litigation process, regardless of what they post after November 8?

  3. Do you expect that Effective Ventures (formerly: CEA) will end up involved in the litigation process, regardless of what they post after November 8?

EDIT (17:15 GMT): I think I've fixed all of these.

Thanks for flagging. Yes—it's an encoding issue.

I was unable to quickly fix this when I copy-pasted from my PDF copy of this article.

I have to work on some other stuff now, but I will get this fixed sometime today.

Hopefully it's legible in the meantime—I'm sorry for any trouble due to this.

Would he take the 51:49 bet repeatedly, as "maximise EV" might suggest? Why / why not?

(I skimmed some of his series on EV but want to reread.)

https://joecarlsmith.com/2022/03/18/on-expected-utility-part-2-why-it-can-be-ok-to-predictably-lose
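The puzzle behind the 51:49 question can be made concrete. A minimal sketch, using my own framing and numbers (not Carlsmith's): if you stake your whole bankroll on a 51:49 double-or-nothing gamble every round, expected value compounds upward, yet ruin becomes nearly certain.

```python
# Repeated 51:49 double-or-nothing bets, staking everything each round.
# Each round multiplies EV by 2 * 0.51 = 1.02, but survival requires
# winning every single round.
p_win = 0.51
rounds = 20

ev_multiplier = (2 * p_win) ** rounds   # expected bankroll growth factor
p_survive = p_win ** rounds             # probability of never going bust

print(f"EV multiplier after {rounds} rounds: {ev_multiplier:.3f}")
print(f"P(not ruined):                       {p_survive:.2e}")
```

Naive EV-maximisation says take the bet every time, even as the probability of ending with anything at all shrinks toward zero; that tension is what the linked series addresses.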

If there's significant risk of this blowback, EA leadership better develop a pro-active plan for dealing with the PR crisis -- and quick.

I think it would be prudent for EA leadership to treat this FTX crisis as a potentially serious PR crisis for EA -- and not just a massive financial crisis for EA funding.

I've been talking to key people a fair bit since yesterday, broadly pushing the line and level of concern that you suggest. My current take is that the "pro-active plan" work is happening quickly and with appropriate investment.
