jackmalde

I am working as an economist at the Confederation of British Industry (CBI) and previously worked in management consulting.

I am interested in longtermism, global priorities research, and animal welfare.

I’m happy to have a chat sometime: https://calendly.com/jack-malde

Feel free to connect with me on LinkedIn: https://www.linkedin.com/in/jack-malde/

Comments

What would you do if you had half a million dollars?

Thanks, I understand all that. I was confused when Khorton said:

I meant increasing the number of grantmakers who have spent significant time thinking about where to donate significant capital

I wouldn't say the lottery increases the number of grantmakers who have spent significant time thinking; I think it in fact reduces it.

I do, however, agree with you when you say:

The overall amount of time spent is actually less than before, but the depth is far greater, and with dramatically less redundancy.

What would you do if you had half a million dollars?

I think perhaps we agree then - if after significant research, you realize you can't beat an EA Fund, that seems like a reasonable fallback, but that should not be plan A.

Yeah that sounds about right to me.

I meant increasing the number of grantmakers who have spent significant time thinking about where to donate significant capital

I still don't understand this. The lottery means one or a small number of grantmakers get all the money to allocate. People who don't win don't need to think about where to donate. So really it seems to me that the lottery reduces the number of grantmakers, and indeed the number who spend time thinking about where to donate.

What would you do if you had half a million dollars?

I'm not sure I understand how the lottery increases the diversity of funding sources / increases the number of grantmakers if one or a small number of people end up winning the lottery. Wouldn't it actually reduce diversity / number of grantmakers? I might be missing something quite obvious here...

Reading this, it seems the justification for lotteries is that they not only save research time for the EA community as a whole, but also improve the allocation of the money in expectation. Basically, if you don't win you don't have to bother doing any research (so that time is saved for lots of people), and if you do win you have a strong incentive to do lots of research because you're giving away quite a lot of money (so the money should be given away with a great deal of careful thought behind it).

Of course, if everyone in the EA community just gave to an EA Fund and knew that they would do so if they won the lottery, that would render both of these benefits redundant. This shouldn't be the case, however, as A) not everyone gives to EA Funds - some people really do research where they give - and B) people playing donor lotteries shouldn't be certain of where they would give the money if they won; the idea is that they would have to do the research. That said, I see no reason why this research shouldn't lead to giving to an EA Fund.

What would you do if you had half a million dollars?

Although whether increasing the population is a good thing depends on whether you are an average utilitarian or a total utilitarian.

Yes, that is true. For what it's worth, most people who have looked into population ethics at all reject average utilitarianism, as it has some extremely unintuitive implications such as the "sadistic conclusion", whereby one can make things better by bringing into existence people with terrible lives, as long as doing so still raises the average wellbeing level, i.e. if existing people have even worse lives.

What would you do if you had half a million dollars?

I got the impression that their new, general-purpose pool would still be fairly longtermist, but it's possible they will have to make sacrifices.

To clarify, it's not that I think they wouldn't be "longtermist"; it's more that I think they may have to give to longtermist options that "seem intuitively good to a non-EA", e.g. giving to an established organisation like MIRI or CHAI, rather than to longtermist options that may be better on the margin but seem a bit weirder at first glance, like "buying out some clever person so they have more time to do some research".

That pretty much gets to the heart of my suspected difference between Longview and LTFF: I think LTFF funds a lot of individuals who may struggle to get funding elsewhere, whereas Longview tends to fund organisations that may struggle a lot less. I do see on their website that they funded Paul Slovic, but he seems to be a distinguished academic who may have been able to get funding elsewhere.

What would you do if you had half a million dollars?

Yeah, you probably should - unless perhaps you think there are scale effects to giving, which make you want to punt on being able to give far more.

Worth noting, of course, that Patrick didn't know he was going to give to a capital allocator when he entered the lottery, and still doesn't. Ideally all donor lottery winners would examine the LTFF very carefully and honestly consider whether they think they can do better. People may be able to beat the LTFF, but if someone isn't giving to it I would expect a clear justification of why they think they can beat it.

What would you do if you had half a million dollars?

Would you mind linking some posts or articles assessing the expected value of the long-term future?

You're right to question this as it is an important consideration. The Global Priorities Institute has highlighted "The value of the future of humanity" in their research agenda (pages 10-13). Have a look at the "existing informal discussion" on pages 12 and 13, some of which argues that the expected value of the future is positive.

Sure, it's possible that some form of eugenics or genetic engineering could be implemented to raise the average hedonic set-point

I think you misunderstood what I was trying to say. I was saying that even if we reach the limits of individual happiness, we can just create more and more humans to increase total happiness.

What would you do if you had half a million dollars?

I'm not really sure what to think about digital sentience. We could in theory create astronomical levels of happiness, astronomical levels of suffering, or both. Digital sentience could easily dominate all other forms of sentience so it's certainly an important consideration.

It seems unlikely to me that we would go extinct, even conditional on "us" deciding it would be best.

This is a fair point to be honest!

What would you do if you had half a million dollars?

In general, it kind of seems like the "point" of the lottery is to do something other than allocate to a capital allocator.

If you enter a donor lottery, your expected donation amount is the same as if you hadn't entered. If you win, though, it becomes worth spending far more time thinking carefully about where to allocate the money than it would have been otherwise, as you're giving away a much larger amount. Because extra time spent thinking is more likely to lead to better (rather than worse) decisions, this leads to more expected impact overall, even though your expected donation size stays the same. More on all of this here.
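To illustrate the arithmetic (a rough sketch with symbols and numbers of my own choosing, not figures from the discussion, and assuming your chance of winning is proportional to your contribution): if you contribute an amount d to a lottery pot of total size P, you win with probability d/P and then allocate the whole pot, so your expected allocation is unchanged:

```latex
\mathbb{E}[\text{amount allocated}]
  = \underbrace{\frac{d}{P}}_{\text{win probability}} \times P
  = d
```

For example, a $5,000 contribution to a $500,000 pot gives a 1% chance of directing the full $500,000, i.e. an expected allocation of $5,000 - the same as donating directly, but with far more research behind the donation in the case where you win.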

So the point of the lottery really is just to think very carefully about where to give if you win, allowing you to have more expected impact than if you hadn't entered. It seems quite possible (and in my opinion highly likely) that such careful thinking would lead one to give to a capital allocator, as they have a great deal of expertise.

What would you do if you had half a million dollars?

There is still the possibility that the Pinkerites are wrong though, and quality of life is not improving.

Sure, and there could be more suffering than happiness in the future, but people go with their best guess about what is more likely, and I think most in the EA community expect a future with more happiness than suffering.

happiness levels in general should be roughly stable in the long run regardless of life circumstances.

Maybe, but if we can't make people happier we can always just make more happy people. This would be highly desirable if you hold a total view of population ethics.

Regarding averting extinction and option value, deciding to go extinct is far easier said than done.

This is a fair point. What I would say, though, is that extinction risk is only a very small subset of existential risk, so desiring extinction doesn't necessarily mean you shouldn't want to reduce most forms of existential risk.
