Mati_Roy

Mati_Roy's Comments

Why do we need philanthropy? Can we make it obsolete?

Thanks for your comment. It makes me realize I failed to properly communicate some of my ideas. Hopefully this comment can elucidate them.

Better democracy won't help much with EA causes if people generally don't care about them

More democracy could even make things worse (see 10% Less Democracy). But a much better democracy wouldn't make things worse, because it would do things like:

  • Disentangling values from expertise (ex.: predicting which global catastrophes are most likely shouldn't be done democratically, but rather with expert systems such as prediction markets)
  • Representing the unrepresented (ex.: having a group representing the interest of non-human animals during elections)

we choose EA causes in part based on their neglectedness

I was claiming that with the best system, all causes would be equally (not) neglected. Although, as I conceded in the previous comment, this wouldn't be entirely true, because people have different fundamental values.

Causes have to be made salient to people, and that's a role for advocacy to play,

I think most causes wouldn't have to be made salient to people if we had a great System. You could have something like this (with a lot of details still to be worked out): 1) a prediction market to predict what values existing people would vote for in the future, and 2) a prediction market to predict which interventions would fulfill those values the most. And psychological research and education that help people introspect are common goods that would likely be financed by such a System. Also, if 'advocacy' is a way of enforcing cooperative social norms, then this would be fixed by solving coordination problems.
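
As a toy illustration of how those two markets could compose (a minimal sketch; the structure, the numbers, and all names like `value_weights` and `fulfillment` are hypothetical placeholders, not a worked-out design):

```python
# Toy futarchy-style pipeline: market 1 estimates the weight people's
# future votes would give each value; market 2 estimates how well each
# intervention would fulfill each value. Funding then flows to the
# interventions with the highest expected value-fulfillment.

# Market 1: predicted vote weight of each value (normalized to sum to 1).
value_weights = {"animal_welfare": 0.2, "poverty": 0.5, "x_risk": 0.3}

# Market 2: predicted fulfillment of each value per unit of funding.
fulfillment = {
    "cage_free_campaign": {"animal_welfare": 0.9, "poverty": 0.0, "x_risk": 0.0},
    "cash_transfers":     {"animal_welfare": 0.0, "poverty": 0.8, "x_risk": 0.0},
    "ai_safety_research": {"animal_welfare": 0.0, "poverty": 0.1, "x_risk": 0.7},
}

def expected_score(intervention: str) -> float:
    """Weight each value's predicted fulfillment by its predicted vote weight."""
    return sum(value_weights[v] * f for v, f in fulfillment[intervention].items())

# Rank interventions; a real System would fund down this list until the
# marginal expected score stopped justifying the marginal dollar.
ranked = sorted(fulfillment, key=expected_score, reverse=True)
print(ranked)
```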

But maybe you want to declare ideological war, and aim to overwrite people's terminal values with yours, hence partly killing their identity in the process. If that's what you mean by 'advocacy', then you're right that this wouldn't be captured by the System, and 'philanthropy' would still be needed. But protecting ourselves against such ideological attacks is a social good: it's good for everyone individually to be protected. I also think it's likely better for everyone (or at least a supermajority) to have this protection for everyone rather than for no one. If we let ideological wars go on, there will likely be an evolutionary process that will select for ideologies adapted to their environment, which is likely to be worse from most currently existing people's moral standpoint than if there had been ideological peace. Robin Hanson has written a lot about such multipolar outcomes.

Maybe pushing for altruism right now is a good patch to fund social good in the current System. And maybe current ideological wars against weaker ideologies are rational. But I don't think that's the best solution in the long run.

Also relevant: Against moral advocacy.

I'm not sure you can or should try to capture this all without philanthropy

I proposed arguments for and against capturing philanthropy in the article. If you have more considerations to add, I'm interested.

Also, I don't think inequality will ever be fixed, since there's no well-defined target. People will always argue about what's fair, because of differing values.

I don't know. Maybe we settle on the Schelling point of splitting the Universe among all political actors (or in some other way), and this gets locked in through apparatuses like Windfall clauses (for example), and even if some people disagree with them, they can't change them. Although they could still decide to redistribute their own wealth in a way that's fairer according to their values, so in that sense you're right that there would still be a place for philanthropy.

Some issues may remain extremely expensive to address [...] so people as a group may be unwilling to fund them, and that's where advocates and philanthropists should come in.

I guess it comes down to inequality. Maybe someone thinks it's particularly unfair that someone has a rare disease, and so is willing to spend more resources on it than what the collective wants. And so they would inject more resources into a market for this value.

Another example: maybe the Universe is split equally among everyone alive at the point of the intelligence explosion, but some people will want to redistribute some of their wealth to fulfill the preferences of dead people, or will want to reward those that helped make this happen.

What is "just the right amount"?

I was thinking of something like: the amount one would spend if everyone else spent the same amount as them, repeating this process for everyone and summing all those quantities. This would just be the resources spent on a value; how to actually use the resources for that value would be decided by some expert systems.
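
To sketch that more formally (my notation; the details are still hand-wavy): let $w_i(x)$ be the amount person $i$ would choose to spend on the value if every other person spent $x$. Take each person's fixed point

$$x_i = w_i(x_i),$$

and define the total directed to that value as

$$T = \sum_{i=1}^{n} x_i.$$

So each $x_i$ is the contribution person $i$ would make under symmetric reciprocation, and those amounts are summed across everyone.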

And how do you see the UN coming to fund it if they haven't so far?

The UN would need to have more power. But I don't know how to make this happen.

If you got rid of Open Phil and other private foundations, redistributed the money to individuals proportionally, even if earmarked for altruistic purposes, and solved all coordination problems, do you think (longtermist) AI safety would be more or less funded than it is now?

At that point, we would have formed a political singleton. I think a significant part of our entire world economy would be structured around AI safety. So: more.

How else would you see (longtermist) AI safety make up for Open Phil's funding through political mechanisms, given how much people care about it?

As mentioned above, using something like Futarchy.

-----

Creating a perfect system would be hard, but I'm proposing moving in that direction. I've updated: even with a perfect system, there would still be some people wanting to redistribute their wealth, just less so than currently.

Mati_Roy's Shortform

Good point. My implicit idea was to have the money in an independent trust, so that the "punishment" is easier to enforce.

EA Survey 2019 Series: Donation Data

I wonder how people in the EA community compare with people in general, particularly when controlling for income. I also wonder how much is given in the form of a reduced salary or volunteering, and how that compares to people in general.

Why We Sleep — a tale of institutional failure

Cross-post means copy-pasting the entire article into the post on the EA Forum.

Why do we need philanthropy? Can we make it obsolete?

Thanks for your comment; it helped me clarify my model to myself.

especially politically unempowered moral beings

It proposes a lot of different voting systems to avoid (human) minorities being oppressed.

I could definitely see them develop systems to include future / past people.

But I agree they don't seem to tackle beings not capable (at least in some ways) of representing themselves, like non-human animals and reinforcement learners. Good point. It might be a blank spot for that community(?)

or many of the EA causes

Such as? Can you see other altruistic uses of philanthropy besides coordination problems, politically empowering moral beings, and fixing inequality? Although maybe that assumes preference utilitarianism. With pure positive hedonistic utilitarianism, wanting to create more happy people is not really a coordination problem (to the extent most people are not positive hedonistic utilitarians), nor about empowering moral beings (ie. happiness is mandatory), nor about fixing inequalities (nor an egoist preference).

Maybe it can make solving them easier, but it doesn't offer full solutions to them all, which seems to be necessary for making philanthropy obsolete.

Oh, I agree solving coordination failures to finance public goods doesn't solve the AI safety problem, but it solves the AI safety funding problem. In that world, the UN would arguably finance AI safety at just the right amount, so there would be no need for philanthropists to fund the cause. In that world, $1 at the margin of any public good would be just as effective. And egoistic motivations to work in any of those fields would be sufficient. Although maybe there are market failures that aren't coordination failures, like information asymmetries, in which case there might still be a use for personal sacrifices.

Mati_Roy's Shortform

Mind-readers as a neglected life extension strategy

Last updated: 2020-03-30

Status: idea to integrate in a longer article

Assuming that:

  • Death is bad
  • Lifelogging is a bet worth taking as a life extension strategy

It seems like a potentially really important and neglected intervention is improving mind readers, as the mind is by far the most important part of our experience that isn't / can't be captured at the moment.

We don't actually need to be able to read the mind right now, just to be able to record the mind with sufficiently high resolution (plausibly alongside text and audio recordings, to be able to determine which brain patterns correspond to which kinds of thoughts).
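
As a minimal sketch of what "recording now, decoding later" could look like (the file format, stream names, and sample values below are hypothetical placeholders):

```python
# Timestamp every modality against the same clock, so a future decoder
# can ask: "what was being said or typed when this brain pattern occurred?"
import json
import time

def log_sample(stream: str, payload, path: str = "lifelog.jsonl") -> None:
    """Append one timestamped sample; alignment across streams happens later."""
    with open(path, "a") as f:
        f.write(json.dumps({"t": time.time(), "stream": stream, "data": payload}) + "\n")

log_sample("eeg", [0.12, -0.07, 0.33])                       # raw brain samples
log_sample("audio_transcript", "talking about lunch plans")  # parallel audio
log_sample("text_note", "started reading about cryonics")    # parallel text
```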

Questions:

  • Assuming we had extremely good software, how much could we read minds with our current hardware? (ie. how much is it worth recording your thoughts right now?)
  • How inconvenient would it be? How much would it cost?

To do:

  • Ask on Metaculus some operationalisation of the first question

Mati_Roy's Shortform

Nuke insurance

Category: Intervention idea

Epistemic status: speculative; armchair thinking; non-expert idea; not fully fleshed out

Proposal: Have nuclear powers insure each other against nuking each other, creating a form of mutually assured destruction (ie. destroying my infrastructure means destroying your own economy). Not accepting an offer of mutual insurance should be seen as extremely hostile and uncooperative, and possibly even be severely sanctioned internationally.
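
One toy way to concretize the "insurance" (a sketch assuming an escrowed-bond implementation, which the proposal doesn't specify; the amounts and country names are made up):

```python
# Each nuclear power posts a bond large enough that forfeiting it would
# wreck its economy; a strike forfeits the attacker's bond to the victim.
bonds = {"country_a": 10**12, "country_b": 10**12}  # hypothetical amounts

def strike(attacker: str, victim: str) -> dict:
    """Model the insurance trigger: the attacker's entire bond pays out."""
    payout, bonds[attacker] = bonds[attacker], 0
    return {"to": victim, "amount": payout}

print(strike("country_a", "country_b"))  # attacking transfers A's bond to B
```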

What are EA project ideas you have?

The Bullshit Awards

Proposal: Give prizes to people spotting / blowing the whistle on papers that bullshit their readers, and explaining why.

Details: There could be a Bullshit Alert Prize for the one blowing the whistle, and a Bullshit Award for the one having done the bullshitting. This would be similar to the Darwin Awards in that you don't want to be the source of such an award.

Example: An analysis that could have won this is Why We Sleep — a tale of institutional failure.

Note: I'm not sure whether that's a good way to go about fixing that problem. Is shaming a useful tool?
