Mati_Roy

Comments

Mati_Roy's Shortform

oh, of course, for-profit charities are a thing! that makes sense

I learned about it in "Economics Without Illusions", chapter 8.

just because your organization's product/service/goal is to help other people, and your customers are philanthropists, doesn't mean you can't make a profit.

profitable charities might increase competition to provide effective altruism, and so could still add value on net even though they take a profit (maybe)

https://en.wikipedia.org/wiki/Charitable_for-profit_entity

x-post: https://www.facebook.com/mati.roy.09/posts/10159007824884579

The case for investing to give later

I can't find the donate button on the Founders Pledge site. Do you no longer have room for additional funding?

Mati_Roy's Shortform

Is there a name for a moral framework in which someone cares more about the moral harm they directly cause than about other moral harm?

I feel like a consequentialist would care about the harm itself, whether or not they caused it.

And a deontologist wouldn't act in a certain way even if doing so meant they would act that way less in the future.

Here's an example (it's just a toy example; let's not argue about whether it's true).

A consequentialist might eat meat if they can use the saved resources to make 10 other people vegan.

A deontologist wouldn't eat honey even if they knew that, by abstaining, they would crack in the future and start eating meat.

If you care much more about harm caused by you, you might act differently from both of them: you wouldn't eat meat to make 10 other people vegan, but you might eat honey to avoid cracking later and starting to eat meat.

A deontologist is like someone adopting this framework but with an empty individualist view of personal identity; a consequentialist is like someone adopting it with an open individualist view.
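Here's a rough sketch of how this could be made precise (my own notation, nothing standard; it assumes harms can be quantified and that "directly caused" can be cashed out, which the ETA below flags as unclear). Score an act a by

\[
V(a) \;=\; \lambda \sum_{h \in H_{\text{me}}(a)} d(h) \;+\; \sum_{h \in H_{\text{other}}(a)} d(h), \qquad \lambda > 1,
\]

where H_me(a) is the set of harms the act makes me directly cause, H_other(a) is everyone else's harm, and d(h) is the disvalue of a harm. The framework I'm proposing is λ > 1 with "me" being my whole temporal self. Setting λ = 1 recovers consequentialism. A large λ with empty individualism (only my present time-slice counts as "me") behaves like the deontologist above: eating honey now is weighted λ, while my future self's meat-eating is only weighted 1. A large λ with open individualism (everyone counts as "me") puts every harm in H_me, so the weighting cancels and it collapses back into consequentialism.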

I wonder whether most self-labeled deontologists would actually prefer the framework I'm proposing.

ETA: I'm not sure how well "directly caused" can be cashed out. Does anyone have a model for that?

x-post: https://www.facebook.com/groups/2189993411234830/ (post currently pending)

Which norms would you like to see on the EA Forum?

I wish people x-posting between LessWrong and the EA Forum encouraged users to comment on only one of them, to centralize discussion. To increase the probability that people actually follow this suggestion, for posts (which take a long time to read anyway, compared to the time it takes to click a link), I would just put the post on one of the two sites and a link to it on the other.

Mati_Roy's Shortform

Policy suggestion for countries with government-funded health insurance or healthcare: people who use death-with-dignity options could receive part of the money the government saves, where applicable.

That money could be used to pay for cryonics, among other things.

The community's conception of value drifting is sometimes too narrow

EA isn't (supposed to be) dogmatic, and hence doesn't have clearly defined values.

I agree.

I think this is a big reason why people have chosen to focus on behavior and community involvement.

Community involvement is just instrumental to the goals of EA movement building. I think the outcomes we want to measure are things like careers and donations. We also want to measure things that are instrumental to those outcomes, but I think we should keep the two separate.

Related: my comment on "How have you become more (or less) engaged with EA in the last year?"

How have you become more (or less) engaged with EA in the last year?

I think it would be good to differentiate things that are instrumental to doing EA from things that are doing EA.

Ex.: attending events and reading books are instrumental; working and donating money are directly EA.

I would count those separately. Engagement in the community is just instrumental to the goal of EA movement building. If we entangle the two in our discussions, we might end up with people attending a bunch of events and reading a lot online, but never producing value (for example).

Although maybe engagement does produce value in itself, because engaged people can do movement building themselves and become better voters, for example. And focusing a lot on engagement might turn EA into a robust, superorganism-like entity. If that's the argument, then that's fine, I guess.

Somewhat related: The community's conception of value drifting is sometimes too narrow.
