PaulCousens

33 karma · Joined Dec 2021 · Downtown, Salt Lake City, UT, USA

Comments (42)

This year, I donated to NPR, Women's March, Zendo Project (started donating to them a few months ago), and UNHCR (started donating earlier this year).

I don't have large amounts to donate, so I don't have much of a donation plan. My donation decisions are mostly made on a whim.

I first started donating to NPR because I was often listening to their podcasts and their radio station. I still do, but not as often.

I first started donating to Women's March because I feel aligned with their socioeconomic goal of greater equity.

I'm not sure why I started donating to Zendo Project. Maybe it is because work, research, use, etc. around psychedelics seem difficult because of their legal status, so donating would be more impactful than it would be if they did not face such legal barriers.

Maybe the invasion of Ukraine primed some process inside of me that led to me donating to UNHCR. I don't remember exactly what I was thinking. Maybe it had to do with the invasion getting so much news coverage.

Edit: I forgot to mention that I have also been donating to HRC. I do this for the same reason I donate to Women's March.

Yes, I think you are right. Sorry, I made too broad a statement when I only had things like strength and speed in mind.

I think it is true that its utility is limited. It was just a first impression that occurred to me, and I haven't thought it through. It seemed like anthropomorphizing AI could consistently keep people on their toes with regard to AI. An alternative way to become wary of AI would be through less obvious thoughts, like imagining an AI that becomes a paperclip maximizer. However, developing and consistently holding anthropomorphic priors about AI may be disadvantageous by constraining people's ability to have outside-the-box suspicions about them (like what they may already be covertly doing) and apprehensions (like them becoming paperclip maximizers).

Anthropomorphizing AI could also help with thinking of AIs as moral patients. But I don't think that being human should be the sufficient standard for being a moral patient. So thinking of them as humans may just be useful insofar as it initiates thinking of them as moral patients; eventually, an understanding of them as moral patients may involve considerations that are particular to them, and not just the ways they are like us.

I have that audiobook by Deutsch and I never thought of making that connection to longtermism. 

I am reminded of the idea of the ruliad, where a species' perspective is just a slice of the rulial space of all possible kinds of physics.

I am also reminded of the AI that Columbia Engineering researchers built, which found new variables to predict phenomena we already have formulas for. The AI's predictions using the variables worked well, and it was not clear to the researchers what all of the variables were.

The unpredictability of discoveries and the two things I mentioned seem to share the theme that our vantage point is just the tip of an iceberg.

I don't think that the unknowability of future knowledge discoveries refutes longtermism. On the contrary, because future knowledge could lead to bad things, such unknowability makes it more important to be prepared for various possibilities. Even if for every bad use of technology or knowledge there is a good use that can exactly counteract it, the bad uses can be employed or discovered first, and the good uses may not have enough time to counteract the bad outcomes.

As I understand it, Will MacAskill pointed out in Doing Good Better that people doing such low-pay work are actually utilizing a relatively great opportunity in their country, and that the seemingly low pay is actually valuable there.

I think it is hybrid because it involves both forecasting and persuading others to think differently about their forecasts.

I would be interested in writing summaries of books. I did this with two books that I read within the past two years, The Human Use of Human Beings and Beyond Good and Evil. I imagine that I might have excluded many things that I expected myself to easily remember as following logically from, or being associated with, what I did write down. For The Human Use of Human Beings, I tried to combine several of the ideas into one picture; I think what I had in mind was to put all the ideas of the book into a visual dashboard (which I did not complete). I don't think I still have the summary I did of Beyond Good and Evil, but I imagine that for that one I may have written down many passages that I did not completely follow.

Writing a summary of a book can help you process the book more deeply and in different ways.

Part of the trap is that once you’re in the trap trying and failing to get out of it doesn’t help you much, so traits that would help in abundance don’t have a hill they can climb.

Can you clarify what you mean by this? I didn't follow you after you wrote "so traits that would help in abundance don't have a hill they can climb."

I think maybe you meant that appreciation of the worth of money is valuable only until you fall into the trap of spending too much of it. Once you fall into that trap, appreciating its worth won't be helpful to you.

I did not find your blog post about moral offsetting offensive or insensitive. Your explanations of the evolutionary reasons why we have such visceral reactions to rape addressed, for me, the moral outrageousness with which rape is associated. Also, you clearly stated your own inability to be friends with a rapist. Philosophical discussions are probably better when they include sensitive issues, so that they can have more of an impact on our thought processes.

Also, there was another post on here which mentioned that a community organizer could come off as cult-like, dogmatic, or like they're reading from a script. So, for that reason, it's probably better not to try to censor yourself.

Regarding the content of the post about moral offsetting itself:

The problem I have with the thought experiments in which rape could lead to less rape overall is that there shouldn't be situations where such hard choices are presented. While it is true that ideally we shouldn't have to face such hard decisions, I am probably underestimating the power of situations and overestimating my and others' ability to act.

As someone who is turned off by the idea of moral offsetting, I found that your zombie-rapists thought experiment helped me see the utility of offsetting a bit more clearly. As you said, offsetting the harm to animals is not ideal, but if it is more effective at getting us to a reality that is more ideal for animals, then it is valuable.

Before I read about the results of the study, my a priori assumptions were that the money wouldn't help because of bills, but that some kind of benefit must come out of it.

Without a reliable source of income, even if they did not have many bills, it is hard to see how even $2,000 could help in the long term.

To me, it seems that an unconditional cash transfer that helps temporarily but not in a long-term way might make people feel worse by making the counterfactual of being better off more vivid. The $500 or $2,000 unconditional cash transfer brings them somewhat closer to the reality of being better off, but not close enough.

I wonder if there is a minimum length of subsistence that could be established for unconditional cash transfers so that they help people universally, regardless of the wealth of their country.
