
jkmh

39 karma · Joined May 2020

Comments (12)

This answer clarified what I was clumsily trying to get at with my analogy. Thank you. I think the answer to my original question is a definite "no" at this point.

I sometimes get frustrated when I hear someone trying to "read between the lines" of everything another person says or does. I get even more frustrated if I'm the one involved in this type of situation. It seems that non-rhetorical exploratory questions (e.g. "what problem is solved by person X doing action Y?") are often taken as rhetorical and accusatory (e.g. "person X is not solving a problem by doing action Y.")

I suppose a lot of it comes down to presentation and communication skills. If you communicate very well, people won't try as hard to read between the lines of your statements and questions.

However, I still believe there is room for people to do a little less "reading between the lines," even in situations where they really want to. Doing so can reduce friction and sometimes avoid an unnecessary conflict entirely.

I searched for this topic on both the EA Forum and LessWrong and didn't find much, at least under the phrasing "reading between the lines." Does anyone have any links or articles that explore this idea more thoroughly?

Answer by jkmh, Aug 06, 2021

I occasionally donate to user-funded services, but it is very ad hoc, and not a lot of thought goes into deciding which ones. I think I donated to Wikipedia a few years ago, and I donated to a local public radio station once. It usually happens after I use a service for a while and suddenly think, "hmm, I want the people who make that to know I appreciate their service."

I don't think it's ever been anything more than $20. And again, there's no rigorous decision-making process; something about $20 just seems right as an "appreciation donation." The dollar amount might go higher if I ever encountered a user-funded service that I believed needed more from me to stay afloat.

To add on to your thoughts about argument 2: even if taking breaks with X podcaster is crucial to your personal productivity, you should still ask yourself whether X podcaster needs your money to continue podcasting. And even if you decide they don't need your money to continue, but you really want those fuzzies from donating to them, remember to purchase fuzzies and utilons separately.

What do you mean by "correct"?

When you say "this generalizes to other forms of consequentialism that don't have a utility function baked in," what does "this" refer to? Is it the statement "there may be no utility function that accurately describes the true value of things"?

Do the "forms of consequentialism that don't have a utility function baked in" ever intend to have a fully accurate utility function?

Answer by jkmh, Jun 15, 2021

I imagine most people reading your question don't want to list out a bunch of bad ideas. But I think that might be what's needed at this point, because the more we enumerate (and eliminate), the clearer it becomes whether CEEALAR is actually the best option. Or maybe seeing a bunch of bad ideas will spark a good one in someone's mind. Here:

Centre for Effective Learning, Centre for Learning Good, Blackpool Pastures, Hotel Hedon, Hotel for Effective Research, Maxgood Productions, Maxwell Hotel, EA Retreat House.

Yeah, this is difficult.

Answer by jkmh, Jun 13, 2021

I don't have much to add, other than letting you know you're not alone in looking for this. I started doing a similar thing a few weeks ago. A couple more "sandboxes" you could add are GlobalGiving and Kickstarter. It's a bit difficult to find the projects an EA might be looking for on Kickstarter, but the "evaluation exercise" you're trying to do could be good practice even there (e.g. trying to determine the potential positive impact of this app that aims to build a habit of not touching your face).

This is an interesting perspective. It makes me wonder if (and how) there could be decently defined sub-groups that EAs can easily identify, e.g. "long-termists interested in things like AI" vs. "short-termists who place significantly more weight on currently living beings," or "human-centered" vs. "those who place significant weight on non-human lives."

As within Christianity, specific values and interpretations can and should be diverse, which leads to sub-groups. But there is a sort of "meta-value" that all sub-groups hold: that we should use our resources to do the most good that we can. It is vague enough to be interpreted in many ways, but specific enough to keep the community organized.

I think the fact that I could come up with (vaguely defined) examples of sub-groups indicates that, in some sense, the EA community already has sub-communities. I agree with the original post that there is a risk of too much value-alignment, which could lead to stagnation or other negative consequences. However, in my two years of reading and learning about EA, I've never thought that EAs were unaware of or overconfident in their beliefs; it seems to me that EAs are self-critical and self-aware enough to consider many viewpoints.

I personally have never felt that not wanting (or even being able to imagine) an AI singleton that brings stability to humanity meant I wasn't an EA.

Answer by jkmh, Jun 25, 2020

Reading and following through reference links in the Wikipedia for "Reciprocity" might be a good start: https://en.wikipedia.org/wiki/Reciprocity_(social_psychology)

I had trouble finding much else by Googling things like "science of guilt".

Are you wondering if the possible negative effects of shame/guilt could cause more harm than help in certain scenarios?

I also wonder whether help coming from "institutions" lowers recipients' feelings of guilt, because it's less personal. Receiving help from "Organization X" seems easier to accept than receiving help from a face with a name who seems to be sacrificing time and resources for you.
