
Charlie_Guthmann

836 karma · Joined

Bio

Talk to me about cost-benefit analysis!

Comments (219)

The QURI people
Luisa Rodriguez
Arepo
Vasco Grilo
Michael St. Jules
The moral weights people at RP
Brian Tomasik
Carl Shulman

The argument about anti-realism just reinforces my view that effective altruism needs to break apart into sub-movements that clearly state their goals/ontologies. (I'm pro-EA,) but it increasingly doesn't make sense to me to call this "effective altruism" and then be vaguely morally agnostic while mostly just being an applied utilitarian group. Even among the utilitarians there is tons of minutiae that actually significantly alters the value estimates of different things.

I really do think we could solve most of this stuff by just making EA an umbrella org for EA-minded orgs that each have specific goals and ontologies (democratic negative utilitarians, dictatorial anti-realist dog lovers, etc.).

To be fair, I don't think it matters that much for the above whether you are or aren't an anti-realist. Rather, it just reminds me that even here I have large worldview differences from a lot of people (which is totally fine), but I think it would be clearer to everyone if there were more de jure divisions.

 
The most important reason for my favoring moral realism is my sense that some goals


Your sense is just vibes.
 

In the same way that some things are true and worth believing, some things are good and worth desiring.

Some things may be true, depending on what you mean by "true". "Worth believing" would presuppose realism, depending on what you mean by "worth". If this sentence matters to your argument, then the whole thing is circular.
 

We should ultimately find the notion of justified goals to be no more deeply mysterious than that of justified beliefs.
 

Obviously not true, but Peter addresses this.

 

To deny the objective reality of either goodness or truth would seem to undermine inquiry, and there's no deeply compelling reason to do so.
 

Again, you are presupposing and/or being circular.

There isn't a coherent argument here. It's just you coming to the table with your priors and handwaving them. I appreciate you saying your piece, but I don't find this even mildly compelling, and I'm struggling to understand the level of agreement.

I think some states of the world are objectively better than others, pleasure is inherently good and suffering is inherently bad, and that we can say things like "objectively it would be better to promote happiness over suffering."

 

I know lots of people who think some amount of suffering is good (and not just instrumentally for having more pleasure later). Is your claim here just that you somehow know that pleasure is inherently good?

I think the belief you are describing is more accurately "I'm confident my subjective view won't change," or something like that.

100% disagree: Morality is Objective

 

The is–ought problem.

Left side is people/acts/vibes of altruism; right side is people/acts/vibes of science, an evidence-based mindset, and rationality; the middle is a combination of the two. Photo below is a loose style guide. It could come off as pretentious, but I think you can avoid that by having all the people in the middle be historical examples rather than figures from the current EA movement.

Really like the idea. Also, I would say yes, you need to keep this to an extremely limited domain; otherwise I would assume the main crux will just be the LLM-vs-human analysis of the relative value of different cause areas. Agree with 2/3 though.

It seems like there are two different broad ideas at play here: how well the blog post handles its topic, and how important the topic is. I suppose you can try to tackle both at once, but I feel like that might be biting off a lot at once.

A topic I personally like to think about, and for which I could gather 20 closely related posts, is the percent chance of extinction risk and/or the relative value of s-risk reduction vs. extinction risk reduction.
