All of Stephen Clare's Comments + Replies

Sasha Chapin on bad social norms in EA

I agree that a good number of people around EA trend towards sadness (or maybe "pits of despair"). It's plausible to me that the proportion of the community in this group is somewhat higher than average, but I'm not sure about that. If that is the case, though, then my guess is that some selection effects, rampant Imposter Syndrome, and the weight of always thinking about ways the world is messed up are more important causes than social norms. 

I have to say, I actually chuckled when I read "don’t ever indulge in Epicurean style" listed as an iron-clad EA norm. That, uhh, doesn't match my experience.

Donating money, buying happiness: new meta-analyses comparing the cost-effectiveness of cash transfers and psychotherapy in terms of subjective well-being

I'm interested in reading critiques of StrongMinds' research, but downvoted this comment because I didn't find it very helpful or constructive.  Would you mind saying a bit more about why you think their standards are low, and the evidence that led you to believe they are "making up" numbers?

They did not have a placebo-receiving control group, for example some kind of unstructured talking group: ideally an intervention known to be useless but that sounds plausible. So we do not know which effects are due to regression to the mean, socially desirable answering, etc. This is basically enough to make their research rather useless, and proper control groups have been standard for quite a while.

No "real" evaluation of the results: they rely only on what their patients said, without checking whether it is correct (children going to school more often…). Not eve... (read more)

The motivated reasoning critique of effective altruism

They do discuss some of these and have published a few here, though I agree it would be cool to see some for longtermism (the sample BOTECs are for global health and wellbeing work).

The pretty hard problem of consciousness

Thanks for writing this summary! This all seems really important and really hard to figure out. What approaches/methods do researchers use to suggest answers to these kinds of questions? Can you give some examples of recent progress?

[Replying separately with comments on progress on the pretty hard problem; the hard problem; and the meta-problem of consciousness]

The meta-problem of consciousness is distinct from both a) the hard problem: roughly, the fundamental relationship between the physical and the phenomenal, and b) the pretty hard problem: roughly, knowing which systems are phenomenally conscious.

The meta-problem is c) explaining "why we think consciousness poses a hard problem, or in other terms, the problem of explaining why we think consciousness is hard to explain" (6)

The me... (read more)

rgb · 3mo

[Replying separately with comments on progress on the pretty hard problem [https://forum.effectivealtruism.org/posts/Qiiiv9uJWLDptH2w6/the-pretty-hard-problem-of-consciousness?commentId=vtscMAokvSdg6urRm]; the hard problem; and the meta-problem of consciousness]

Progress on the hard problem

I am much less sure of how to think about this than about the pretty hard problem. This is in part because, in general, I'm pretty confused about how philosophical methodology works, what it can achieve, and the extent to which there is progress in philosophy [http://consc.net/papers/progress.pdf]. This uncertainty is not in spite of, but probably because of, doing a PhD in philosophy! I have considerable uncertainty about these background issues.

One claim that I would hang my hat on is that the elaboration of (plausible) philosophical positions in greater detail, and more detailed scrutiny of them, is a kind of progress. And in this regard, I think the last 25 years have seen a lot of progress on the hard problem. The possible solution space has been sketched more clearly [http://consc.net/papers/nature.html], and arguments elaborated.

One particularly interesting trend is the elaboration of the more 'extreme' solutions to the hard problem: panpsychism and illusionism. Panpsychism solves the hard problem by making consciousness fundamental and widespread; illusionism dissolves the hard problem by denying the existence of consciousness. Funnily enough, panpsychists and illusionists actually agree on a lot. They are both skeptical of programs that seek to identify consciousness with some physical, computational, or neural property, and they both think that if consciousness exists then it has some strange-sounding relation to the physical. For illusionists, this (putative) anomalousness of consciousness is part of why they conclude it must not exist. For panpsychists, this (putative) anomalousness of consciousness is part of why they are led to embrace a position that strikes
rgb · 3mo

That's a great question. I'll reply separately with my takes on progress on a) the pretty hard problem, b) the hard problem, and c) something called the meta-problem of consciousness [1].

[1] With apologies for introducing yet another 'problem' to distinguish between, when I've already introduced two! (Perhaps you can put these three problems into Anki?)

Progress on the pretty hard problem

This is my attempt to explain Jonathan Birch's recent proposal [https://philpapers.org/archive/BIRTSF.pdf] for studying invertebrate consciousness. Let me know if it makes rough sense!

The problem with studying animal consciousness is that it is hard to know how much we can extrapolate from what we know about what suffices for human consciousness. Let's grant that we know from experiments on humans that you will be conscious of a visual perception if you have a neural system for broadcasting information to multiple sub-systems in the brain (this is the Global Workspace Theory mentioned above), and that the visual perception is broadcast. Great, now we know that this sophisticated human Global Workspace suffices for consciousness. But how much of that is necessary? How much simpler could the Global Workspace be and still result in consciousness?

When we try to take a theory of consciousness "off the shelf" and apply it to animals, we face a choice of how strict to be. We could say that the Global Workspace must be as complicated as in the human case. Then no animals count as conscious. We could say that the Global Workspace can be very simple. Then maybe even simple programs count as conscious. To know how strict or liberal to be in applying the theory, we need to know which animals are conscious. Which is the very question!

Some people try to get around this by proposing tests for consciousness that avoid the need for theory; the Turing Test would be an example of this in the AI case. But these usually end up sneaking theory in the backdoor. Here's Birch's proposal for getting aro
AMA: Jason Brennan, author of "Against Democracy" and creator of a Georgetown course on EA

The forum guidelines suggest I downvote comments when I dislike the effect they have on a conversation. One of the examples the guidelines give is when a comment contains an error or bad reasoning. While I think the reasoning in Ruth's comment is fine, I think the claim that capitalism is unsustainable and causes "massive suffering" is an error. Nor is the claim backed up by any links to supporting evidence that might change my mind. The most likely effect of ruth_schlenker's comment is to distract from Halstead's original comment and inflame the discussion, i.e. have a negative effect on the conversation.

Capitalism could be worse than some alternative due to factory farming, climate change or various other global catastrophic risks, although we really need to consider specific alternatives. So far, I think it's pretty clear that what we've been doing has been unsustainable, but that doesn't mean replacing capitalism is better than reforming or regulating it, and technology does often address problems.

Economic policy in poor countries

Hey, I really appreciate this discussion! I wanted to jump in on one point. You note that the Founders Pledge follow-up to the original growth post (which I co-wrote) concluded that it would be too costly to continue the research to identify funding opportunities. I just wanted to note that that was the case because of how FP's funding model works. FP staff don't directly control the pledged funds: the members make the final decision over where to donate, and can take or leave the recommendations.

Since policy orgs are difficult to evaluate, I w... (read more)