Why & How to Make Progress on Diversity & Inclusion in EA

Another (possibly bad, but want to put it out there) solution is to list names of people who downvoted. That of course has downsides, but it would have more accountability, especially when it comes to my suspicion that it's a few people doing a lot of the downvoting against certain people/ideas.

Another is to have downvotes 'cost' karma, e.g. if you have 500 total karma, that allows you to make 50 downvotes.

Why & How to Make Progress on Diversity & Inclusion in EA

Yeah, I'm totally onboard with all of that, including the uncertainty.

My view on downvoting is less that we need to remove it, and more that the status quo is terrible and we should be trying really hard to fix it.

Why & How to Make Progress on Diversity & Inclusion in EA

Yeah, I don't think downvotes are usually the best way of addressing bad arguments, in the sense that someone is making a logical error, is mistaken about an assumption, is missing some evidence, etc. In cases like this thread, I think downvoting leads to dogpiling, groupthink, and hostility, and that harm outweighs the benefit of downvoting as a way to flag bad arguments when thoughtful people don't have time to flag them via a thoughtful comment.

I think downvotes are mostly just good for bad comments in the sense that someone is purposefully lying, relying on personal attacks instead of evidence, or otherwise not abiding by basic norms of civil discourse. In these cases, I don't think the downvoting comes off as nearly as hostile.

If you agree with that, then we must just disagree on whether examples (like my downvoted comment above) are bad arguments or bad comments. I think the community does pretty often downvote stuff it shouldn't.

Why & How to Make Progress on Diversity & Inclusion in EA

Another concrete suggestion: I think we should stop having downvotes on the EA Forum. I might be not appreciating some of the downsides of this change, but I think they are small compared to the big upside of mitigating the toxic/hostile/dogpiling/groupthink environment we currently seem to have.

When I've brought this up before, people liked the idea, but it never got discussed very thoroughly or implemented.

Edit: Even this comment seems to be downvoted due to disagreement. I don't think this is helpful.

Why & How to Make Progress on Diversity & Inclusion in EA

For what it's worth, I think if you had instead commented with: "As a newcomer to this community, I see very little evidence that EA prizes accuracy more than average. This seems contrary to its goals, and makes me feel sad and unwelcome," (or something similar that politely captures what you mean) that would have been a valuable contribution to the discussion.

That being said, you might have still gotten downvoted. People's downvoting behavior on this forum is really terrible and a huge area for improvement in online EA discourse.

Why & How to Make Progress on Diversity & Inclusion in EA

I wouldn't concern yourself much with downvotes on this forum. People use downvotes for a lot more than the useful/not useful distinction they're designed for (the most common other reason is to signal against views they disagree with when they see an opening). I was recently talking to someone about what big improvements I'd like to see in the EA community's online discussion norms, and honestly, if I could either remove bad comment behavior or remove bad liking/voting behavior, it'd actually be the latter.

To put it another way, though I'm still not sure exactly how to explain this, I think no downvotes and one thoughtful comment explaining why your comment is wrong (and no upvotes on that comment) should do more to change your mind than a large number of downvotes on your comment.

I'm really still in favor of just removing downvotes from this forum, since this issue has been so persistent over the years. I think there would be downsides, but the hostile/groupthink/dogpiling environment that the downvoting behavior facilitates is just really really terrible.

Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood

That pragmatic approach makes sense and helps me understand your view better. Thanks! I do feel like the consequences of suggesting objectivism for consciousness are more significant than for "living things," "mountains," and even terms that are themselves very important like "factory farming."

Consequences being things like (i) whether we get wrapped up in the ineffability/hard problem/etc. such that we get distracted from the key question (for subjectivists) of "What are the mental things we care about, and which beings have those?" and (ii) in the particular case of small minds (e.g. insects, simple reinforcement learners), whether we try to figure out their mental lives based on objectivist speculation (which, for subjectivists, is misguided) or force ourselves to decide what the mental things we care about are, and then thoughtfully evaluate small minds on that basis. I think evaluating small minds is where the objective/subjective difference really starts to matter.

Also, to a lesser extent, (iii) how much we listen to "expert" opinion beyond just people who are very familiar with the mental lives of the being in question, and (iv) unknown unknowns and keeping a norm of intellectual honesty, which seems to apply more to discussions of consciousness than of mountains/etc.

Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood

I think Tomasik's essay is a good explanation of objectivity in this context. The most relevant brief section:

Type-B physicalists maintain that consciousness is an actual property of the world that we observe and that is not merely conceptually described by structural/functional processing, even though it turns out a posteriori to be identical to certain kinds of structures or functional behavior.

If you're Type A, then presumably you don't think there's this sort of "not merely conceptually described" consciousness. My concern then is that some of your writing seems to not read like Type A writing, e.g. in your top answer in this AMA, you write:

I'll focus on the common fruit fly for concreteness. Before I began this investigation, I probably would've given fruit fly consciousness very low probability (perhaps <5%), and virtually all of that probability mass would've been coming from a perspective of "I really don't see how fruit flies could be conscious, but smart people who have studied the issue far more than I have seem to think it's plausible, so I guess I should also think it's at least a little plausible." Now, having studied consciousness a fair bit, I have more specific ideas about how it might turn out to be the case that fruit flies are conscious, even if I think they're relatively low probability, and of course I retain some degree of "and maybe my ideas about consciousness are wrong, and fruit flies are conscious via mechanisms that I don't currently find at all plausible." As reported in section 4.2, my current probability that fruit flies are conscious (as loosely defined in section 2.3.1) is 10%.

Speaking of consciousness in this way seems to imply there is an objective definition, but as I speculated above, maybe you think this manner of speaking is still justified given a Type A view. I don't think there's a great alternative to this for Type A folks, but what Tomasik does is just frequently qualifies that when he says something like 5% consciousness for fruit flies, it's only a subjective judgment, not a probability estimate of an objective fact about the world (like whether fruit flies have, say, theory of mind).

I do worry that this is a bad thing for advocating for small/simple-minded animals, given it makes people think "Oh, I can just assign 0% to fruit flies!" but I currently favor intellectual honesty/straightforwardness. I think the world would probably be a better place if Type B physicalism were true.

Makes sense about the triviality objection, and I appreciate that a lot of your writing like that paragraph does sound like Type A writing :)

Hi, I'm Luke Muehlhauser. AMA about Open Philanthropy's new report on consciousness and moral patienthood

Thanks for doing this AMA. I'm curious for more information on your views about the objectivity of consciousness, e.g. Is there an objectively correct answer to the question "Is an insect conscious?" or does it just depend on what processes, materials, etc. we subjectively choose to use as the criteria for consciousness?

The Open Phil conversation notes with Brian Tomasik say:

Luke isn’t certain he endorses Type A physicalism as defined in that article, but he thinks his views are much closer to “Type A” physicalism than to “Type B” physicalism.

(For readers, roughly speaking, Type A physicalism is the view that consciousness lacks an objective definition. Tomasik's well-known analogy is that there's no objective definition of a table, e.g. if you eat on a rock, is it a table? I would add that even if there's something we can objectively point to as our own consciousness (e.g. the common feature of the smell of a mushroom, the emotion of joy, seeing the color red), that doesn't give you an objective definition in the same way knowing one piece of wood on four legs is a table, or even having several examples, doesn't give you an objective definition of a table.)

However, in the report, you write as though there is an objective definition (e.g. in the "Consciousness, innocently defined" section), and I feel most readers of the report will get that impression, e.g. that there's an objective answer as to whether insects are conscious.

Could you elaborate on your view here and the reasoning behind it? Perhaps you do lean towards Type A (no objective definition), but think it's still useful to use common sense rhetoric that treats it as objective, and you don't think it's that harmful if people incorrectly lean towards Type B. Or you lean towards Type A, but think there's still enough likelihood of Type B that you focus on questions like "If Type B is true, then is an insect conscious?" and would just shorthand this as "Is an insect conscious?" because e.g. if Type A is true, then consciousness research is not that useful in your view.

Introducing Sentience Institute

Thanks, Andy. That table had the values of the previous table for some reason. We updated the page.
