Epistemic status: grumpy, not committed.
There was quite a lot of discussion of the karma system in the comments to the forum 2.0 announcement, but it didn’t seem very conclusive and as far as I know, hasn’t been publicly discussed since.
That seems concerning enough to be worth revisiting. My worries are:
- Karma concentration exacerbates groupthink by allowing a relatively small number of people to influence which threads and comments have greatest visibility
- It leads to substantial karma inflation over time, strongly biasing vote totals in favour of recent posts
Point 1) was discussed a lot in the original comments. The response was that because it’s a pseudo-logarithmic scale, this shouldn’t be much of a concern. I think we now have reasons to be sceptical of this response:
- There are plenty of people with quite powerful upvotes now - mine are currently worth 5 karma, very close to 6, and I’ve posted fewer than a dozen top-level posts. That gives me 3-6 times the strong-voting power of a forum beginner, which seems like far too much.
- While top-level posts are the main concern, comments receive far fewer votes, so the effect of one or two strong votes stands out much more when you’re skimming through them.
- The people with the highest karma naturally tend to be the most active users, who are likely already the most committed EAs. This means we already have a natural source of groupthink (assuming that the more committed you are to a social group, the more likely you are to have bought into any given belief it tends to hold). So groupthinky posts would already tend to get more attention, and giving these active users greater voting power multiplies the effect.
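To make the worry about the "pseudo-logarithmic scale" response concrete, here's a toy model. The function below is purely illustrative - it is not the forum's actual formula - but it shows that even strictly logarithmic scaling still hands an established poster a severalfold vote-power multiple over a beginner:

```python
import math

def strong_vote_power(karma: int) -> int:
    """Hypothetical pseudo-logarithmic strong-vote power.

    Illustrative stand-in only, NOT the forum's real formula:
    power grows with the base-10 log of accumulated karma.
    """
    return 1 + int(math.log10(max(karma, 1)))

# A brand-new user vs a moderately active poster:
print(strong_vote_power(1))     # -> 1
print(strong_vote_power(2000))  # -> 4, i.e. a 4x multiplier
```

Even under this deliberately tame scaling, a user with a couple of thousand karma casts strong votes worth several of a newcomer's - so "it's logarithmic" doesn't by itself dissolve the concentration worry.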
Point 2) is confounded by the movement and user base having grown, so a higher proportion of posts have been made in later years, when there were more potential upvoters. Nonetheless, unless you believe the number of posts has grown faster than the number of users (so that the available karma is spread more thinly), it seems self-evident that there is at least some degree of karmic inflation.
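A toy illustration of that arithmetic (all figures invented): if total votes cast scale with the user count, then karma available per post scales with the users-to-posts ratio, so inflation follows whenever users grow faster than posts:

```python
# Toy model of karmic inflation. All numbers are invented for illustration.
# Assumption: total votes cast scales with the number of active users.
users_year1, posts_year1 = 1_000, 500
users_year2, posts_year2 = 4_000, 1_200  # users grew 4x, posts only 2.4x

votes_per_user = 10  # assumed constant voting behaviour per user

karma_per_post_y1 = users_year1 * votes_per_user / posts_year1
karma_per_post_y2 = users_year2 * votes_per_user / posts_year2

print(karma_per_post_y1)  # -> 20.0
print(karma_per_post_y2)  # -> ~33.3: later posts collect more karma each
```

The direction of the effect only reverses if posts proliferate faster than users - exactly the condition flagged in the paragraph above.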
So my current stance is that, while the magnitude of both effects is hard to gauge because of these confounding factors, both effects are probably net negative in themselves, and therefore things our tools should not be amplifying - we might even want to actively counteract them. I don’t have a specific fix in mind, though plenty were discussed in the comments section linked above. This is just a quick post to encourage discussion of alternatives… so over to you, commenters!
Fwiw I didn't downvote this comment, though I would guess the downvotes were based on the somewhat personal remarks/rhetoric. I'm also finding it hard to parse some of what you say.
This still leaves a lot of room for subjective interpretation, but in the interests of good faith, I'll give what I believe is a fairly clear example from my own recent investigations: it seems that somewhere between 20% and 80% of the EA community believe that the orthogonality thesis shows that AI is extremely likely to wipe us all out. This is based on a drastic misreading of an often-cited ten-year-old paper, which is publicly available for any EA to check.
Another odd belief, albeit one which seems more muddled than mistaken, is the role of neglectedness in 'ITN' reasoning. What we ultimately care about is the amount of good done per unit of resources, i.e., roughly, <importance>*<tractability>. Neglectedness is just a heuristic for estimating tractability in the absence of more precise methods. Perhaps it's a heuristic with interesting mathematical properties, but it's not a separate factor, as it's often presented as being. For example, in 80k's new climate change profile, they cite 'not neglected' as one of the two main arguments against working on it. I find this quite disappointing - all it gives us is a weak a priori probabilistic inference, totally insensitive to what the money has been spent on and to the scale of the problem, which is much less than we could learn about tractability by looking directly at the best opportunities to contribute to the field, as Founders Pledge did.
I don't know why you conclude this. I specified 'belief shared widely among EAs and not among intelligent people in general'. That is a very small subset of beliefs, albeit a fairly large subset of EA ones. And I do think we should be very cautious about a karma system that biases towards promoting those views.