Epistemic status: grumpy, not committed.
There was quite a lot of discussion of the karma system in the comments to the forum 2.0 announcement, but it didn’t seem very conclusive and as far as I know, hasn’t been publicly discussed since.
That seems like enough of a concern that it's worth revisiting. My worries are:
- Karma concentration exacerbates groupthink by allowing a relatively small number of people to influence which threads and comments get the greatest visibility
- It leads to substantial karma inflation over time, strongly biasing recent posts towards getting more upvotes than comparable older ones
Point 1) was discussed a lot in the original comments. The response was that because it's a pseudo-logarithmic scale, this shouldn't be much of a concern (a rough sketch of what such a scale might look like follows this list). I think we now have reasons to be sceptical of this response:
- There are plenty of people with quite powerful upvotes now - mine are currently worth 5 karma, very close to 6, and I've posted fewer than a dozen top level posts. That gives me 3-6 times the strong voting power of a forum beginner, which seems like way too much.
- While top level posts are the main concern, comments receive far fewer votes, so the effect of one or two strong votes stands out much more when you're skimming through them.
- The people with the highest karma naturally tend to be the most active users, who are likely already the most committed EAs. This means we already have a natural source of groupthink (assuming that the more committed you are to a social group, the more likely you are to have bought into any given belief it tends to hold). Groupthinky posts would therefore already tend to get more attention, and giving these active users greater voting power multiplies the effect.
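To make the shape of the concern concrete, here is a minimal sketch of one plausible pseudo-logarithmic weighting scheme. The function and thresholds are hypothetical and not the Forum's actual formula; the point is only that under such a scheme vote power still grows steadily with karma.

```python
import math

# Hypothetical pseudo-logarithmic strong-vote weighting (NOT the Forum's
# actual formula): weight grows roughly with the log of a user's karma.
def strong_vote_weight(karma: int) -> int:
    return 1 + math.floor(math.log10(max(karma, 1)))

for k in [5, 50, 500, 5_000, 50_000]:
    print(k, strong_vote_weight(k))
# 5 -> 1, 50 -> 2, 500 -> 3, 5000 -> 4, 50000 -> 5
```

Even on a scheme like this, a moderately active user ends up with several times the voting weight of a newcomer, which is roughly the ratio described above.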
Point 2) is confounded by the movement and user base having grown, so that a higher proportion of posts were made in later years, when there were more potential upvoters. Nonetheless, unless you believe that the number of posts has proliferated faster than the number of users (so that the extra karma is spread thinly enough to keep averages level), it seems self-evident that there is at least some degree of karmic inflation.
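As a toy illustration of the arithmetic (numbers entirely made up, assuming each active user casts a roughly fixed number of upvotes per year):

```python
# Toy model of karma inflation: total karma handed out scales with active
# users, so unless posts grow at least as fast, karma per post rises.
def avg_karma_per_post(active_users: int, posts: int, upvotes_per_user: int = 20) -> float:
    return active_users * upvotes_per_user / posts

print(avg_karma_per_post(active_users=500, posts=1_000))    # earlier year: 10.0
print(avg_karma_per_post(active_users=1_500, posts=2_000))  # later year: 15.0
```

If the user base triples while posts only double, the average post picks up 50% more karma even though nothing about post quality has changed.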
So my current stance is that, while the magnitude of both effects is difficult to gauge because of confounding factors, both effects are probably net negative in themselves, and therefore things our tools should not be amplifying - we might even want to actively counteract them. I don't have a specific fix in mind, though plenty were discussed in the comments section linked above. This is just a quick post to encourage discussion of alternatives… so over to you, commenters!
You are right. My mindset when writing this comment was bad: I remember thinking the reply seemed vague and general, and I reacted harshly. That was unnecessary and wrong.
I do not know the details of the orthogonality thesis and can't speak to this very specific claim (this is not at all a refutation of you; I am just literally clueless and can't comment on something I don't understand).
To be both truthful and agreeable: it's clear that EA beliefs about AI safety come from EAs following the opinions of a group of experts. This is evident from people's outright statements.
In reality, those experts are not a majority of people working in AI, and it's unclear exactly how EA would update or change its mind.
Furthermore, I see things like the quote below, which, without further context, could be a wild violation of "epistemic norms", or just of common sense.
For background, I believe this person is interviewing or speaking to researchers in AI, some of whom are world experts. Below is how they seem to represent their processes and mindset when communicating with these experts.
The person who wrote the above is concerned about image, PR, and things like initial conditions, which is entirely justified, reasonable, and prudent for any EA intervention or belief. They are also conscientious, intellectually modest, highly thoughtful, altruistic, and principled.
However, at the same time, at least from their writing above, their entire attitude seems to be based on conversion; yet their conversations are not with students, laypeople, or important public figures, but with the actual experts in AI.
So if you're speaking with the experts in AI and adopting this attitude that they are preconverts, and you are focused on working around their beliefs, one reading of this is that you are cutting yourself off from criticism and outside thought. On this ungenerous view, the fact that you have to be so careful is a further red flag; that's an issue in itself.
For context, in any intervention, getting the opinions of experts and updating on them is sort of the whole game (maybe once you're at "GiveWell levels" and working with dozens of experts it's different, but even then I'm not sure; EA has updated heavily on cultured meat based on almost a single expert).