I think a sufficiently intelligent mind can generate accurate beliefs from evidence generally, not just from 'experiments', and not just from its own experiments. I imagine AIs will be suggesting experiments too (if they're not already).

It's still plausible that being unable to run its own experiments will greatly hamper an AI's scientific agenda, but it's hard to know exactly how much for intelligences likely to be much more intelligent than ourselves.

It is a shame – and I would guess a very deliberate one.

I've been a user on LessWrong for a long time, and these events have resurfaced several times that I can remember, always instigated by something like this article. Many people, on discovering the evidence and allegations, jump to the conclusion that 'the community' needs to do some "soul searching" about it all.

And this recurring dynamic is extra frustrating and heated because the 'community members', including people who are mostly just users of the site, aren't even the same group of people each time. Older users try to point out the history, while new users sort themselves into warring camps, e.g. 'this community/site is awful/terrible/toxic/"rape culture"' versus 'WTF, I had nothing to do with this!?'.

Having observed several different kinds of 'communities' try to handle this stuff, rationality/LessWrong and EA groups are – far and away – better at actually addressing it effectively than anyone else.

People should almost certainly remain vigilant against bad behavior – as I'm sure they are – but they should also be proud of doing as good a job as they have, especially given how hard a job it is.

Yes, this kind of 'idle conjecture' seems epistemically risky. It's too easy to invent reasons that point in any particular direction.

I think your comment was a useful comparison anyway :)

I've long wondered about this. Thanks!