
tailcalled

58 karma

Comments (8)

And what does "racist" even mean here? I'm worried that there's a bait-and-switch going on, where this term is being used as an ambiguous combination of grave, derogatory accusation; and descriptive of a set of empirical beliefs about demographics and genetics. (Or to clarify: there's of course absolutely such a bait-and-switch going on, in the Guardian article and lots of broader discourse, my worry is about it also leaking into EA forum discussion via your post.)

I think the fact that you said "ambiguous combination of grave, derogatory accusation" is a problem for your argument, because it suggests that you don't have anything in mind that racism could mean other than a set of empirical beliefs about demographics and genetics. If this is the only actual thing that comes to mind for people, then presumably the grave/derogatory aspect is just a result of how they view those empirical beliefs about demographics and genetics.

I say this as one of the people who started HBD conversations at less.online (the main one being a conversation about this paper; I didn't do the whole fishing-for-compatibility thing that the OP mentioned). Or rather, I would be inclined to call them racist conversations, though if I were to propose an alternate meaning of "racist" under which I don't count as a racist, it would be something like: someone whose political theories treat working with other races as infeasible. White separatists would be a central example, in that they decide it's too infeasible to work with black people and therefore want their own society. Cops who aren't accountable to black communities would be another example of racism.

But this would exclude some things that I think people would typically agree are racism, e.g. cops who do racial profiling but don't conspire to protect each other when one of them has abused a black person seeking accountability. So I wouldn't really push this definition too hard.

In my opinion, a more productive line of inquiry is that a lot of HBD claims are junk/bullshit. From a progressive perspective, that's problematic because tolerating racism enables a giant edifice of racist lies; and from the perspective of someone who is interested in understanding race, it's problematic because HBD will leave you with lots of bad flaws in your understanding. Progressives would probably be inclined to say that this means HBD should be purged from these places, but that's hypocritical, because at least as many progressive claims about race are junk/bullshit. My view of the productive approach would be to sort the junk from the gems.

Doesn't the $67 billion number cited for capabilities include a substantial amount of work being put into reliability, security, censorship, monitoring, data protection, interpretability, oversight, clean dataset development, and refinement of alignment methods? At least anecdotally, the AI work I see at my non-alignment-related job mainly falls under these sorts of things.

Can you give 5 examples of cases where rationalists/EAs should defer more to experts?

It's interesting: I had heard some vague criticism from social justice communities that EA is bad, but at first I dismissed it. Your review made me look up the book and compare what the book says to how EAs (that is, you) interpret it. And I've got to say, a lot of the social justice criticism of EA really looks spot-on as a critique of your review. I'd encourage readers to do some epistemic spot checks of this review; at least when I did, it didn't seem to fare super well. On the other hand, I will probably read the full book when I find the time.

Since A’s and B’s guesses are identically accurate, it seems most sensible to take the average in order to be closest to the truth. And even if you were A or B, if you want to be closest to the truth, you should do the same.

Why not add them together, and declare yourself 90% sure that it is an oak tree?

Or rather, since simply adding probabilities together may take you outside the [0, 1] range, why not convert each estimate to log odds, subtract off the prior log odds to obtain each person's evidence, add the pieces of evidence together, add the prior back in, and then convert the result back to a probability?
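To make the arithmetic concrete, here is a minimal sketch of that log-odds pooling rule in Python. The specific numbers (a 25% prior and two independent 45% estimates) are hypothetical, chosen only to illustrate the mechanics, not taken from the original post.

```python
import math

def logit(p):
    """Convert a probability to log odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log odds back to a probability."""
    return 1 / (1 + math.exp(-x))

def pool_estimates(prior, estimates):
    """Pool independent estimates by summing their evidence in log-odds space.

    Each person's evidence is their log odds minus the prior log odds;
    the pooled belief is the prior log odds plus all of that evidence.
    """
    prior_logodds = logit(prior)
    total_evidence = sum(logit(p) - prior_logodds for p in estimates)
    return sigmoid(prior_logodds + total_evidence)

# Hypothetical numbers: a 25% prior that the tree is an oak,
# with A and B independently arriving at 45% each.
print(pool_estimates(0.25, [0.45, 0.45]))  # ~0.67: agreement pushes confidence past either estimate
print((0.45 + 0.45) / 2)                   # 0.45: simple averaging just returns the shared estimate
```

Under this rule, two people who independently update from the prior toward the same conclusion end up more confident than either of them alone, whereas averaging (or naively adding the probabilities) does not take into account where the prior was.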

Hm, my understanding is that there is no traditional institution that will issue a "yep, this person is good" document that works across contexts, including e.g. for people who work in crypto, so any approval process would require a lot of personal judgement?

That said, I don't disagree with the notion of using preexisting approval systems like criminal records; my suggestion is more about making sure that one does in fact use them in the correct proportions, and in particular credibly committing to doing so in the future.

I should maybe have been more explicit in stating the actual policy proposal:

I don't think paying back necessarily needs to be done on the level of an individual project/grant. Insofar as the EA community is, well, a community, it might be viable to take responsibility on the level of the community.

For instance, in the discussion I linked to on Twitter, the suggestion was that EAs would set up a fund that they could donate to for the victims of FTX.

This would presumably still create plenty of community-wide incentives, as well as incentives among the leaders of EA, because nobody wants their community to waste a lot of resources due to having worked with bad actors. But it would also be much less burdensome to individual grant recipients.