Dicentra

Yeah, re the export controls, I was trying to say "I think CSET was generally anti-escalatory, but in contrast, the effect of their export controls work was less so" (though I used the word "ambiguous" because my impression was that some relevant people saw it as a point in favor of that work that it also mostly didn't directly advance AI progress in the US, i.e. it set China back without necessarily bringing the US forward towards AGI). To use your terminology, my impression is that some of those people were "trying to establish overwhelming dominance over China" but not by "investing heavily in AI".

I largely agree with this post, and think this is a big problem in general. There's also a lot of adverse selection that can't be called out because it's too petty and/or would require revealing private information. In a reasonable fraction of cases where I know the details, the loudest critic of a person or project is someone with a pretty substantial negative COI that isn't being disclosed, such as that the project fired or defunded them, or that the person used to date them and broke up with them. As with positive COIs, there's a problem where being closely involved with something both gives you more information you could use to form a valid criticism (or make a good hire or grant) that others might miss, and is correlated with factors that could bias your judgment.

But with hiring and grantmaking there are generally internal processes for flagging these, whereas when people are making random public criticisms, there generally isn't such a process.

This is inconsistent with my impressions and recollections. Most clearly, my sense is that CSET was (maybe still is, not sure) known for being very anti-escalatory towards China, and did substantial early research debunking hawkish views about AI progress in China, demonstrating it was less far along than was widely believed in DC. EAs were involved in this because they thought it was true and important, and because they thought the false fears then current in the greater natsec community were enhancing arms race risks (and this was when Jason was leading CSET, and OP supported its founding). Some of the same people were also supportive of export controls, which are more ambiguous in sign here.

Yeah, on second thought I think you're right that at least the argument "For a fixed valuation, potential is inversely correlated with probability of success" probably got a lot less attention than it should have, at least in the relevant conversations I remember.

I'm a bit confused about how the first part of this post connects to the final major section... I recall people saying many of the things you say you wish you had said... Do you think people were unaware that FTX, a recent startup in a tumultuous new industry, might fail? Or weren't thinking about it enough?

I agree strongly with your last paragraph, but I think most people I know who bounced from EA were probably just gold diggers, fad-followers, or sensitive to public opinion, and less willing to do what's hard when circumstances become less comfortable (though of course they won't come out and say it, and plausibly don't admit it to themselves). Of the rest, it seems like they were bothered by some combination of the fraud and how EAs responded to the collapse, and they updated towards the dangers of more utilitarian-style reasoning and the people it attracts.

Another meta thing about the visuals is that I don't like the +[number] feature, which makes it so you can't tell, at a glance, that the voting is becoming very tilted towards the right side.

I was also convinced by this and other things to write a letter, and am commenting now to keep the idea salient to people on the Forum.

The scientific proposition is "are there racial genetic differences related to intelligence," right? Not "is racism [morally] right"?

I find it odd how much these things seem to be conflated. If I learned that Jews have an average IQ 5 points lower than non-Jews, I would... still think the Holocaust and violence towards and harassment of Jews were abhorrent and horrible? I don't think I'd update much, or at all, towards thinking they were less horrible. Or if you could visually identify people whose mothers had drunk alcohol during pregnancy, and they were statistically a bit less intelligent (as I understand them to be), enslaving them, genociding them, or subjecting them to Jim Crow-style laws would seem approximately as bad as doing so to some group that's slightly more intelligent on average.

I agree with 

if you want to make a widget that's 5% better, you can specialize in widget making and then go home and believe in crystal healing and diversity and inclusion after work. 

and

if you want to make impactful changes to the world and you believe in crystal healing and so on, you will probably be drawn away from correct strategies because correct strategies for improving the world tend to require an accurate world model including being accurate about things that are controversial. 

and 

many people seriously believed that communism was good, and they believed that so much that they rejected evidence to the contrary. Entire continents have been ravaged as a result.

A crux seems to be that I think AI alignment research is a fairly narrow domain, more akin to bacteriology than to e.g. "finding EA cause X" or "thinking about whether newly invented systems of government will work well". This seems more true if I imagine my AI alignment researcher as someone trying to run experiments on sparse autoencoders, and less true if I imagine someone trying to have an end-to-end game plan for how to make transformative AI as good as possible for the lightcone, which is obviously a more interdisciplinary topic, more likely to require correct contrarianism in a variety of domains. But I think most AI alignment researchers are more in the former category, and will be increasingly so.

Two points: 

(1) I don't think "we should abolish the police and treat crime exclusively with unarmed social workers and better government benefits" or "all drugs should be legal and ideally available for free from the state" are the most popular political positions in the US, nor close to them, even for D-voters.

(2) Your original question was about supporting things (e.g. Lysenkoism) and publicly associating with things, not about what people "genuinely believe".

But yes, per my earlier point: if you told me, for example, "there are three new researchers with PhDs from the same prestigious university in [a field unrelated to any of the above positions, let's say virology]; the only difference I will let you know about them is that one (A) holds all of the above beliefs, one (B) holds some of the above beliefs, and one (C) holds none of the above beliefs; predict which one will improve the odds of their lab making a virology-related breakthrough the most," I would say the difference between them is small, i.e. these differences are only weakly correlated with the odds of their lab making a breakthrough and don't have much explanatory power. And, assuming you meant "support" rather than "genuinely believe", and cutting the two bullets I claim aren't even majority positions among, for example, D-voters: B>A>C, but barely.
