Project lead of LessWrong 2.0; I also often help the EA Forum with various site issues. If something is broken on the site, there's a good chance it's my fault (Sorry!).
But... your post was quite inaccurate and strawmanned people extensively?
Eliezer and others compellingly demonstrated this in the comments. I don't think you should get super downvoted, but your post includes a lot of straightforwardly false sentences, so I think the reaction makes sense.
(As someone who filled out the survey, I thought the framing of the questions was pretty off, and I felt like that jeopardized a lot of the value of the questions. I am not sure how much better you could do; a survey like this is inherently hard. But I at least don't feel like the survey results would help someone understand what I think much better.)
We might mostly be arguing about semantics. In a similar discussion a few days ago I was making the literal analogy of "if you were worried about EA having bad effects on the world via the same kind of mechanism as the rise of communism, a large fraction of the things under the AI section should go into the 'concern' column, not the 'success' column". Your analogy with Marx illustrates that point.
I do disagree with your last sentence. The thing that people are endorsing is very much both a social movement and a set of object-level claims. I think it differs between people, but there is a lot of endorsing of AI Safety as a social movement. Social proof is usually the primary thing invoked these days to convince people.
I am mostly thinking about the AI section, and I disagree with your categorization there:
NO: Developed RLHF, a technique for controlling AI output widely considered the key breakthrough behind ChatGPT
Yep, agree with a NO here
NO: …and other major AI safety advances, including RLAIF and the foundations of AI interpretability10.
Yep, agree with a NO here
NO: Founded the field of AI safety, and incubated it from nothing up to the point where Geoffrey Hinton, Yoshua Bengio, Demis Hassabis, Sam Altman, Bill Gates, and hundreds of others have endorsed it and urged policymakers to take it seriously.
I think this should be a YES. This is clearly about ending up in an influential position in a field.
NO: Helped convince OpenAI to dedicate 20% of company resources to a team working on aligning future superintelligences.
This should also be a YES. This is really quite centrally about EAs ending up with more power and influence.
NO: Gotten major AI companies including OpenAI to work with ARC Evals and evaluate their models for dangerous behavior before releasing them.
This should also be a YES. Working with AI companies is about power and influence (which totally might be used for good, but it's not an intellectual achievement).
YES: Got two seats on the board of OpenAI, held majority control of OpenAI for one wild weekend, and still apparently might have some seats on the board of OpenAI, somehow?12
Agree
YES: Helped found, and continue to have majority control of, competing AI startup Anthropic, a $30 billion company widely considered the only group with technology comparable to OpenAI’s.13
Agree
YES: Become so influential in AI-related legislation that Politico accuses effective altruists of having “[taken] over Washington” and “largely dominating the UK’s efforts to regulate advanced AI”.
Agree
NO: Helped (probably, I have no secret knowledge) the Biden administration pass what they called "the strongest set of actions any government in the world has ever taken on AI safety, security, and trust.”
What we have seen here so far is institutes being founded and funding being promised, with some extremely preliminary legislation that might help. Most of this achievement is about ending up with people in positions of power, so it should be a YES.
NO: Helped the British government create its Frontier AI Taskforce.
This seems like a clear YES? The task force is very centrally about putting EAs and people concerned about safety into positions of power. No legislation has been passed.
NO: Won the PR war: a recent poll shows that 70% of US voters believe that mitigating extinction risk from AI should be a “global priority”.
Agree
Overall, for AI in particular, I count 8/11. I think some of these are ambiguous or are clearly relevant for more than just being in power, but this list of achievements is really quite heavily weighted towards measuring the power that EA and AI Safety have achieved as a social movement, and not their achievements towards making AI actually safer.
I am reasonably confident Helen replaced Holden as a board member, so I don't think your 2021-12-31 list is accurate. Maybe there was a very short period where they were both on the board, but I heard the intention was for Helen to replace Holden.
I think the self-correction mechanism was not very strong. If Tara (who was also strongly supportive of the Leverage faction, which is why she placed Larissa in charge) had stayed, I think it would have been the long-term equilibrium of the organization. The primary reason the equilibrium collapsed is that Tara left to found Alameda.
Leverage and Nonlinear are very peripheral to EA and they mostly (if allegations are true) harmed EAs rather than people outside the movement.
I will again remind people that Leverage at some point had approximately succeeded at a corporate takeover of CEA, placing both the CEO and their second-in-command within the organization. They really were not very peripheral to EA, they were just covert about it.
I think it's a bit messy. Each individual one of these really doesn't have large consequences, but it matters a lot insofar as Scott's list of good things about EA is in substantial part a list of "EAs successfully ending up in positions of power", and stuff like Leverage and Nonlinear is evidence about what EAs might do with that power.
Top was mostly showing me tweets from people that I follow, so my sense is it was filtered in a personalized way. I am not fully sure how it works, but it didn't seem like the right type of filter.
Sure. Here are some quotes from the original version of your post:
This paragraph clearly shows you misunderstood Eliezer. Different proteins are held together almost exclusively by non-covalent forces.
This is also evidently false, since dozens of people I know have engaged with Drexler's and Eliezer's thoughts in this space, many of whom have a pretty deep understanding of chemistry and would use a similar (or the same) phrasing. You seem to be invoking an expert consensus that doesn't exist. Indeed, multiple people with PhD-level chemistry backgrounds have left comments saying they understood Eliezer's point here.
This is also false. The point makes sense, many people with chemistry or biology background get it, as shown above.
Look, I appreciate the post about the errors in the quantum physics sequence, but you are again vastly overstating the expert consensus here. I have talked with literally 10+ physics PhDs about the quantum physics sequence. Of course there are selection effects, but most people liked it and thought it was great. Yes, it was actually important to add a renormalization term, as you said in your critique, but really none of the points brought up in the sequence depended on it at all.
Like look, when people read your post without actually reading Eliezer's reply, they get the very strong sense that you are claiming Eliezer made an error at the level of high-school biology: that somehow he got so confused about chemistry that he didn't understand that a single protein is, of course, held together internally by covalent bonds.
But this is really evidently false and kind of absurd. As you can see in a lot of Eliezer's writing, and also in his comment-level response, Eliezer was not at any point confused about whether proteins are internally held together by covalent bonds. Indeed, to me and Eliezer it seemed so obvious that proteins are internally held together by covalent bonds that I did not consider the possibility that people could interpret this as a claim about the atoms in proteins being held together by Van der Waals forces (how do you even understand what Van der Waals forces are, but not understand that proteins are internally covalently bonded?). But that misinterpretation really seems to be what your post was about.
Now, let me look at the most recent version of your post:
Well, it still includes:
This still seems wrong, though you did add some clarifications around it that make it more reasonable.
You did add a whole new section which is quite dense in wrong claims:
Look, this is doubling down on a misinterpretation which at this point you really should have avoided. We are talking about what you call the tertiary structure here. At the level of tertiary structure, and the bonds between proteins, biology does almost solely stick to ionic bonds and Van der Waals forces.
It is the case that sometimes the tertiary structures also use covalent bonds, as in the case of lignin, and I think that's a valid point. It's not one you made in your post at all, however; it's just one that Eliezer acknowledges independently. The most recent version of your post now does have a fraction of a sentence, in a quote from your chemistry fact-checker, saying that tertiary protein structures sometimes do use covalent bonds, and I think that's an actual real point that responds to what Eliezer is saying. A post I wouldn't downvote would be one that had that as its main focus, since there is a valid critique to be made that biology is in some circumstances capable of using covalent bonds for tertiary structures (as Eliezer acknowledges), but that's not the critique you made.
Look man, I think you really know by now what Eliezer means by this. Eliezer is talking about alternatives to biology where most of the tertiary structure leverages covalent bonds.
This is also doubling down on the same misunderstanding. The machinery and tertiary structure of bacteria do not use covalent bonds very much. This is quite different from most current nanomachine designs, which the relevant books hypothesize would be substantially more robust than present biological machinery due to leveraging mostly covalent bonds.
I don't understand what you are talking about here. Basically everything in biology is either made out of proteins or manufactured by proteins. If you can make proteins, you can make basically anything; proteins are the way most things get done in a cell. The sentence above reads as confused as saying "a CPU is not a general-purpose calculator. It does exactly one thing, and that is to read instructions and return the results". Yes, ribosomes read instructions and link together amino acids to form proteins, and that is how biological systems generally assemble things.
This one is confused on multiple levels. The meaning of "X is held together by something" of course depends on what level of organization of X you are talking about.
Both of the following sentences are correct:
Those are fine sentences to say. Yes, they can plausibly be misunderstood, and that's a bit sad, but it doesn't mean you were wrong.
This is a random nitpick, but animal bodies are indeed internally held together by flesh rather than skeletons. The skeleton itself is not connected; bones only provide local structural support against bending and breaking. If I removed your flesh, your bones would mostly disconnect and fall into a heap on the ground. Your bones are generally not under much strain at any given moment; instead you are more like a tensegrity structure, where the vast majority of your structural integrity comes from tension, which comes from your tendons and muscles.
Even with the correction this is still inaccurate. It is correct that non-covalent bonds are the dominant forces for the structure of proteins. Yes, there are some exceptions, like lignin, and that matters; as I said, I would have upvoted a post talking about that. Yes, it depends on the structure. But if you aggregate across structures it's true, and it seems reasonable to describe them as the dominant force.