An undergrad at University of Maryland, College Park. Majoring in math.
After finishing The Sequences at the end of 9th grade, I started following the EA community, changing my career plans to AI alignment. If anyone would like to work with me on this, PM me!
I’m currently starting the EA group for the University of Maryland, College Park.
Recommendation: A collection of paradoxes dealing with Utilitarianism. This seems to me to be what you wrote, and would have had me come to the post with more of a “ooo! Fun philosophy discussion” attitude rather than “well, that’s a very strong claim… oh look at that, all the so-called inconsistencies and irrationalities either deal with weird infinite-ethics stuff or are things I can’t understand. Time to be annoyed about how poorly the headline is argued for.” The latter experience is not useful or fun; the former is nice depending on the day & company.
My understanding of history says that letting militaries have such power, or initiating a violent overthrow by any other means to launch an internal rebellion, usually leads to bad results. Examples include the French, Russian, and English revolutions. Counterexamples possibly include the American Revolution, though notably I struggle to point to anything concrete that would have been different about the world had America had a peaceful break-off like Canada later did.
Do you know of counterexamples, maybe relating to poor developing nations which became rich, developed nations after their rebellion?
I think Habryka has mentioned that Lightcone could withstand a defamation suit, so there’s not a high chance of financially ruining him. I am tentatively in agreement otherwise though.
This seems false. Dramatic increases in life-extension technology have been happening ever since the invention of modern medicine, so it’s strange to say the field is so speculative it shouldn’t even be considered.
I agree with your conclusion but disagree with your reasoning. I think it’s perfectly fine, and should be encouraged, to make advances in conceptual clarification which confuse people. Clarifying concepts can often result in people being confused about stuff they weren’t previously, and this often indicates progress.
My response would be a worse version of Marius’s response, so just read what he said here for my thoughts on hits-based approaches to research.
I disagree, and wish you’d actually explain your position here instead of being vague & menacing. As I’ve said in my previous comment
I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with reporters about this type of stuff! So mostly I think they should continue doing what they're doing.
This is because they usually present the strongest case for x-risk when talking to reporters, somehow get that case into the article, and then have the reporter speak positively about the cause.
You’ve also said that some people think Conjecture may be decreasing goodwill with policy-makers. This announcement seems like a lot of evidence against that. Though there is debate on whether it’s good, the policy-makers are certainly paying lip-service to AI-alignment-type concerns. I also want to know why I would trust such people to report on policy-makers’ opinions. Are these some Discord randos, or parliament aides, or political researchers looking at surveys among parliament leaders, or DeepMind policy people, or what?
In general I reject the idea that people shouldn’t talk to the government if they’re qualified (in a general sense) and have policy goals which would be good to implement. If policy is to work, it’s because someone did something. So it’s a good thing that Conjecture is doing something.
I don’t know anything about Anthropic’s corporate governance structure. But I also don’t know much about Conjecture. I know at one point I tried to find Anthropic’s board of directors, and found nothing. But that was just a bunch of googling.
Conjecture’s infohazard policy not having legal force is bad, but not as bad as not having an infohazard policy in the first place. By that standard, it seems like OpenAI and Anthropic have corporate governance structures that are just as bad in your book. But you seem to think they have better structures. So I doubt whether having an infohazard policy with legal force is really a crux for you here.
I’m very confused by your statements here, and would like you to explain why you think Conjecture is uniquely bad (so bad that they shouldn’t get any funding, and that we should consider shunning them from the community) instead of just making the claim. This is where my crux lies.
I’m also curious about OpenAI and Anthropic’s corporate governance structures, but I don’t think it’s a crux. If you showed me OpenAI had a spectacular governance structure, I think I’d be more like “ah, well, in that case corporate governance structures don’t seem all that important, and so it’s a positive that Conjecture isn’t wasting money on this shown-to-be-useless thing”.
(cross-posted to LessWrong)
I agree with Conjecture's reply that this reads more like a hit piece than an even-handed evaluation.
I don't think your recommendations follow from your observations, and such strong claims surely don't follow from the actual evidence you provide. I feel like your criticisms can be summarized as follows:
1. Conjecture was publishing unfinished research directions for a while.
2. Conjecture does not publicly share details of their current CoEm research direction, and that research direction seems hard.
3. Conjecture told the government they were AI safety experts.
4. Some people (who?) say Conjecture's governance outreach may be net-negative and upsetting to politicians.
5. Conjecture's CEO Connor used to work on capabilities.
6. One time during college Connor said that he replicated GPT-2, then found out he had a bug in his code.
7. Connor has at times said that open-source models were good for alignment, then changed his mind.
8. Conjecture's infohazard policy can be overturned by Connor or their owners.
9. They're trying to scale when it is common wisdom for startups to try to stay small.
10. It is unclear how they will balance profit and altruistic motives.
11. Sometimes you talk with people (who?) and they say they've had bad interactions with Conjecture staff or leadership when trying to tell them what they're doing wrong.
12. Conjecture seems like they don't talk with ML people.
I'm actually curious about why they're doing 9, and would like further discussion of 10 and 8. But I don't think any of the other points matter, at least to the depth you've covered them here, and I don't know why you're spending so much time on stuff that doesn't matter or that you can't support. This could have been so much better if you had taken the research time spent on everything that wasn't 8, 9, or 10, used it to do analyses of 8, 9, and 10, and then actually had a conversation with Conjecture about your disagreements with them.
I especially don't think your arguments support your suggestions that:
1. Don't work at Conjecture.
2. Conjecture should be more cautious when talking to the media, because Connor seems unilateralist.
3. Conjecture should not receive more funding until they reach levels of organizational competence similar to OpenAI's or Anthropic's.
4. Rethink whether or not you want to support Conjecture's work non-monetarily. For example, maybe think about not inviting them to table at EAG career fairs, not inviting Conjecture employees to events or workspaces, and not taking money from them if doing field-building.
(1) seems like a pretty strong claim, which is left unsupported. I know of many people who would be excited to work at Conjecture, and I don't think your points support the claim that they would be doing net-negative research given that they do alignment at Conjecture.
For (2), I don't know why you're saying Connor is unilateralist. Are you saying this because he used to work on capabilities?
(3) is just absurd! OpenAI will perhaps be the most destructive organization to date. I do not think your above arguments make the case that Conjecture is less organizationally responsible than OpenAI. Even having an infohazard document puts them leagues above both OpenAI and Anthropic in my book. And add on to that that their primary way of getting funded isn't building extremely large models... In what way do Anthropic or OpenAI have better corporate governance structures than Conjecture?
(4) is just... what? OK, I've thought about it, and come to the conclusion that this makes no sense given your previous arguments. Maybe there's a case to be made here: if they are less organizationally competent than OpenAI, then yeah, you probably don't want to support their work. That seems pretty unlikely to me though! And you definitely don't provide anything close to the level of analysis needed to elevate such hypotheses.
Edit: I will add to my note on (2): In most news articles in which I see Connor or Conjecture mentioned, I feel glad he talked to the relevant reporter, and think he/Conjecture made that article better. It is quite an achievement in my book to have sane conversations with reporters about this type of stuff! So mostly I think they should continue doing what they're doing.
I'm not myself an expert on PR (I'm skeptical that anyone is), so maybe my impressions of the articles are naive and backwards in some way. If you think this is important, it would likely be good to mention somewhere why you think their media outreach is net-negative, ideally pointing to particular things you think they did wrong rather than making vague & menacing criticisms of unilateralism.
It seems altruistically very bad to invest in companies because you expect them to profit if they perform an action with a significant chance of ending the world. I am uncertain why this is on the EA Forum.
I do dislike this feature of EA, but I don't think the solution is to transition away from a one-grant-at-a-time model. Probably better would be to have exit coaches to help EAs find a new career outside EA if they've built up a bunch of skills because funding sources, or other generally EA-endorsed sources, told them they would be given money if they used such skills for the benefit of the universe.
What talents do you think aren't applicable outside the EAsphere?
(Edit: I do also note that I believe 80k should be taken a lot less seriously than they present themselves, and than most EAs take them. Their incorrect claims that EA is talent-constrained are one of many reasons I distrust them.)