All of Chris Leong's Comments + Replies

Sure, but these orgs found their own niche.

HIP and Successif focus more on mid-career professionals.

Probably Good focuses on a broader set of cause areas, taking on some of 80k's old responsibilities after 80k shifted its focus towards transformative AI.

2
gergo
Yes, but there is still overlap in their work! It makes sense for orgs to find their niche, but my stronger claim is that even if they didn't, it would still be good to have double the number of fieldbuilding orgs, assuming they are doing good work.[1] (I think people who think this is wrong have the intuition that fieldbuilding is a zero-sum game, while in reality we have a large amount of untapped talent, and orgs just don't know how to reach them.)

  1. ^

    This is dependent on funding availability, though - the background assumption here is that funders (OP) can't give away money fast enough for some kinds of fieldbuilding work (such as MATS). If MATS were struggling for money, I would rather have them get the marginal dollar than another org doing something very similar but at an earlier stage. (But you could argue against this; one might want to invest in something speculative if they think it has the potential to outperform MATS in the long run.)

Oh, I think AI safety is very important; short-term AI safety too, though not quite 2027 😂.

A knock-off MATS could produce a good amount of value; I just want the EA Hotel to be even more ambitious.

Chris Leong
50% disagree

Should our EA residential program prioritize structured programming or open-ended residencies?


There's more information value in exploring structured programming.

That said, I'd be wary of duplicating existing programs, i.e. if the AI Safety Fellowship became a knock-off MATS.

8
gergo
This theme comes up a lot in AI Safety, and I really don't think the reasoning behind it is sound. (See my post on a related topic.) Imagine you could snap your fingers and create another organisation like MATS. Wouldn't you want that, conditional on the org doing things just as well (or eventually becoming one that does)? MATS is well-funded (having received a grant of over $30M recently, I believe), so it's not as if they can magically absorb the money that could go to startup fieldbuilding projects. (Not to mention that smaller projects tend to be more cost-effective as long as they are good.) Imagine we lived in a world where 80k was still the only organisation doing career support. Now we have HIP, Successif, Probably Good, etc. These orgs are a blessing for the field, and if we could have twice as many of them, that would be great.
9
Attila Ujvari
I am keenly wary of duplicating efforts when there is no demand. However, I suspect (pending actual market research and confirmation) that there are still uncovered use cases and needs that we could — and should — fill. Duplication is not the goal. Finding a niche that is underserved and provides value is.
8
peterbarnett
MATS has a very high bar these days, so I'm pretty happy about there being "knock-off MATS" programs that allow people who missed the bar for MATS to demonstrate they can do valuable work.

What the School of Moral Ambition has achieved is impressive, but it's unclear whether EA should aim for mainstream appeal insofar as SoMA could potentially fill that niche.

"~70% male and ~75% white" — I'm increasingly feel that the way to be cool is to not be so self-conscious about this kind of stuff. Would it be great to have more women on our team? Of course! And for EA to be more global? Again, that'd be great! But talking about your demographics like it's a failure will never be cool. Instead EA should just back itself. Are our demographics ideal? No... (read more)

An analogy: let's suppose you're trying to stop a tank. You can't just place a line of 6 kids in front of it and call it "defense in depth".

Also, it would be somewhat weird to call it "defense in depth" if most of the protection came from a few layers.

Feel free to reply to this comment with any suggestions for other graphs I should consider including.

Create nice zones for spontaneous conversations (not sure how to do this well)


I've tried pushing for this without much success, unfortunately.

It really is a lot more effort to have spontaneous conversations when almost every pair of people is in a one-on-one and almost everyone on their own is waiting for one.

I've seen attempts to declare a space an area that's not for one-on-ones, but people have one-on-ones there anyway. Then again, organisers normally only put up one or two small signs.

Honestly, the only way to stop people having one-on-ones in the area for s... (read more)

5
calebp
Yeah, I also think hanging out in a no-1:1s area is weirdly low status/unexciting. I'd be a bit more excited about cause- or interest-specific areas like "talk about ambitious project ideas".

For most fellowships you're applying to a mentor rather than pursuing your own project (ERA is an exception). And on the most common fellowships, which last a few months, it's pretty much go, go, go, with little time to explore.

Thanks for the detailed comments.

 Maybe the only way to really push for x-safety is with If Anyone Builds It style "you too should believe in and seek to stop the impending singularity" outreach. That just feels like such a tough sell even if people would believe in the x-safety conditional on believing in the singularity. Agh. I'm conflicted here. No idea.

I wish I had more strategic clarity here.

I believe there was a recent UN general assembly where world leaders were literally asking around for, like, ideas for AI red lines.

I would be surprised if a... (read more)

I agree that EA might be somewhat “intellectually adrift”, and yes the forum could be more vibrant, but I don’t think these are the only metrics for EA success or progress - and maybe not even the most important.

 

The EA movement attracted a bunch of talent by being intellectually vibrant. If I thought that the EA movement was no longer intellectually vibrant, but it was attracting a different kind of talent (such as the doers you mention) instead, this would be less of a concern, but I don't think that's the case.

(To be clear, I'm talking about the EA ... (read more)

Very excited to read this post. I strongly agree with both the concrete direction and with the importance of making EA more intellectually vibrant.

Then again, I'm rather biased since I made a similar argument a few years back.

Here are the main differences between what I was suggesting back then and what Will is suggesting here:

  • I suggested that it might make sense for virtual programs to create a new course rather than just changing the intro fellowship content. My current intuition is that splitting the intro fellowship would likely be the best option for now
... (read more)

Really excited to see this released! Seems very helpful for folks trying to explore the intersection of these spaces.

Honestly, I don't care enough to post any further replies. I've spent too much time on this whole Epoch thing already (not just through this post, but through other comments). I've been reflecting recently on how I spend my time and I've realised that I often make poor decisions here. I've shared my opinion; if your opinion is different, that's perfectly fine, but I'm out.

It can be a mistake to have trusted someone without there necessarily having been misbehavior. I'm not saying there wasn't misbehavior, that's just not my focus here.

Trust has never been just about whether someone technically lied.

8
Ben_West🔸
Sure, but I just genuinely don't know what you are complaining about here. I can make a few guesses but it seems better to just ask what you mean.

I find that surprising.

The latest iteration has 80+ projects

But why do you say that?

I can't see an option to delete.

I took a look at the post announcing Epoch.

It was interesting to note this comment by Ofer:
 

Jaime Sevilla replied:

Additionally, looking at the post itself:

It's up to the reader to form their own judgement, but it certainly seems to me that the AI Safety community was too ready to trust Epoch.

Weak-downvoted; I think it's fair game to say an org acted in an untrustworthy way, but I think it's pretty essential to actually sketch the argument rather than screenshotting their claims and not specifying what they've done that contradicts the claims. It seems bad to leave the reader in a position of being like, "I don't know what the author means, but I guess Epoch must have done something flagrantly contradictory to these goals and I shouldn't trust them," rather than elucidating the evidence so the reader can actually "form their own judgment." Ben_... (read more)

Why do you think we were too ready to trust them? Are you implying that they later violated what Jaime says here?

People seem to think that there is an 'EA Orthodoxy' on this stuff


Well, there kind of is. Maybe you think it's incorrect, but that's a separate matter.

I am far more concerned of "geniuses in a data center" which Dario/Sam seem to be pushing for, than I am of more economically useful AI


I'd be curious to hear your views now that we know they're focusing on improving RL environments, which seems useful for the former and not just the latter.

In part, that's a lesson for funders not just to look at the content of the proposal in front of them, but also at what the org as a whole is doing.

 

Agreed.

The overall summary is pretty good; however:

"Concern for broader implications" - the broader implications of having the discussion.

"Requires considering possible social or political consequences" - of the speech act.

"Look like bias or deflection" - would likely be clearer to say political bias.

"Strict decoupling can enable harmful speech" - not just 'harmful' in the sense of someone's feelings being hurt, but in the worst case, it means standing aside as people begin co-ordinating on genocide.

"What counts as “charged”" - not just charged, but excessively charged.

in many employers’ eyes they would not look as value aligned as someone who did MATS, something which is part of a researcher’s career path anyway.


Yeah, I also found this sentence somewhat surprising. I likely care about value alignment more than you do, but I expect that the main way for people to signal this is by participating in multiple activities over time rather than by engaging in any particular program. I do agree with the OP's larger point though: that it is easier for researchers to demonstrate value alignment given that there are more program... (read more)

Great post! I agree with your core point about a shortfall in non-researcher pipelines creating unnecessary barriers and I really appreciated how well you've articulated these issues. Excited to see any future work!

But I spent the next 6 months floundering; I thought and thought about cause prioritisation, I read lots of 80k and I applied to fellowship after fellowship without success

Were you mostly applying to the highly competitive paid fellowships? I don't exactly know what Nontrivial entails (though my impression was that it covered a few differen... (read more)

1
nickaraph
I believe SPAR is no longer less competitive 

You might not care about politics, but politics cares about you[1].

Honestly, the social justice wave is what made this quite clear to me.

There are situations when disengagement with politics is viable and situations when it is not.

  1. ^

    At least some of the time.

Hiring, but then not being able to share control of the org with senior operators and/or adapt / make compromises to enable the org to succeed


Whether or not this is the right decision is highly circumstantial.

Honestly, I'd typically prefer an organisation to fail than compromise its mission.

2
Vaidehi Agarwalla 🔸
The 'enable the org to succeed' implies 'at its stated goals or mission'. Like, by the org's or its leaders' own lights.

Together, we’ll evolve CEEALAR from a residency into a fully equipped impact incubator, nurturing not just EA and AI Safety research, but also the whole person behind every project, at every stage of their journey.


Excited to see what happens here!

So his whole worldview is rooted in a fear of scapegoating and yet he is fine with scapegoating EA?

Then again, it makes sense if he unconsciously believes that there has to be a scapegoat and wants to make sure that it ends up being a group that isn't him?

2
Ben_West🔸
I understand "scapegoating" to be a specific sociological process, not just a generic term for blaming some one. I'm not sure if Thiel wants us to be scapegoated, but if it does happen it would look more like "mob violence" than "guy mentions offhand in a podcast that you're the antichrist."[1] 1. ^ Does Thiel have some 12D chess plan to inspire mob violence against EAs? Seems unlikely, but who knows.

Does Thiel support Trump?  I know he did... then he didn't... unsure where he is now that Trump is back in power again.

3
barkbellowroar
Yes, Thiel has returned to supporting Trump more fully since the election. Many presume it's because of lucrative contracts with Palantir conducting surveillance and data collection for US government. Best summary I could find on short notice. 
2
Charlie_Guthmann
Depends exactly what you mean by support, but JD Vance is deeply connected to Thiel.

I don't have a strong understanding of IASEAI, but it's less clear to me that an event like this needs to have policy-makers attending. It seems reasonable for there to be different events with different focuses, rather than each event having to cater to everyone.

2
gergo
Fair point!

I'm skeptical of your analysis of scenario 3, as I generally buy the orthogonality thesis, leading me to believe that it's possible to be both wise and evil.

At the same time, emergent misalignment seems to suggest that it might be reasonable to expect that an AI that has been nudged to become wise will also be nudged somewhat towards being moral.

3
Jordan Arel
Interesting! I think I didn’t fully distinguish between two possibilities: (1) AW that just has an understanding of wisdom, and (2) AW whose values are aligned to wisdom, or at least aligned to pursuing and acting on wisdom. I think both types of AW are worth pursuing, but the second may be even more valuable, and I think this is the type I had in mind, at least in scenario 3.

On the contrary, I think we should be very careful about imposing morality taxes. I'm not going to say we should never impose them, but not even attempting to think through the unintended consequences is the height of arrogance. I see this as bad both from the perspective of leading to bad policy and from the perspective of class relations.

A world with lots of competing courses would probably be better, but sadly that's hard to achieve. 

A rotating yearly theme would be a good second best option.

This post does a good job of highlighting the harms from alcohol.

However, I'm strongly suspicious of the implicit framing:

If you talk either about crime policy or drug policy, that’s got to be the number 1 recommendation — just because it’s so easy. It doesn’t cost you anything. You don’t have to kick in anybody’s door. You just have to change a number in the tax code and crime goes down.

This is a quote rather than the author's own words, but I think the article does the same thing.

Namely, that it takes a very naive view of the subject by focusing on the immediate ... (read more)

1
artilugio
One can also imagine monetary costs being inflicted on families whose drunk adults now have less money left over from their binge for picking up takeout or groceries.
3
Jason
That's a good point in general, but I'm less worried about this in the context of an increased tax. The potential magnitude of the loss of benefit is limited by the ability to pay the tax and continue at current levels of consumption; in that case, the loss to the drinker is limited to the amount of the tax increase. So I don't think we have to worry as much about calculating the "more diffuse and harder to articulate benefits" as precisely as we would under a ban. Moreover, you could pair an alcohol tax hike with a decrease in some other consumption tax in a way that makes the whole package ~cost-neutral for moderate drinkers.
1
JoA🔸
I'm curious to understand what you mean by this. I don't know if the implication is meant to be self-evident, but I have trouble getting it.

I expect it'd be possible to build some good ops experience in blue collar roles.

I would like to suggest that folk not downvote this post below zero. I'm generally in favour of allowing people to defend themselves, unless their response is clearly in bad faith. I'm sure many folk strongly disagree with the OP's desired social norms, but this is different from bad faith.

Additionally, I suspect most of us have very little insight into how community health operates and this post provides some much needed visibility. Regardless of whether you think their response was just right, too harsh or too lenient, this post opens up a rare opportuni... (read more)

8
SpeedyOtter
Hi, this meant a lot to me. This is how I would like to judge posts I disagree with, so I appreciate you advocating for mine in such terms.
3
Kambar
Hey Chris! The application form will be closing in less than an hour from this reply; then, after (this) week of preparation, the course will launch and run for ~9 weeks.

This is a shame to hear. Do you have any thoughts on why this wasn't appealing to funders despite the positive feedback/survey results?

Unfortunately, we got very little feedback from funders, and frequently none at all. It could be the risks of reaching out to this more senior audience (i.e. the risk of making them less interested in AI safety if the outreach is poor), but this is just a guess. I expect there are a number of factors at play. 

Thinking this through: what's novel is not so much the idea that the path AI takes affects non-human welfare, but that it's worth developing this as its own subfield.

And the argument for this is much stronger in the current context: the arguments for rapid AI progress, AI companies not being responsible by default and AI not being aligned by default are much more legible these days.

And that makes it much easier to build energy around this as there seem to be folks in the EA animal welfare crowd who were skeptical about AI/AI risk before, but now see that t... (read more)

I agree with this problem, that there are many folk who lack courage, particularly within AI governance, but I don't really see these kinds of comments as being that likely to change things.

There are times when it can make sense to shout at people, but it normally works better when there's a clear and realistic path that's been sketched out[1]. And when this isn't the case, it tends to simply create strife for very little gain.

  1. ^

    Someone really needs to write an article along the lines of "The case for being bold".

If your complaint is that the default assumption is that these are more or less true, well, my claim is that even though normies tend to see this as a negative signal, it's actually a positive signal for those with good epistemics.

That said, it's important to keep in mind that these aren't directly talking about actual systems (the Orthogonality Thesis is about possible systems and Instrumental Convergence is about incentives).

3
Joseph_Chu
I more or less agree. It's not really a complaint from me. I probably was too provocative in my choice of wording earlier.

Interesting post.

I think it did a good job of explaining why the metacrisis might be relevant from an EA standpoint (the dialogue format was a great choice!). I made a similar (but different) argument - that Less Wrong should be paying attention to the sensemaking space - back in 2021[1] and it may still be helpful for readers who want to get better sense of the scene[2].

Unfortunately, I'm with Amina. Short AI timelines are looking increasingly likely and culture change tends to take a long time, so the argument for prioritising this isn't looking as ... (read more)

As soon as you start charging a fee to be a member, people will become suspicious that you're trying to sign them up because you want their cash, rather than being purely dedicated to charity. It'll also cost you members because they'll have the choice to either spend the money or not feel fully part of the community. This effect will be worse in some countries than others.

If you don't get enough members signing up, then the organisation becomes vulnerable to capture by someone who signs up a bunch of straw members who never show up to meetings except to v... (read more)

I personally haven't updated too much based off this example, as I suspect this model works better in Norway than it would elsewhere.

1
Nithin Ravi🔸
I was thinking similarly; Norwegians seem especially accepting of both democracy and EA ideas, so it's hard to generalize.

Nothing specific. It just seems like it would make sense for them to write an explanation of why they're doing this.

I'm a bit more skeptical, but this sounds fascinating. Would love to hear more.

1
Will_Davison
Great! What questions do you have? What makes you feel skeptical? 

I haven't made a list of existing projects yet, but I hope to do this at some point. 
