Chris Leong

Topic Contributions


The Case for Non-Technical AI Safety/Alignment Growth & Funding

I still don't know where I stand on governance. Plausibly there will be laws and policies we need passed; but it's also plausible that we will mainly need the government to just stay out of the way and not make things worse, such as by adding a bunch of useless regulations that don't advance safety[1]. But, I suppose even if it's the latter, it's exactly the kind of thing you would need policy people for.

  1. ^

    Ideally, we would be able to slow down AI, but we are unlikely to be able to do this in every country, so this could easily just make things worse.

Sophia's Shortform

Well, if you think of anything, let me know.

Fiction Writing Retreat: Ink in the Abbey

I think Eliezer really needed an editor to cut it down at points.

Sophia's Shortform

Oh, here's another excellent example, the EA Writing Retreat.

Sophia's Shortform

Hmm... Some really interesting thoughts. I generally try to determine whether people are actually making considered counter-arguments vs. repeating cliches, but I take your point that a willingness to voice half-formed thoughts can cause others to assume you're stupid.

I guess in terms of outreach it makes sense to cultivate a sense of practical wisdom so that you can determine when to patiently continue a conversation or when to politely and strategically withdraw so as to save energy and avoid wasting time. This won't be perfect and it's subject to biases as you mentioned, but it's really the best option available.

Sophia's Shortform

I think there's a lot of value in people reaching out to people they know (this seems undervalued in EA, then again maybe it's intentional as evangelism can turn people off). This doesn't seem to trade-off too substantially against more formal movement-building methods which should probably filter more on which groups are going to be most impactful.

In terms of expanding the range of people and skills in EA, that seems to be happening over time: take, for example, the EA blog prize, or the increased focus on PAs. I have no doubt that there are still many useful skills that we're missing, but there's a decent chance that funding would be available if there were a decent team to work on the project.

If my confidence in any of these claims substantially increases or decreases in the next few days I might come back and clarify that (but if doing this becomes a bit of an ugh field, I'm not going to prioritise de-ughing it because there are other ugh-fields that are higher on my list to prioritise de-ughing 😝)

Makes sense

Sophia's Shortform

I agree that we need scope for people to gradually increase their commitment over time. Actually, that's how my journey has kind of worked out.

On the other hand, I suspect that tail people can build a bigger and more impactful campfire. For example, a single Matt Yglesias occasionally posting positive things about EA or EA-adjacent ideas grows our campfire by a lot, and these people are more likely to be the ones who can influence things.

The Berlin Hub: Longtermist co-living space (plan)

Having spent time at the EA hotel, there isn't as much collaboration there as you might expect or hope for. I still think it is worthwhile at least trying the experiment to see if this results in improved collaboration. Beyond this, there is a fundamental split between long-termism and near-termism: near-termist spending on these kinds of projects directly trades off against donating to the Against Malaria Foundation, while there isn't a comparable option in terms of long-termism (i.e. long-termism is less funding-constrained).

Some unfun lessons I learned as a junior grantmaker

It's hard to say some of those without coming off as insulting.

Bad Omens in Current Community Building

Well, haters are gonna hate. Maybe that's too blasé, but as long as we are talking about university groups rather than high schools, the PR risks don't feel too substantial.
