finm

Research scholar @ FHI and assistant to Toby Ord. Philosophy student before that. I do a podcast about EA called Hear This Idea.

www.finmoorhouse.com/writing

www.hearthisidea.com

Comments

Space governance - problem profile

This (and your other comments) is incredibly useful, thanks so much. Not going to respond to particular points right now, other than to say many of them stick out as well worth pursuing.

Space governance - problem profile

Thanks for this, I think I agree with the broad point you're making.

That is, I agree that basically all the worlds in which space ends up really mattering this century are worlds in which we get transformative AI (because scenarios in which we start to settle widely and quickly are scenarios in which we get TAI). So, for instance, I agree that there doesn't seem to be much value in accelerating progress on space technology. And I also agree that getting alignment right is basically a prerequisite to any of the longer-term 'flowthrough' considerations.

If I'm reading you right, I don't think your points apply to near-term considerations, such as those around arms control in space.

It seems like a crux is something like: how much precedent-setting or preliminary research now on ideal governance setups doesn't get washed out once TAI arrives, conditional on solving alignment? And my answer is something like: sure, probably not a ton. But if you have a reason to be confident that none of it ends up being useful, it feels like that must be a general reason for thinking that any kind of effort at improving governance, or even at changing values, is rendered moot by the arrival of TAI. And I'm not fully sceptical about those efforts.

Suppose before TAI arrived we came to a strong conclusion: e.g. we're confident we don't want to settle using such-and-such a method, or we're confident we shouldn't immediately embark on a mission to settle space once TAI arrives. What's the chance that work ends up making a counterfactual difference, once TAI arrives? Not quite zero, it seems to me.

So I am indeed on balance significantly less excited about working on long-term space governance things than on alignment and AI governance, for the reasons you give. But not so much that they don't seem worth mentioning.

Ultimately, I'd really like to see [...] More up-front emphasis on the importance of AI alignment as a potential determinant.

This seems like a reasonable point, and one I was/am cognisant of — maybe I'll make an addition if I get time.

(Happy to try saying more about any of the above if useful)

Nuclear Fusion Energy coming within 5 years

I agree that fusion is feasible and will likely account for a large fraction (>20%) of energy supply by the end of the century, if all goes well. I agree that would be pretty great. And yeah, Helion looks promising.

But I don't think we should be updating much on headlines about achieving ignition or breakeven soon. In particular, I don't think these headlines should be significantly shifting forecasts like this one from Metaculus about timelines to >10% of energy supply coming from fusion. The main reason is that there is a very large gap between proof of concept and a cost-competitive supply of energy. Generally speaking, solar will probably remain cheaper per kWh than fusion for a long time (decades), so I don't expect the transition to be very fast.

It's also unclear what this should all mean for EA. One response could be: "Wow, a world with abundant energy would be amazing, we should prioritise trying to accelerate the arrival of that world." But, I don't know, there's already a lot of interested capital flying around — it's not like investors are naive to the benefits. On the government side, the bill for ITER alone was something in the order of $20 billion.

Another response could be: "Fusion is going to arrive sooner than we expected, so the world is soon going to look different from what we expected!" And I'd probably just dispute that the crowd (e.g. the Metaculus forecast above) is getting it especially wrong here in any action-relevant way. But I'd be delighted to be proved wrong.

Concave and convex altruism

Thanks, that's a very good example.

I don't think this actually describes the curve of EA impact per $ overall

For sure.

Past and Future Trajectory Changes

Just wanted to comment that this was a really thoughtful and enjoyable post. I learned a lot.

In particular, I loved the point about how the relative value of trajectory change should depend on the smoothness of your probability distribution over the value of the long-run future.

I'm also now curious to know more about the contingency of the caste system in India. My (original) impression was that the formation of the caste system was somewhat gradual and not especially contingent.

Pre-announcing a contest for critiques and red teaming

For what it's worth, I think I basically endorse that comment.

I definitely think an investigation that starts with a questioning attitude, and ends up less negative than the author's initial priors, should count.

That said, some people probably do already have useful, considered critiques in their heads that they just need to write out. It'd be good to hear them.

Also, presumably (convincing) negative conclusions for key claims are more informationally valuable than confirmatory ones, so it makes sense to explicitly encourage the kind of investigations that have the best chance of yielding those conclusions (because the claims they address look under-scrutinised).

Pre-announcing a contest for critiques and red teaming

Thank you, this is a really good point. By 'critical' I definitely intended to convey something more like "beginning with a critical mindset" (per JackM's comment) and less like "definitely ending with a negative conclusion in cases where you're critically assessing a claim you're initially unsure about". 

This might not always be relevant. For instance, you might set out to find the strongest case against some claim, whether or not you end up endorsing it. As long as that's explicit, it seems fine.

But in cases where someone is embarking on something like a minimal-trust investigation — approaching an uncertain claim from first principles — we should be incentivising the process, not the conclusion!

We'll try to make sure to be clear about that in the proper announcement.

Pre-announcing a contest for critiques and red teaming

Yes, totally. I think a bunch of the ideas in the comments on that post would be a great fit for this contest.

Pre-announcing a contest for critiques and red teaming

Thanks, great points. I agree that we should only be interested in good faith arguments — we should be clear about that in the judging criteria, and clear about what counts as a bad faith criticism. I think the Forum guidelines are really good on this.

Of course, it is possible to strongly disagree with a claim without resorting to bad faith arguments, and I'm hopeful that the best entrants can lead by example.

EA Projects I'd Like to See

The downweighting of AI in DGB was a deliberate choice for an introductory text.

Thanks, that's useful to know.
