finm

Researcher @ Longview Philanthropy
2435 karma · Joined Apr 2019 · Working (0-5 years) · Oxford, UK
www.finmoorhouse.com/writing

Bio

I do research at Longview Philanthropy. Previously I was a Research scholar at FHI and assistant to Toby Ord. Philosophy at Cambridge before that.

I also do a podcast about EA called Hear This Idea.

www.finmoorhouse.com/writing

www.hearthisidea.com

Posts
35

Comments
132

finm · 7h

I think it is worth appreciating the number and depth of insights that FHI can claim significant credit for. In no particular order:

Note especially how much of the literal terminology was coined on (one imagines) a whiteboard in FHI. “Existential risk” isn't a neologism, but I understand it was Nick who first suggested it be used in a principled way to point to the “loss of potential” thing. “Existential hope”, “vulnerable world”, “unilateralist's curse”, “information hazard”, all (as far as I know) tracing back to an FHI publication.

It's also worth remarking on the areas of study that FHI effectively incubated, and which are now full-blown fields of research:

  • The 'Governance of AI Program' was launched in 2017 to study questions around policy and advanced AI, beyond the narrowly technical questions. That project was spun out of FHI to become the Centre for the Governance of AI. As far as I understand, it was the first serious research effort on what's now called “AI governance”.
  • From roughly 2019 onwards, the working group on biological risks seems to have been fairly instrumental in making the case for biological risk reduction as a global priority, specifically because of engineered pandemics.
  • If research on digital minds (and their implications) grows to become something resembling a 'field', then the small team and working groups on digital minds can make a claim to precedence, as well as early and more recent published work.

FHI was staggeringly influential; more than many realise.

Answer by finm · Feb 07, 2024

The singer-songwriter José González has mentioned being inspired by The Precipice and apparently other EA-related ideas. Take the charmingly scout-mindset song 'Head On':

Speak up
Stand down
Pick your battles
Look around
Reflect
Update
Pause your intuitions and deal with it
Head on

[Copied from an email exchange with Vasco, slightly embellished]

I think the probability of a flat universe is ~0 because the distribution describing our knowledge about the curvature of the universe is continuous, whereas a flat universe corresponds to a discrete curvature of 0.

Sure, if you put infinitesimal weight on a flat universe in your prior (true if your distribution is continuous over a measure of spatial curvature and you think the universe is infinite only if spatial curvature = 0), then no observation of (local) curvature is going to be enough. On your framing, I think the question is just: why does the distribution need to be continuous? Consider: "the falloff of light intensity / gravity etc. is very close to being proportional to 1/r², but presumably the exponent isn't exactly 2, since our distribution over n in 1/rⁿ is continuous".
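To make this concrete, here's a toy Bayesian calculation (my own sketch, not from the email; the prior width and measurement noise values are made up). With a purely continuous prior over curvature, the posterior probability of an exactly flat universe stays at zero no matter how precise the flatness measurement; a prior with a point mass at zero curvature behaves very differently.

```python
# Toy Bayesian update (my own sketch, numbers made up): compare a prior with a
# point mass on "exactly flat" (curvature k = 0) against a purely continuous
# prior. However precisely we measure k to be near 0, the continuous prior
# leaves posterior probability 0 on exact flatness.

import numpy as np
from scipy import stats

def posterior_prob_flat(point_mass, prior_width, meas_sigma, k_obs=0.0):
    """P(universe exactly flat | k_obs), under a mixed prior: `point_mass` on
    k = 0, the rest spread as N(0, prior_width) over nonzero curvature."""
    # Likelihood of the measurement if the universe is exactly flat.
    like_flat = stats.norm.pdf(k_obs, loc=0.0, scale=meas_sigma)
    # Marginal likelihood under the continuous component: if k ~ N(0, prior_width)
    # and k_obs ~ N(k, meas_sigma), then k_obs ~ N(0, sqrt(prior_width^2 + meas_sigma^2)).
    like_curved = stats.norm.pdf(k_obs, loc=0.0, scale=np.hypot(prior_width, meas_sigma))
    numerator = point_mass * like_flat
    return numerator / (numerator + (1.0 - point_mass) * like_curved)

for sigma in [1e-2, 1e-4, 1e-6]:
    print(f"measurement sigma = {sigma:g}: "
          f"P(flat) = {posterior_prob_flat(0.5, 1.0, sigma):.6f} with a point mass, "
          f"{posterior_prob_flat(0.0, 1.0, sigma):.1f} with a purely continuous prior")
```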

all the evidence for infinity is coming from having some weight on infinity in our prior.

'All' in the sense that you need nonzero non-infinitesimal weight on infinity in your prior, but not in the sense that your prior is the only thing influencing your credence in infinity. Presumably observations of local flatness do actually upweight hypotheses about the universe being infinite, or at least keep them open if you are open to the possibility in the first place. And I could imagine other things counting as more indirect evidence, such as how well or poorly our best physical theories fit with infinity.

[Added] I think this speaks to something interesting about a picture of theoretical science suggested by a subjective Bayesian attitude to belief-forming in general, on which we start with some prior distribution(s) over some big (continuous?) hypothesis space(s), and observations tell us how to update our priors. But you might think that's a weird way to figure out which theories to believe, because e.g. (i) the hypothesis space is indefinitely large such that you should have infinitesimal or very small credence in any given theory; (ii) the hypothesis space is unknown in some important way, in which case you can't assign credences at all, or (iii) theorists value various kinds of simplicity or elegance which are hard to cash out in Bayesian terms in a non-arbitrary way. I don't know where I come down on this but this is a case where I'm unusually sympathetic to such critiques (which I associate with Popper/Deutsch[1]). 

[Continuing email] I do agree that "the universe is infinite in extent" (made precise) is different from "for any size, we can't rule out the universe being at least that big", and that the first claim is of a different kind. For instance, your distribution over the size of the universe could have an infinite mean while implying certainty that the universe has some finite size (e.g. if that distribution has density proportional to 1/x² for sizes x ≥ 1, as worked through below).
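Checking that example explicitly (my own working; the 1/x² density and the lower cutoff at 1 are illustrative choices rather than anything from the original exchange):

```latex
% Illustrative density over the size X of the universe: p(x) = x^{-2} for x >= 1.
\begin{align*}
  \int_{1}^{\infty} x^{-2}\,\mathrm{d}x &= 1
    && \text{(a valid probability density)} \\
  \mathbb{E}[X] = \int_{1}^{\infty} x \cdot x^{-2}\,\mathrm{d}x
    &= \int_{1}^{\infty} \frac{\mathrm{d}x}{x} = \infty
    && \text{(infinite mean)} \\
  \Pr(X < \infty) &= 1
    && \text{(yet every draw is some finite size)}
\end{align*}
```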

That does put us in a weird spot though, where all the action seems to be in your choice of prior.

I don't know how relevant it is that the axiom of infinity is independent of the other axioms of ZFC, unless you think that all true mathematical claims are made true by actual physical things in the world (JS Mill believed something like this, I think). Then you might have thought you have independent reason to believe (i) the other ZFC axioms, and, if so, believing that (ii) those axioms imply the axiom of infinity, you'd be forced to believe in an actual physical infinity. But that has the same suspect "synthetic a priori" character as ontological arguments for God's existence, and is moot in any case because (ii) is false!

For what it's worth, as a complete outsider I feel surprised by how little serious discussion there is in e.g. astrophysics / philosophy of physics around whether the universe is infinite in some way. It seems like such a big deal; indeed an infinitely big deal!

  1. ^

    Though I don't think these views would have much constructive to say about how much credence to put on the universe being infinite, since they'd probably reject the suggestion that you can or should be trying to figure out what credence to put on it. Paging @ben_chugg since I think he could say if I'm misrepresenting the view.

Very cool! Feel free to share your paper if you're able, I'd be curious to see.

I don't know how to interpret the image, but this makes sense:

With a [small] attack surface (grid) for each actor, the budget multiplication should have no effect on loss rates, because all vulnerabilities are found and it’s just a matter of who found them first, which is not affected by budget multiplication. However, with a [large attack surface], the multiplication of budgets strictly benefits the attacker, because the defenders will ~never check the same squares that the attacker checks.
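Here's a minimal Monte Carlo sketch of that overlap effect (my own toy model, not the paper's code; the grid sizes and budgets are made up):

```python
# Toy Monte Carlo of the overlap effect in the quoted passage (my own sketch,
# not the paper's model). Attacker and defender each independently check
# `budget` random cells out of `surface` possible vulnerabilities; an attacker
# find is "unpatched" if no defender checked that cell. With a small surface,
# scaling up budgets lets the defender cover every cell the attacker checks;
# with a large surface the checked sets barely overlap, so multiplying both
# budgets mostly just multiplies the attacker's unpatched finds.

import random

def mean_unpatched_finds(surface, attacker_budget, defender_budget, trials=2000):
    total = 0
    for _ in range(trials):
        attacker = set(random.sample(range(surface), min(attacker_budget, surface)))
        defender = set(random.sample(range(surface), min(defender_budget, surface)))
        total += len(attacker - defender)
    return total / trials

for surface in (20, 10_000):
    base = mean_unpatched_finds(surface, 10, 10)
    boosted = mean_unpatched_finds(surface, 100, 100)
    print(f"surface = {surface}: unpatched attacker finds "
          f"{base:.2f} -> {boosted:.2f} after 10x-ing both budgets")
```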

Copying a comment from Substack:

If offence and defence both get faster, but all the relative speeds stay the same, I don’t see how that in itself favours offence (we get ICBMs, but the same rocketry + guidance etc tech means missile defence gets faster at the same rate). But ideas like this make sense, e.g. if there are any fixed lags in defence (like humans don’t get much faster at responding but need to be involved in defensive moves) then speed favours offence in that respect.

That is to say, there could be a 'faster is different' effect, where in the AI case things might move too chaotically fast (faster than the human-friendly timescales of previous tech) to effectively defend against. For instance, your model of cybersecurity might be a kind of cat-and-mouse game, where defenders are always on the back foot looking for exploits, but patch them with a small (fixed) time lag. That lag might have been insignificant historically, but as everything speeds up, a fixed absolute lag starts to matter. Not sure I buy this though.
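A toy version of that fixed-lag story, just to show the arithmetic (my own model and made-up numbers, not anything from the Substack exchange):

```python
# Toy arithmetic for the fixed-lag story above (my own model and numbers).
# Exploits arrive at `exploit_rate` per day and each does `damage_rate` per day
# until patched. If the patch lag shrinks in step with an overall speed-up s,
# total damage per day is unchanged; if the lag is fixed (humans stay in the
# loop), damage grows linearly with s.

def damage_per_day(exploit_rate, damage_rate, patch_lag_days):
    # Each live exploit does damage for `patch_lag_days` before it is patched.
    return exploit_rate * damage_rate * patch_lag_days

base_rate, dmg, lag = 1.0, 10.0, 2.0         # made-up baseline values
for s in (1, 10, 100):                        # overall technological speed-up
    scaled = damage_per_day(base_rate * s, dmg, lag / s)   # defence speeds up too
    fixed = damage_per_day(base_rate * s, dmg, lag)        # human-bound fixed lag
    print(f"s = {s}: damage/day {scaled:.0f} if the lag scales down, {fixed:.0f} if it stays fixed")
```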

A related vague theme is that more powerful tech in some sense ‘turns up the volatility/variance’. And then maybe there’s some ‘risk of ruin’ asymmetry if you could dip below a point that’s irrecoverable, but can’t rise irrecoverably above a point. Going all in on such risky bets can still be good on expected value grounds, while also making it much more likely that you get wiped out, which is the thing at stake.

Also, embarrassingly, I realise I don't have a very good sense of how exactly people operationalise the 'offence-defence balance'. One way could be something like 'cost to the attacker of doing $1M of damage in equilibrium', or in terms of relative spending, like Garfinkel and Dafoe do ("if investments into cybersecurity and into cyberattacks both double, should we expect successful attacks to become more or less feasible"). Or maybe something about the defender's cost, per unit of attacker spending, of holding on to some resource (or the attacker's cost, per unit of defender spending, of seizing it).

This is important because I don't currently know how to say that some technology is more or less defence-dominant than another, other than in a hand-wavey, intuitive way. But in hand-wavey terms it sure seems like bioweapons are more offence-dominant than, say, fighter planes, because it's already the case that you need to spend a lot of money to prevent most of the damage someone could cause with not much money at all.
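For concreteness, here's one crude way the Garfinkel and Dafoe 'doubling test' mentioned above could be coded up (the contest functions are my own placeholders, not anything from their paper):

```python
# Crude sketch of the Garfinkel-Dafoe-style "doubling test" (the contest
# functions here are my own placeholders): read the balance off from whether
# doubling *both* attack and defence budgets makes a successful attack more or
# less likely.

def doubling_test(p_success, attack_budget, defence_budget):
    return p_success(attack_budget, defence_budget), p_success(2 * attack_budget, 2 * defence_budget)

# Scale-free contest: doubling both sides changes nothing (neutral balance).
scale_free = lambda a, d: a / (a + d)

# The attack has a large fixed cost (e.g. developing the capability at all):
# as budgets grow, that fixed cost matters less, so doubling both favours offence.
fixed_cost_attack = lambda a, d: max(a - 50.0, 0.0) / (max(a - 50.0, 0.0) + d)

for name, f in [("scale-free", scale_free), ("fixed attack cost", fixed_cost_attack)]:
    before, after = doubling_test(f, 100.0, 100.0)
    print(f"{name}: P(attack succeeds) {before:.2f} -> {after:.2f} after doubling both budgets")
```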

I see the AI stories — at least the ones I find most compelling — as being kinda openly idiosyncratic and unprecedented. The prior from previous new tech very much points against them, as you show. But the claim is just: yes, but we have stories about why things are different this time ¯\_(ツ)_/¯

Great post.

What a great resource, thanks for putting it together!

Opinionated lists like this feel significantly more useful than comprehensive but unordered lists of relevant resources, because: (i) for most literatures, you're likely to get most of the good insights from reading a small, standout minority of everything written; and (ii) it's often not obvious to an outsider which resources are best in this respect. I hadn't heard of many of the books you rate highly.

Incidentally: consider reformatting the papers to not be headers? It makes the navigation bar feel cluttered to me.

Congrats Toby, excited to see what you get up to in the new role! And thanks for all your work on Amplify.

(I'd guess the different titles mostly just reflect the difference in seniority? cf. "program officer" vs "program associate")

Thanks for these details! I updated the relevant paragraph to include them.

I got a lot of value out of Guesstimate, and this (plus Squiggle itself) looks like a big step up. So thanks, and kudos!

(Also — both this new site and the Squiggle lang seem generally useful far beyond EA / x-risk contexts; e.g. for consultancies / policy planning / finance. I'd be interested to see if it catches on more widely.)
