Will Howard

Software Engineer @ Centre for Effective Altruism
854 karma · Joined Aug 2022 · London, UK

Bio

I'm a developer on the EA Forum (the website you are currently on). You can contact me about forum stuff at will.howard@centreforeffectivealtruism.org or about anything else at w.howard256@gmail.com


Thanks for reporting!

  • I'll think about how we could handle this one better. It's tricky because the doc itself has a title, and then people often rewrite the title as a heading inside the doc, so there isn't an obvious choice for what to use as the title. But it may be that the heading case is a lot more common, in which case we should make that the default.
  • That was indeed intended as a feature, because a lot of people use blank lines as a paragraph break. We can add that to footnotes too.

I'll set a reminder to reply here when we've done these.

Cosmologist: Well, I’m a little uncomfortable with this, but I’ll give it a shot. I will tentatively say that the odds of doom are higher than 1 in a googol. But I don’t know the order of magnitude of the actual threat. To convey this:

I’ll give a 1% chance it’s between 10^-100 and 10^-99

A 1% chance it’s between 10^-99 and 10^-98

A 1% chance it’s between 10^-98 and 10^-97,

And so on, all the way up to a 1% chance it’s between 1 in 10 and 100%.

I think the root of the problem in this paradox is that this isn't a very defensible humble/uniform prior, and if the cosmologist were to think it through more they could come up with one that gives a lower p(doom) (or at least, doesn't look much like the distribution stated initially).

So, I agree with this as a criticism of pop-Bayes in the sense that people will often come up with a quick uniform-prior-sounding explanation for why some unlikely event has a probability of around 1%, but I think the problem here is that the prior is wrong[1], rather than a failure to consider the whole distribution, seeing as a distribution over probabilities collapses to a single probability anyway.

Imo the deeper problem is how to generate the correct prior, which can be a problem due to "pop Bayes", but also remains when you try to do the actual Bayesian statistics.

Explanation of why I think this is quite an unnatural estimate in this case

Disclaimer: I too have no particular claim on being great at stats, so take this with a pinch of salt

The cosmologist is supposing a model where the universe as it exists is analogous to the result of a single Bernoulli trial, where the "yes" outcome is that the universe is a simulation that will be shut down. Writing this Bernoulli distribution as $\text{Bernoulli}(\theta)$[2], they are then claiming uncertainty over the value of $\theta$. So far so uncontroversial.

They then propose to take the pdf over $\theta$ to be:

$$p(\theta) = \frac{c}{\theta}, \quad \theta \in [10^{-100}, 1] \tag{A}$$

where $c$ is a normalisation constant. This is the distribution that results in the property that each OOM has an equal probability[3]. Questions about this:

  1. Is this the appropriate non-informative prior?
  2. Is this a situation where it's appropriate to appeal to a non-informative prior anyway?

Is this the appropriate non-informative prior?

I will tentatively say that the odds of doom are higher than 1 in a googol. But I don’t know the order of magnitude of the actual threat.

The basis on which the cosmologist chooses this model is an appeal to a kind of "total uncertainty"/non-informative-prior style reasoning, but:

  • They are inserting a concrete value of $10^{-100}$ as a lower bound
  • They are supposing the total uncertainty is over the order of magnitude of the probability, which is quite a specific choice

This results in a model where $E[\theta] = \frac{1 - 10^{-100}}{\ln(10^{100})} \approx 1/230$ in this case, so the expected probability is very sensitive to this lower bound parameter, which is a red flag for a model that is supposed to represent total uncertainty.
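To make that sensitivity concrete, here's a quick numerical check (my sketch, not part of the original argument):

```python
import numpy as np

def expected_theta(k):
    """E[theta] under the log-uniform prior p(theta) = c/theta
    on [10**-k, 1], with normalisation constant c = 1/(k ln 10)."""
    c = 1 / (k * np.log(10))
    return c * (1 - 10.0**-k)  # integral of theta * (c/theta) over [10**-k, 1]

print(expected_theta(100))   # ~0.00434, i.e. ~1/230
print(expected_theta(1000))  # ~0.000434: E[theta] just tracks the arbitrary lower bound
```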

There is apparently a generally accepted way to generate non-informative priors for parameters in statistical models, which is to use a Jeffreys prior. The Jeffreys prior[4] for the Bernoulli distribution is:

$$p(\theta) \propto \frac{1}{\sqrt{\theta(1-\theta)}} \tag{B}$$

This doesn't look much like equation (A) that the cosmologist proposed. There are parameters for which the Jeffreys prior is $\propto 1/\theta$ (as in (A)), such as the standard deviation of a normal distribution, but these tend to be scale parameters that can range from 0 to $\infty$. Using it for a probability does seem quite unnatural when you contrast it with these examples, because a probability has hard bounds at 0 and 1.
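As a quick sanity check (my sketch, using scipy), the Jeffreys prior (B) is the Beta(1/2, 1/2) distribution, which behaves very differently from (A) near the bounds:

```python
from scipy import stats

# Jeffreys prior for the Bernoulli parameter: Beta(1/2, 1/2).
# Unlike the 1/theta prior (A), it is symmetric about 0.5 and puts
# equal (finite) mass near both hard bounds, 0 and 1.
jeffreys = stats.beta(0.5, 0.5)
print(jeffreys.mean())       # 0.5 -- the expected probability used below
print(jeffreys.cdf(1e-100))  # ~6.4e-51: essentially no mass below a googolth
```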

Is this a situation where it's appropriate to appeal to a non-informative prior anyway?

Using the recommended non-informative prior (B), we get an expected probability of 0.5, which makes sense for the class of problems concerned with something that either happens or doesn't, where we are totally uncertain.

I expect the cosmologist would take issue with this as well, and say "ok, I'm not that uncertain". Some reasons they would be right to take issue are:

  1. A general prior that "out of the space of things that could be the case, most are not the case"[5] should update the probability towards 0, and in fact massively so, such that in the absence of any other evidence you should think the probability is vanishingly small, as you would for the question "Is the universe riding on the back of a giant turtle?"
  2. The reason to consider this simulation possibility in the first place is not just that it is in principle allowed by the known laws of physics, but that there is a specific argument for why it should be the case. This should update the probability away from 0.

The real problem the cosmologist has is uncertainty in how to incorporate the evidence of (2) into a probability (distribution). Clearly they think there is enough to the argument to not immediately reject it out of hand, or they would put it in the same category as the turtle-universe, but they are uncertain about how strong the argument actually is and therefore how much it should update their default-low prior.

...

I think this deeper problem gets related to the idea of non-informative priors in Bayesian statistics via a kind of linguistic collision.

Non-informative priors are about having a model which you have not yet updated based on evidence, so you are "maximally uncertain" about the parameters. In the case of having evidence only in the form of a clever argument, you might think "well I'm very uncertain about how to turn this into a probability, and the thing you do when you're very uncertain is use a non-informative prior". You might therefore come up with a model where the parameters have the kind of neat symmetry-based uncertainty that you tend to see in non-informative priors (as the cosmologist did in your example).

I think these cases are quite different though, arguably close to being opposites. In the second (the case of having evidence only in the form of a clever argument), the problem is not a lack of information, but that the information doesn't come in the form of observations of random variables. It's therefore hard to come up with a likelihood function based on this evidence, and so I don't have a good recommendation for what the cosmologist should say instead. But I think the original problem of how they end up with a 1 in 230 probability is due to a failed attempt to avoid this by appealing to a non-informative prior over order of magnitude.

  1. ^

    There is also a meta problem where the prior will tend to be too high rather than too low, because probabilities can't go below zero, and this leads to people on average being overly spooked by low probability events

  2. ^

    $\theta$ being the "true probability". I'm using $\theta$ rather than $p$ because 1) in general, parameters of probability distributions don't need to be probabilities themselves, e.g. the mean of a normal distribution, 2) $\theta$ is a random variable in this case, so talking about the probability of $p$ taking a certain value could be confusing, and 3) it's what is used in the linked Wikipedia article on Jeffreys priors

  3. ^

    The probability mass in each decade is $\int_{10^{-(k+1)}}^{10^{-k}} \frac{c}{\theta}\,d\theta = c\ln 10$, the same for every $k$, so each of the 100 orders of magnitude between $10^{-100}$ and 1 carries probability 1/100

  4. ^

    There is some controversy about whether this is the right prior to use, but whatever the right one is, it would give a similar result ($E[\theta]$ around 0.5)

  5. ^

    For some things you can make a mutual exclusivity + uncertainty argument for why the probability should be low. E.g. for the case of the universe riding on the back of the turtle you could consider all the other types of animals it could be riding on the back of, and point out that you have no particular reason to prefer a turtle. For the simulation argument and various other cases it's trickier because they might be consistent with lots of other things, but you can still appeal to Occam's razor and/or viewing this as an empirical fact about the universe

Ok nested bullets should be working now :)

I have thought this might be quite useful to do. I would guess (people can confirm/correct me) a lot of people have a workflow like:

  1. Edit post in Google doc
  2. Copy into Forum editor, make a few minor tweaks
  3. Realise they want to make larger edits, go back to the Google doc to make these, requiring them to either copy over or merge together the minor tweaks they have made

For this case, being able to import/export both ways would be useful. That said, it's much harder to do the other way (we would likely have to build up the Google doc as a series of edits via the API, whereas in our direction we can handle the whole post exported as HTML quite naturally), so I wouldn't expect us to do this in the near future, unfortunately.
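For illustration, the easy (import) direction is essentially a single API call, something like this sketch using the Google Drive API (`creds` and `DOC_ID` are placeholders, and this isn't necessarily how the Forum implements it):

```python
from googleapiclient.discovery import build

# Sketch: fetch the whole doc as a single HTML document, which can then be
# converted to the post editor's format in one pass. The reverse direction
# has no equivalent; you'd have to reconstruct the doc via many batchUpdate
# calls to the Docs API.
drive = build("drive", "v3", credentials=creds)  # creds: placeholder for valid Google credentials
html = drive.files().export(fileId=DOC_ID, mimeType="text/html").execute()
```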

Yep images work, and agree that nested bullet points are the biggest remaining issue. I'm planning to fix that in the next week or two.

Edit: Actually, I just noticed the cropping issue: images that are cropped in Google Docs get uncropped when imported. That's pretty annoying. There is no way to carry over the cropping, but we could flag these images to make sure you don't accidentally submit a post with them uncropped.

You can now import posts directly from Google Docs

Plus, internal links to headers[1] will now be mapped over correctly. To import a doc, make sure it is public or shared with "eaforum.posts@gmail.com"[2], then use the widget on the new/edit post page:

Importing a doc will create a new (permanently saved) version of the post, but will not publish it, so it's safe to import updates into posts that are already published. You will need to click the "Publish Changes" button to update the live post.

Everything that previously worked on copy-paste[3] will also work when importing, with the addition of internal links to headers (which only work when importing).

There are still a few things that are known not to work:

  • Nested bullet points (these are working now)
  • Cropped images get uncropped
  • Bullet points in footnotes (these will become separate un-bulleted lines)
  • Blockquotes (there isn't a direct analog of this in Google docs unfortunately)

There might be other issues that we don't know about. Please report any bugs or give any other feedback by replying to this quick take, you can also contact us in the usual ways.

Appendix: Version history

There are some minor improvements to the version history editor[4] that come along with this update:

  • You can load a version into the post editor without updating the live post; previously you could only hard-restore versions
  • The version that is live[5] on the post is shown in bold

Here's what it would look like just after you import a Google doc, but before you publish the changes. Note that the latest version isn't bold, indicating that it is not showing publicly:

  1. ^

    Previously the link would take you back to the original doc, now it will take you to the header within the Forum post as you would expect. Internal links to bookmarks (where you link to a specific text selection) are also partially supported, although the link will only go to the paragraph the text selection is in

  2. ^

    Sharing with this email address means that anyone can access the contents of your doc if they have the URL, because they could go to the new post page and import it. At least they can't access the comments this way

  3. ^

    I'm not sure how widespread this knowledge is, but previously the best way to copy from a Google doc was to first "Publish to the web" and then copy-paste from the published version. In particular, this handles footnotes and tables, whereas pasting directly from a regular doc doesn't. The new importing feature should be equivalent to this publish-to-web copy-pasting, so it will handle footnotes, tables, images, etc., and it additionally supports internal links

  4. ^

    Accessed via the "Version history" button in the post editor

  5. ^

    For most intents and purposes you can think of "live" as meaning "showing publicly". There is a bit of a sharp corner in this definition, in that the post as a whole can still be a draft.

    To spell this out: There can be many different versions of a post body, only one of these is attached to the post, this is the "live" version. This live version is what shows on the non-editing view of the post. Independently of this, the post as a whole can be a draft or published.

Very reasonable, I think the project is great as is. I just have one more newsletter-related suggestion:

It's a lot cheaper to collect emails than it is to do the rest of the work related to sending out automated updates, so it could be worth doing that to take advantage of the initial spike in interest (without making any promises as to whether there will be updates). This could just be a link to a Google Form on the website if you wanted it to be really simple to implement.

Just about oils specifically:

My best guess is that either freezing or storing them as a liquid in an inert environment could be quite economical, very rough OOM maths:

For frozen storage

Apparently ice costs ~$0.01/kg to produce. This is just the upfront cost of initially freezing; oil would be similar or lower because it has a lower heat capacity and a similar freezing point. The cost of keeping it frozen is not as easy to work out, but I would still say "well, the square-cube law takes care of this if you are freezing in large enough quantities", so I would be quite surprised if it were way more than the $3.7/year depreciation cost (OOM logic: even if it melted and you re-froze it every day, that would only cost $3.65/year).
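Spelling out that worst case (my arithmetic, using the ice figure above):

```python
freeze_cost = 0.01               # $/kg to freeze once (the ~$0.01/kg ice figure)
worst_case = freeze_cost * 365   # melt and re-freeze the entire store every day
print(worst_case)                # $3.65 per kg per year
```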

For liquid storage

You can store crude oil for something like $1.2 to $6[1] per barrel per year, with the cheapest method being putting it in salt caverns, but other methods are not way more expensive. This corresponds to about 2 person-years of calories with palm oil ($0.6 to $3 per person-year for storage; rough conversion sketched below). There are some reasons to think storing an edible oil[2] could be more expensive:

  • You would probably need to backfill with an inert gas to preserve it
  • Other food-grade safety-related things? Although I think this would be a misguided concern tbh, because it's intended for a use case where you would otherwise starve, so it only has to not kill you; it could be quite contaminated

And reasons to think it would be cheaper:

  • You would be storing for a known, very long, amount of time, so you could save on things required for quick access to the oil (like pumps)
  • It's easier to handle (less flammable, no noxious gases, etc.)

I would guess the cheaper side would win here, as backfilling with an inert gas doesn't seem very hard if you have a large enough volume. Apparently oil tankers already do this (not sure about salt cavern storage) so this may be priced in already.
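The rough conversion behind those per-person-year numbers (my arithmetic; the density, kcal/g, and kcal/day figures are my assumptions):

```python
# Convert crude-oil storage quotes ($/barrel/year) into $/person-year of calories.
# Assumptions (mine): ~0.89 kg/l oil density, ~9 kcal/g, 2000 kcal/day diet.
oil_kg = 158 * 0.89                               # ~140 kg of oil per barrel
person_years = oil_kg * 1000 * 9 / (2000 * 365)   # ~1.73 person-years of calories

for cost in (1.2, 6.0):                   # the $/barrel/year range quoted above
    print(round(cost / person_years, 2))  # ~$0.69 and ~$3.46 per person-year
```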

  1. ^

    "Global Platts reveal that the price of onshore storage varies between 10 and 50 cents per barrel per month.", then adjusted based on 158 litres/barrel, corresponding to ~140kg of palm oil, which is 2 person-years worth

  2. ^

    You couldn't actually use palm oil in this case, because it's a solid, but e.g. sunflower oil has similar calories/$

This is a great idea, I just submitted a project. I also wrote it up as a post, but your post was what prompted me to write it :)

Something I tend to find with projects like this[1] is that they can be forgotten about after the initial launch, because they're not a destination site, so there is no way for people to naturally come back to them. Have you thought about doing a newsletter or similar with an update on the projects that are added? I think it could be fairly infrequent (monthly) and automated, and still be quite useful.

Also some minor feedback: submitting didn't work initially because of some problem with the description field. I removed a url and some line breaks and then it worked.

  1. ^

    i.e. the UnfinishedImpact project, not my idea

The thing that stands out to me as clearly seeming to go wrong is the lack of communication from the board during the whole debacle. Given that the final best guess at the reasoning for their decision seems like something they could have explained[1], it does seem like an own goal that they didn't try to do so at the time.

They were getting clear pressure from the OpenAI employees to do this, for instance: it was one of the main complaints in the employee letter. And from talking to a couple of OAI employees, I'm fairly convinced that this was sincere (i.e. they were just as in the dark as everyone else, and this was at least one of their main frustrations).

I've heard a few people make a comparison to other CEO-stepping-down situations, where it's common for things to be relatively hush-hush and "taking time out to spend with their family". I think this isn't a like-for-like comparison, because in those cases it's usually a mutual agreement between the board and the CEO, for them both to save face and preserve the reputation of the company. In the case of a sudden unilateral firing, it seems more important to have your reasoning ready to explain publicly (or even privately, to the employees).

It's possible of course that there are some secret details that explain this behaviour, but I don't think there's any reason to be overly charitable in assuming this. If there was some strategic tradeoff that the board members were making it's hard to see what they were trading off against because they don't seem to have ended up with anything in the deal[2]. I also don't find "safety-related secret" explanations that compelling because I don't see why they couldn't have said this (that there was a secret, not what it was). Everyone involved was very familiar with the idea that AI safety infohazards might exist so this would have been a comprehensible explanation.

If I put myself in the position of the board members I can much more easily imagine feeling completely out of my depth in the situation that happened and ill-advisedly doubling down on this strategy of keeping quiet. It's also possible they were getting bad advice to this effect, as lawyers tend to tell you to keep quiet, and there is general advice out there to "not engage with the twitter mob".

  1. ^

    Several minor fibs from Sam, saying different things to different board members to try and manipulate them. This does technically fit with the "not consistently candid" explanation but that was very cryptic without further clarification and examples

  2. ^

    To frame this the other way, if they had kept quiet and then been given some lesser advisory position in the company afterwards you could more easily reason that some face-saving dealing had gone on
