Hamish Huggard

Comments

Philosophy PhD Application: Advice on Written Submission

Not a philosopher, but I have overlapping interests.

  1. I'm not sure what you mean here. What's RDM? Robust decision making? So you'd want to formalise decision making in terms of the Bayesian or frequentist interpretation of probability?
  2. Again, I'm not sure what "maximising ambition" means. Could you expand on this?
  3. How would you approach this? Surveys? Simulations? From a probability perspective I'm not sure that there's anything to say here. You choose a prior based on symmetry/maximum-entropy/invariance arguments, then if observations give you more information you update, otherwise you don't.

I suspect a better way to approach topic selection is to find a paper you get excited about, and ask "how can I improve on this research by 10%?" This stops you from straying wildly off the path of "respectable and achievable academic research".

EffectiveAltruismData.com: A Website for Aggregating and Visualising EA Data

Thanks for the suggestion. I don't have a super clear idea of what the main issues/chunks actually are at the moment, but I'll work towards that.

[Creative writing contest] Blue bird and black bird

Very cute. 🙂

I'm curious about your thinking on colour symbolism. On the one hand, ravens are smart and crafty, so "black bird = smart/strategic bird" makes sense. But on the other hand, blue is kinda an EA colour, so at first I thought the blue bird would represent EA. Why did you choose to make the lay-bird a blue bird?

Statistics for Lazy People, Part 1

Thank you. I have corrected the mistake.

The relationship between Lindy, Doomsday, and Copernicus is as follows:

  • The "Copernican Principle" is that "we" are not special. This is a generalisation of how the Earth is not special: it's just another planet in the solar system, not the centre of the universe.
  • In J. Richard Gott's famous paper on the Doomsday Argument, he appeals to the Copernican Principle to assert that "we are also not special in time", meaning that we should expect ourselves to be at a typical point in the history of humanity.
  • The "most typical" point in history is exactly in the middle. Thus your best guess of the longevity of humanity is twice its current age: Lindy's Law. (A quick simulation of this step is sketched below.)
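
To see the middle-point step concretely, here's a quick simulation sketch (the exponential lifetime distribution and the specific numbers are arbitrary illustrative choices; the median ratio doesn't depend on them): observing a process at a uniformly random point of its lifetime makes the median total longevity twice the observed age.

```python
import numpy as np

rng = np.random.default_rng(0)

# Arbitrary illustrative lifetimes; the argument doesn't depend on the distribution.
total_lifetimes = rng.exponential(scale=100.0, size=1_000_000)

# Copernican step: each lifetime is observed at a uniformly random point within it.
observed_ages = total_lifetimes * rng.uniform(size=total_lifetimes.size)

# Median ratio of total lifetime to current age is ~2: Lindy's Law.
print(np.median(total_lifetimes / observed_ages))  # ≈ 2.0
```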

I scraped all public "Effective Altruists" Goodreads reading lists

This is brilliant!

I think we can actually do an explicit expected-utility and value-of-information calculation here:

  • Let one five-star book = one util.
  • Each book's quality can be modelled as a rate $\theta$ of producing stars.
  • The star rating you give a book is the sum of 5 Bernoulli trials with rate $\theta$.
  • The book will produce $\theta$ utils of value per read in expectation.
  • To estimate $\theta$, sum up the total stars awarded $s$ and the total possible stars $n$.
  • The probability distribution is then $\theta \sim \mathrm{Beta}(s + 1, n - s + 1)$ (assuming a uniform prior for simplicity).
  • For any pair of books, we can compute the probability that book 1 is more valuable than book 2 as $P(\theta_1 > \theta_2)$.
  • Let's say there's a prescribed EA reading list. 
  • Let people who encounter the list be probabilistic automata.
  • These automata start at the top of the list, then iteratively either: 1) read the book they are currently looking at, 2) move down to the next item on the list, 3) quit.
  • Intuitively, I think this process will result in books being read geometrically less as you move down the list.
  • For simplicity, let's say the first book is guaranteed to be read, the next book has a 50% chance of being read, then 25%, ..., and the $i$-th book has a $2^{-i}$ chance of being read (with $i$ starting at zero).
  • The expected value of the list is then $\sum_i 2^{-i} \, \mathbb{E}[\theta_i]$, where $\theta_i$ is the rate of the $i$-th book on the list.
  • To calculate the value of information for reading a given book, you enumerate all the possible outcomes (one star, two stars, ..., five stars), calculate the probability of each one, look at how the rankings would change, and re-calculate the expected value of the list. Multiply the expected values by the probabilities, et voilà: the difference from the current expected value of the list is the value of information. (A rough code sketch of the whole calculation follows below.)
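
To make this concrete, here's a rough Python sketch (the $\mathrm{Beta}(s+1, n-s+1)$ posterior, the pairwise comparison, and the $2^{-i}$ readership model follow the bullets above; the beta-binomial posterior predictive inside `value_of_information` and the example `books` numbers are my own illustrative assumptions):

```python
import numpy as np
from scipy import stats

RNG = np.random.default_rng(0)

def posterior(stars_awarded, stars_possible):
    """Beta posterior over a book's star rate theta (uniform Beta(1, 1) prior, as above)."""
    return stats.beta(stars_awarded + 1, stars_possible - stars_awarded + 1)

def prob_better(post_1, post_2, n_samples=100_000):
    """Monte Carlo estimate of P(theta_1 > theta_2) for two books."""
    return float(np.mean(post_1.rvs(n_samples, random_state=RNG)
                         > post_2.rvs(n_samples, random_state=RNG)))

def list_value(books):
    """Expected utils of the list: rank books by posterior mean, then the i-th
    book (i from 0) is read with probability 2**-i and yields E[theta] utils."""
    means = sorted((posterior(s, n).mean() for s, n in books), reverse=True)
    return sum(m * 0.5 ** i for i, m in enumerate(means))

def value_of_information(books, index, n_samples=100_000):
    """Expected gain in list value from one more rating of books[index]:
    enumerate the possible outcomes (0-5 stars under the Bernoulli-sum model),
    weight each by its posterior-predictive probability, and recompute the
    re-ranked list's expected value."""
    s, n = books[index]
    theta = posterior(s, n).rvs(n_samples, random_state=RNG)
    new_value = 0.0
    for stars in range(6):
        p_outcome = float(np.mean(stats.binom.pmf(stars, 5, theta)))
        updated = list(books)
        updated[index] = (s + stars, n + 5)
        new_value += p_outcome * list_value(updated)
    return new_value - list_value(books)

# Hypothetical example: each book is (total stars awarded, total possible stars).
books = [(40, 50), (70, 100), (9, 10)]
print(prob_better(posterior(*books[0]), posterior(*books[1])))  # P(book 0 beats book 1)
print(list_value(books))                                        # expected utils of the list
print(value_of_information(books, index=1))                     # VOI of another rating of book 1
```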

Can I get the data please?

BitBets: A Simple Scoring System for Forecaster Training

It just occurred to me that you don't actually need to convert the forecaster's odds to bits: you can just take the ceiling of the odds themselves, which is more useful for calibrating in the low-confidence range.
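
A minimal sketch of the comparison (the `bits_score` conversion here — round the odds up to the next power of two and count the bits — is my assumption about the original rule, based on the comments in this thread; `ceiling_score` is the variant proposed above):

```python
import math

def bits_score(odds):
    # Assumed original conversion: round the stated odds up to the next
    # power of two and count the bits, i.e. ceil(log2(odds)).
    return math.ceil(math.log2(odds))

def ceiling_score(odds):
    # Proposed variant: just take the ceiling of the odds themselves.
    return math.ceil(odds)

# In the low-confidence range (odds between 1:1 and about 4:1), the ceiling of
# the odds separates forecasts that the bits conversion lumps together.
for odds in [1.5, 2, 3, 4, 8, 16]:
    print(f"{odds:>4}:1 odds  ->  bits: {bits_score(odds)},  ceiling: {ceiling_score(odds)}")
```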

BitBets: A Simple Scoring System for Forecaster Training

Additional note: BitBets is a proper scoring rule, but not strictly proper. If you report odds rounded up to the next power of two, you will achieve the same score in expectation.
