NunoSempere

I'm an independent researcher, hobbyist forecaster, programmer, and aspiring effective altruist.

In the past, I've studied Maths and Philosophy, dropped out in exasperation at the inefficiency; picked up some development economics; helped implement the European Summer Program on Rationality during 2017, 2018 and 2019, and SPARC during 2020; worked as a contractor for various forecasting and programming projects; volunteered for various Effective Altruism organizations, and carried out many independent research projects. In a past life, I also wrote a popular Spanish literature blog, and remain keenly interested in Spanish poetry.

I like to spend my time acquiring deeper models of the world, and a good fraction of my research is available on nunosempere.github.io.

With regard to forecasting, I am LokiOdinevich on GoodJudgementOpen and Loki on CSET-Foretell, and I have been running a Forecasting Newsletter since April 2020. I also enjoy winning bets against people too confident in their beliefs.

I was a Future of Humanity Institute 2020 Summer Research Fellow, and I'm working on a grant from the Long Term Future Fund to do "independent research on forecasting and optimal paths to improve the long-term." You can share feedback anonymously with me here.

Sequences

Estimating value
Forecasting Newsletter

Comments

Shapley values: Better than counterfactuals

Roses are redily
counterfactuals sloppily
but I don't thinkily
that we should use Shapily 

Base Rates on United States Regime Collapse

I think that your probabilities are too high, because you are either not processing enough data or not making full use of the data you have. For example, the new sovereign state prior (3%) would assume something like all countries having the same chance of popping out a state, which clearly seems not to be the case.

You might want to take a look at or contact the authors from the Rulers, Elections and Irregular Governance (REIGN) dataset/CoupCast, which has way more data behind it.
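
To make the point about the uniform prior concrete, here is a toy calculation in the same spirit; every number in it is made up purely for illustration, not an estimate:

// Toy numbers, made up purely for illustration (not estimates):
// suppose ~2 new sovereign states appear per decade worldwide.
let newStatesPerYear = 2 / 10
let numCountries = 195

// A uniform prior spreads that rate evenly over every country.
let uniformPerCountryPerYear = newStatesPerYear / numCountries // ~0.001 per year

// But if, say, 95% of that rate is concentrated in a few dozen unstable
// regions, the implied rate for a stable country is much lower.
let stableCountries = 150
let stablePerCountryPerYear = (newStatesPerYear * 0.05) / stableCountries // ~0.00007 per year

console.log({ uniformPerCountryPerYear, stablePerCountryPerYear })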

Org Update

Any thoughts about dividing far future into AI and non-AI? Also, I'm surprised to see GPI on "Infrastructure" rather than on "Far future".

Getting a feel for changes of karma and controversy in the EA Forum over time

I would be interested in how to circumvent this for future analysis.

You can query by year, and then aggregate the years. From a past project, in nodejs:

/* Imports */
import fs from "fs"
import axios from "axios"

/* Utilities */
let print = console.log;
let sleep = (ms) => new Promise(resolve => setTimeout(resolve, ms))

/* Support function */
let graphQLendpoint = 'https://www.forum.effectivealtruism.org/graphql/'
async function fetchEAForumPosts(start, end){
  // Fetch posts published between start and end (ISO date strings)
  // from the EA Forum GraphQL API.
  let response = await axios(graphQLendpoint, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    data: JSON.stringify({ query: `
       {
        posts(input: {
          terms: {
            after: "${start}"
            before: "${end}"
          }
          enableTotal: true
        }) {
          totalCount
          results{
            pageUrl
            user {
              slug
              karma
            }
          }
        }
      }`
    }),
  })
  // Guard against missing fields in the response.
  return response?.data?.data?.posts?.results ?? null
}

/* Body */
let years = [];
for (let i = 2005; i <= 2021; i++) {
  years.push(i);
}

// Example, getting only 1 year.
let main0 = async () => {
  let data = await fetchEAForumPosts("2005-01-01","2006-01-01")
  console.log(JSON.stringify(data,null,2))
}
//main0()

// Actual body
let main = async () => {
  let results = []
  for(let year of years){
    print(year)
    let firstDayOfYear = `${year}-01-01`
    let firstDayOfNextYear = `${year+1}-01-01`
    let data = await fetchEAForumPosts(firstDayOfYear, firstDayOfNextYear)
    //console.log(JSON.stringify(data,null,2))
    //console.log(data.slice(0,5))
    results.push(...data)
    await sleep(5000)
  }
  print(results)
  fs.writeFileSync("eaforumposts.json", JSON.stringify(results, 0, 2))
}
main()

Mundane trouble with EV / utility

So here is something which sometimes breaks people: You're saying that you prefer A = 10% chance of saving 10 people over B = 1 in a million chance of saving a billion lives. Do you still prefer a 10% chance of A over a 10% chance of B?

If you do, note how you can be Dutch-booked.
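
For concreteness, here is the expected-value arithmetic behind those gambles (just multiplying probabilities by lives saved, using the numbers above):

// Expected lives saved for each gamble.
let EV = (probability, livesSaved) => probability * livesSaved

let A = EV(0.10, 10)          // 1 expected life
let B = EV(1e-6, 1e9)         // 1,000 expected lives

// Diluting both by a further 10% scales both expectations by the same factor,
// so a pure expected-value maximizer ranks them the same way at every level of dilution.
let tenPercentOfA = 0.10 * A  // 0.1 expected lives
let tenPercentOfB = 0.10 * B  // 100 expected lives

console.log({ A, B, tenPercentOfA, tenPercentOfB })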

Mundane trouble with EV / utility

On Pascal's mugging specifically, Robert Miles has an interesting YouTube video arguing that AI Safety is not a Pascal's mugging, which the OP might be interested in.

Mundane trouble with EV / utility

1 & 2 might be normally be answered by the Von Neumann–Morgenstern utility theorem*

In the case you mentioned, you can try to estimate the impact of an education throughout the beneficiaries' lives. I'd expect it to mostly be an increase in future wages, but also some other positive externalities. Then you look at the willingness to trade time for money, or the willingness to trade years of life for money, or the goodness and badness of life at different earning levels, and you come up with a (very uncertain) comparison.
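
As a sketch of what that kind of comparison might look like, here is a toy back-of-the-envelope calculation; all the inputs (cost, number of students, wage increase, discount rate) are placeholders I made up, not estimates:

// Back-of-the-envelope sketch; all inputs are hypothetical placeholders.
let costOfProgram = 100000      // $ spent on the education program
let beneficiaries = 500         // students reached
let wageIncreasePerYear = 50    // $ extra earned per beneficiary per year
let workingYears = 40
let discountRate = 0.04         // annual discount rate

// Discounted lifetime wage gain per beneficiary.
let lifetimeGain = 0
for (let t = 1; t <= workingYears; t++) {
  lifetimeGain += wageIncreasePerYear / (1 + discountRate) ** t
}

let totalBenefit = lifetimeGain * beneficiaries
let benefitPerDollar = totalBenefit / costOfProgram
console.log({ lifetimeGain, totalBenefit, benefitPerDollar })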

If you want to look at an example of this, you might want to look at GiveWell's evaluations in general, or at their evaluation of deworming charities in particular.

I hope that's enough to point you to some directions which might answer your questions.

* But e.g., for negative utilitarians, axioms 3 and 3' wouldn't apply in general (because they prefer to avoid suffering infinitely more than promoting happiness, i.e. consider L = some suffering, M = non-existence, N = some happiness), but they would still apply for the particular case where they're trading off between different quantities of suffering. In any case, even if negative utilitarians would represent the world with two points (total suffering, total happiness), they still have a way of comparing possible worlds (choose the one with the least suffering, then the one with the most happiness if suffering is equal).
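
For concreteness, that lexicographic comparison between worlds could be sketched like this (my own illustration, not a canonical formalization):

// Each world is represented as two numbers: (totalSuffering, totalHappiness).
// A negative utilitarian of this kind first minimizes suffering, and only
// uses happiness to break ties.
let compareWorlds = (a, b) => {
  if (a.totalSuffering !== b.totalSuffering) {
    return a.totalSuffering - b.totalSuffering // less suffering ranks first
  }
  return b.totalHappiness - a.totalHappiness   // then more happiness
}

let worlds = [
  { name: "L", totalSuffering: 10, totalHappiness: 0 },
  { name: "M", totalSuffering: 0,  totalHappiness: 0 },
  { name: "N", totalSuffering: 0,  totalHappiness: 10 },
]
console.log(worlds.sort(compareWorlds).map(w => w.name)) // [ 'N', 'M', 'L' ]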

New Top EA Causes for 2021?

This isn't exactly a proposal for a new cause area, but I've felt that EA organizations are confusingly named. So I'm proposing some name-swaps:

  • Probably Good should now be called "80,000 hours". Since 80,000 hours explicitly moved towards a more longtermist direction, it has abandoned some of its initial relationship to its name, and Probably Good seems to be picking up some of that slack.
  • "80,000 hours should be renamed to "Center for Effective Altruism" (CEA). Although technically a subsidiary, 80,000 hours reaches more people than CEA, and produces more research. This change in name would reflect its de-facto leadership position in the EA community.
  • The Center for Effective Altruism should rebrand to "EA Infrastructure Fund", per CEA's strategic focus on events, local groups and the EA Forum, and on providing infrastructure for community building more generally.
  • However, this leaves the "EA Infrastructure Fund" without a name. I think the main desideratum for a name is basically prestige, and so I suggest "Future of Humanity Institute", which sounds suitably ominous. Further, the association with Oxford might lead more applicants to apply and to accept a lower salary (since status and monetary compensation are fungible), making the fund more cost-effective.
  • Fortunately, the Global Priorities Institute (GPI) recently determined that helping factory farmed animals is the most pressing priority, and that we never cared that much about humans in the first place. This leaves a bunch of researchers at the Future of Humanity Institute and at the recently disbanded Global Priorities Institute unemployed, but Animal Charity Evaluators is offering them paid junior researcher positions. To reflect its status as the indisputable global priority, Animal Charity Evaluators should consider changing its name to "Doing Good Better".
  • To enable this last change and to avoid confusion, Doing Good Better would have to be put out of print.

I estimate that having better names only has a small or medium impact, but that tractability is sky-high. No comment on neglectedness. 

What do you blokes think?

Report on Semi-informative Priors for AI timelines (Open Philanthropy)

Random thought on anthropics: 

  • If AGI had been developed early and had been highly dangerous, we wouldn't be around to observe it, so one can't update on not seeing it.
  • Anthropic reasoning might also apply to calculating the base rate of AGI; in the worlds where it existed and was beneficial, one might not be trying to calculate its a priori outside view.