NunoSempere's Shortform

by NunoSempere, 22nd Mar 2020

Reasons why upvotes on the EA forum and LW don't correlate that well with impact.

  1. More easily accessible content, or more introductory material gets upvoted more.
  2. Material which gets shared more widely gets upvoted more.
  3. Content which is more prone to bikeshedding gets upvoted more.
  4. Posts that are beautifully written get more upvotes.
  5. Posts written by better-known authors get more upvotes (once you've seen this, you can't unsee it).
  6. The time at which a post is published affects how many upvotes it gets.
  7. Other random factors, such as whether other strong posts are published at the same time, also affect the number of upvotes.
  8. Not all projects are conducive to having a post written about them.
  9. The function from value to upvotes is concave (e.g., like a logarithm or a square root): a project which results in a post with 100 upvotes is probably more than 5 times as valuable as 5 posts with 20 upvotes each (see the sketch after this list). This is what you'd expect if the supply of upvotes were limited.
  10. Upvotes suffer from inflation as the EA forum becomes more populated, so that a post which would have gathered 50 upvotes two years ago might gather 100 upvotes now.
  11. Upvotes may not take into account the relationship between projects, or other indirect effects. For example, projects which contribute to existing agendas are probably more valuable than otherwise equal standalone projects, but this might not be obvious from the text.
  12. ...
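A minimal numerical sketch of point 9, assuming a square-root relationship between value and upvotes (the specific functional form is my assumption, chosen only to illustrate concavity):

# Toy model for point 9: suppose upvotes grow like the square root of value.
# Then the value implied by a karma score grows like upvotes squared.

def implied_value(upvotes):
    # Invert upvotes = sqrt(value)  =>  value = upvotes ** 2
    return upvotes ** 2

one_big_post = implied_value(100)         # 10,000 value units
five_small_posts = 5 * implied_value(20)  # 5 * 400 = 2,000 value units

print(one_big_post / five_small_posts)
# 5.0: the 100-upvote post is ~5x as valuable as the five 20-upvote posts
# combined; a more strongly concave function (e.g., a logarithm) gives a
# ratio well above 5.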

I agree that the correlation between number of upvotes on EA forum and LW posts/comments and impact isn't very strong. (My sense is that it's somewhere between weak and strong, but not very weak or very strong.) I also agree that most of the reasons you list are relevant.

But how I'd frame this is that - for example - a post being more accessible increases the post's expected upvotes even more than it increases its expected impact. I wouldn't say "Posts that are more accessible get more upvotes, therefore the correlation is weak", because I think increased accessibility will indeed increase a post's impact (holding other factors constant).

Same goes for many of the other factors you list. 

E.g., more sharing tends to both increase a post's impact (more readers means more opportunity to positively influence people) and signal that the post would have a positive impact on each reader (as that is one factor - among many - in whether people share things). So the mere fact that sharing probably tends to increase upvotes to some extent doesn't necessarily weaken the correlation between upvotes and impact. (Though I'd guess that sharing does increase upvotes more than it increases/signals impact, so this comment is more like a nitpick than a very substantive disagreement.)

To make it clear, the claim is that the karma score of a forum post about a project does not correlate well with the project's direct impact? Rather than, say, that the karma score of a post correlates well with the impact of the post itself on the community?

I'd say it also doesn't correlate that well with its total (direct+indirect) impact either, but yes. And I was thinking more in contrast to the karma score being an ideal measure of total impact; I don't have thoughts to share here on the impact of the post itself on the community.

Thanks, that makes sense. 

For my part, I upvote according to how valuable I think a post itself is, for me or for the community as a whole. At least, that's what I try to do when I'm thinking about it logically.

What happened in forecasting in March 2020

Epistemic status: Experiment. Somewhat parochial.

Prediction platforms.

  • Foretold has two communities: one on Active Coronavirus Infections and one on general COVID questions.
  • Metaculus brings us the Li Wenliang prize series for forecasting the COVID-19 outbreak, as well as the Lockdown series and many other pandemic questions.
  • PredictIt: The odds of Trump winning the 2020 election remain close to 50%, oscillating between 45% and 57%.
  • The Good Judgment Project has a selection of interesting questions, which aren't available unless one is a participant. A sample below (crowd forecasts in parentheses):
    • Will the UN declare that a famine exists in any part of Ethiopia, Kenya, Somalia, Tanzania, or Uganda in 2020? (60%)
    • In its January 2021 World Economic Outlook report, by how much will the International Monetary Fund (IMF) estimate the global economy grew in 2020? (Less than 1.5%: 94%, Between 1.5% and 2.0%, inclusive: 4%)
    • Before 1 July 2020, will SpaceX launch its first crewed mission into orbit? (22%)
    • Before 1 January 2021, will the Council of the European Union request the consent of the European Parliament to conclude a European Union-United Kingdom trade agreement? (25%)
    • Will Benjamin Netanyahu cease to be the prime minister of Israel before 1 January 2021? (50%)
    • Before 1 January 2021, will there be a lethal confrontation between the national military or law enforcement forces of Iran and Saudi Arabia either in Iran or at sea? (20%)
    • Before 1 January 2021, will a United States Supreme Court seat be vacated? (No: 55%, Yes, and a replacement Justice will be confirmed by the Senate before 1 January 2021: 25%, Yes, but no replacement Justice will be confirmed by the Senate before 1 January 2021: 20%)
    • Will the United States experience at least one quarter of negative real GDP growth in 2020? (75%)
    • Who will win the 2020 United States presidential election? (The Republican Party nominee: 50%, The Democratic Party nominee: 50%, Another candidate: 0%)
    • Before 1 January 2021, will there be a lethal confrontation between the national military forces of Iran and the United States either in Iran or at sea? (20%)
    • Will Nicolas Maduro cease to be president of Venezuela before 1 June 2020? (10%)
    • When will the Transportation Security Administration (TSA) next screen two million or more travelers in a single day? (Not before 1 September 2020: 66%, Between 1 August 2020 and 31 August 2020: 17%, Between 1 July 2020 and 31 July 2020: 11%, Between 1 June 2020 and 30 June 2020: 4%, Before 1 June 2020: 2%)

Misc.

  • The Brookings Institution, on Forecasting energy futures amid the coronavirus outbreak
  • The European Statistical Service is "a partnership between Eurostat and national statistical institutes or other national authorities in each European Union (EU) Member State responsible for developing, producing and disseminating European statistics". In this time of need, the ESS brings us inane information, like "consumer prices increased by 0.1% in March in Switzerland".
  • Famine: The famine early warning system gives emergency and crisis warnings for East Africa.
  • COVID: Everyone and their mother have been trying to predict the future of COVID. One such initiative is Epidemic forecasting, which uses inputs from the above mentioned prediction platforms.
  • On LessWrong, Assessing Kurzweil's 1999 predictions for 2019; based on my own investigations I expect an accuracy of between 30% and 40%, but I find the idea of crowdsourcing the assessment rather interesting.

Prizes in the EA Forum and LW.

I was looking at things other people had tried before.

EA Forum

How should we run the EA Forum Prize?

Cause-specific Effectiveness Prize (Project Plan)

Announcing Li Wenliang Prize for forecasting the COVID-19 outbreak

Announcing the Bentham Prize

$100 Prize to Best Argument Against Donating to the EA Hotel

Essay contest: general considerations for evaluating small-scale giving opportunities ($300 for winning submission)

Cash prizes for the best arguments against psychedelics being an EA cause area

Debrief: "cash prizes for the best arguments against psychedelics"

A black swan energy prize

AI alignment prize winners and next round

$500 prize for anybody who can change our current top choice of intervention

The Most Good - promotional prizes for EA chapters from Peter Singer, CEA, and 80,000 Hours

LW (on the last 5000 posts)

Over $1,000,000 in prizes for COVID-19 work from Emergent Ventures

The Dualist Predict-O-Matic ($100 prize)

Seeking suggestions for EA cash-prize contest

Announcement: AI alignment prize round 4 winners

A Gwern comment on the Prize literature

[prize] new contest for Spaced Repetition literature review ($365+)

[Prize] Essay Contest: Cryonics and Effective Altruism

Announcing the Quantified Health Prize

Oops Prize update

Some thoughts on: https://groups.google.com/g/lw-public-goods-team

AI Alignment Prize: Round 2 due March 31, 2018

Quantified Health Prize results announced

FLI awards prize to Arkhipov’s relatives

Progress and Prizes in AI Alignment

Prize for probable problems

Prize for the best introduction to the LessWrong source ($250)

How to replicate.

Go to the EA forum API or to the LW API and input the following query:


{
      posts(input: {
        terms: {
          # view: "top"
          meta: null  # this seems to get both meta and non-meta posts
          after: "10-1-2000"
          before: "10-11-2020" # or some date in the future
        }
      }) {
        results {
          title
          url
          pageUrl
          createdAt
        }
      }
}

Copy the output into a file, e.g., last5000posts.txt.

Search for the keyword "prize". On Linux one can do this with grep -i "prize" last5000posts.txt (the -i makes the search case-insensitive, so that titles containing "Prize" are also caught), or with grep -i -B 1 "prize" last5000posts.txt | sed 's/^.*: //' | sed 's/\"//g' > last500postsClean.txt to produce a cleaner output.
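The same pipeline can also be scripted end to end; below is a minimal sketch in Python, where the endpoint URL and the case-insensitive matching are my assumptions (use the corresponding EA Forum endpoint for EA Forum posts):

# Sketch: fetch posts from the forum's GraphQL API and list those whose
# title mentions "prize". The endpoint URL below is an assumption.
import requests

ENDPOINT = "https://www.lesswrong.com/graphql"  # assumed LW endpoint

QUERY = """
{
  posts(input: {terms: {meta: null, after: "10-1-2000", before: "10-11-2020"}}) {
    results {
      title
      pageUrl
      createdAt
    }
  }
}
"""

response = requests.post(ENDPOINT, json={"query": QUERY})
response.raise_for_status()
posts = response.json()["data"]["posts"]["results"]

# Case-insensitive match, analogous to grep -i "prize"
prize_posts = [p for p in posts if "prize" in p["title"].lower()]

for post in prize_posts:
    print(post["title"], "-", post["pageUrl"])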

Can't believe I forgot the D-Prize, which awards $20,000 USD for teams to distribute proven poverty interventions.

The Stanford Social Innovation Review makes the case (archive link) that new, promising interventions are almost never scaled up by already established, big NGOs.

I suppose I just assumed that scale-ups happened regularly at big NGOs, and I never bothered to look closely enough to notice that they didn't. I find this very surprising.

Quality Adjusted Research Papers

Taken from here, but I want to be able to refer to the idea by itself. 

This spans six orders of magnitude (1 to 1,000,000 mQ), but I do find that my intuitions agree with the relative values, i.e., I would probably sacrifice each example for 10 equivalents of the preceding type (and vice-versa).

A unit — even if it is arbitrary or ad hoc — makes relative comparisons easier, because projects can be compared to a reference point rather than to each other. It also makes working with different orders of magnitude easier: instead of asking how valuable a blog post is compared to a foundational paper, one can move up and down in steps of 10x, which seems much more manageable.
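As a toy illustration of working in steps of 10x (the mQ values below are made up, not taken from the original table):

import math

# Hypothetical mQ values, purely for illustration.
blog_post_mQ = 10
foundational_paper_mQ = 100_000

steps_of_10x = math.log10(foundational_paper_mQ / blog_post_mQ)
print(steps_of_10x)
# 4.0: the paper sits four 10x steps above the blog post, which is easier to
# reason about than a raw 10,000x ratio.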

CoronaVirus and Famine

The Good Judgment Open forecasting tournament gives a 66% chance to "Will the UN declare that a famine exists in any part of Ethiopia, Kenya, Somalia, Tanzania, or Uganda in 2020?"

I think that the 66% is a slight overestimate. But nonetheless, if a famine does hit, it would be terrible, as other countries might not be able to spare enough attention due to the current pandemic.

  1. https://ourworldindata.org/what-does-a-famine-declaration-declare
  2. https://fews.net/
  3. https://www.gjopen.com/questions/1559-will-the-un-declare-that-a-famine-exists-in-any-part-of-ethiopia-kenya-somalia-tanzania-or-uganda-in-2020 (registration needed to see)

It is not clear to me what an altruist who realizes the following can do, as an individual:

  • A famine is likely to hit this region (but hasn't hit yet)
  • It is likely to be particularly bad.

Donating to the World Food Programme, which is already doing work on the matter, might be a promising answer, but I haven't evaluated the programme, nor compared it to other potentially promising options (see here: https://forum.effectivealtruism.org/posts/wpaZRoLFJy8DynwQN/the-best-places-to-donate-for-covid-19, or https://www.againstmalaria.com/).

Did you mean to post this using the Markdown editor? Currently, the formatting looks a bit odd from a reader's perspective.

Ethiopia's Tigray region has seen famine before: why it could happen again - The Conversation Africa

https://theconversation.com/ethiopias-tigray-region-has-seen-famine-before-why-it-could-happen-again-150181

Tue, 17 Nov 2020 13:38:00 GMT

 

The Tigray region is now seeing armed conflict. I'm at 5-10%+ that this develops into famine (regardless of whether it ends up meeting the rather stringent UN conditions for the term to be used), though I have yet to actually look into the base rate. I've sent an email to FEWS NET to see if they update their forecasts.

Excerpt from "Chapter 7: Safeguarding Humanity" of Toby Ord's The Precipice, copied here for later reference. h/t Michael A.

SECURITY AMONG THE STARS?

Many of those who have written about the risks of human extinction suggest that if we could just survive long enough to spread out through space, we would be safe—that we currently have all of our eggs in one basket, but if we became an interplanetary species, this period of vulnerability would end. Is this right? Would settling other planets bring us existential security?

The idea is based on an important statistical truth. If there were a growing number of locations which all need to be destroyed for humanity to fail, and if the chance of each suffering a catastrophe is independent of whether the others do too, then there is a good chance humanity could survive indefinitely.

But unfortunately, this argument only applies to risks that are statistically independent. Many risks, such as disease, war, tyranny and permanently locking in bad values are correlated across different planets: if they affect one, they are somewhat more likely to affect the others too. A few risks, such as unaligned AGI and vacuum collapse, are almost completely correlated: if they affect one planet, they will likely affect all. And presumably some of the as-yet-undiscovered risks will also be correlated between our settlements.

Space settlement is thus helpful for achieving existential security (by eliminating the uncorrelated risks) but it is by no means sufficient. Becoming a multi-planetary species is an inspirational project—and may be a necessary step in achieving humanity’s potential. But we still need to address the problem of existential risk head-on, by choosing to make safeguarding our longterm potential one of our central priorities.

Nitpick: I would have written "this argument only applies to risks that are statistically independent" as "this argument applies to a lesser degree if the risks are not statistically independent, roughly in proportion to their degree of correlation." Space colonization still buys you some risk protection if the risks are imperfectly correlated rather than statistically independent. For example, another planet definitely buys you at least some protection from absolute tyranny (even if tyranny in one place is correlated with tyranny elsewhere). A toy numerical sketch below.
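A minimal sketch of that point; both the mixture model and the numbers are my own illustration, not from the book:

# Probability that a catastrophe destroys BOTH of two settlements, as a
# function of how correlated the per-settlement risks are. Assumed model:
# with probability rho the settlements share a common fate; otherwise they
# fail independently.

p = 0.1  # assumed per-settlement catastrophe probability

def p_both_fail(rho):
    return rho * p + (1 - rho) * p ** 2

for rho in [0.0, 0.5, 1.0]:
    print(rho, p_both_fail(rho))
# 0.0 -> 0.01   (independent risks: the second settlement helps a lot)
# 0.5 -> 0.055  (imperfect correlation: it still helps, but less)
# 1.0 -> 0.1    (perfect correlation: the second settlement buys nothing)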

Here is a more cleaned up — yet still very experimental — version of a rubric I'm using for the value of research:

Expected

  • Probabilistic
    • % of producing an output which reaches goals
      • Past successes in area
      • Quality of feedback loops
      • Personal motivation
    • % of being counterfactually useful
      • Novelty
      • Neglectedness
  • Existential
    • Robustness: Is this project robust under different models?
    • Reliability: If this is a research project, how much can we trust the results?

Impact

  • Overall promisingness (intuition)
  • Scale: How many people affected
  • Importance: How important for each person
  • (Proxies of impact):
    • Connectedness
    • Engagement
    • De-confusion
    • Direct applicability
    • Indirect impact
      • Career capital
      • Information value

Per Unit of Resources

  • Personal fit
  • Time needed
  • Funding needed
  • Logistical difficulty

See also: Charity Entrepreneurship's rubric, geared towards choosing which charity to start.
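Purely as an illustration of the rubric's shape, it could be encoded as a nested checklist in code; the scores below are made-up placeholders:

# Hypothetical encoding of the rubric above; every score (0-10) is a placeholder.
rubric_scores = {
    "expected": {
        "probabilistic": {
            "chance_output_reaches_goals": 6,     # past successes, feedback loops, motivation
            "chance_counterfactually_useful": 7,  # novelty, neglectedness
        },
        "existential": {"robustness": 5, "reliability": 6},
    },
    "impact": {
        "overall_promisingness": 7,
        "scale": 4,
        "importance": 6,
    },
    "per_unit_of_resources": {
        "personal_fit": 8,
        "time_needed": 5,
        "funding_needed": 9,
        "logistical_difficulty": 7,
    },
}

def leaves(d):
    # Walk the nested dictionary and yield the leaf scores.
    for v in d.values():
        yield from leaves(v) if isinstance(v, dict) else [v]

scores = list(leaves(rubric_scores))
print(sum(scores) / len(scores))  # crude unweighted average of the leaf scores (about 6.4 here)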

I like it! I think that something in this vein could potentially be very useful. Can you expand more about the proxies of impact?

Sure. So I'm thinking that for impact, you'd have causal factors (scale, importance, relation to other work, etc.). But then you'd also have proxies of impact: things that you intuit correlate well with having an impact even if the relationship isn't causal. For example, having lots of comments praising a project doesn't normally cause the project to have more impact. See here for the kind of thing I'm going for.

If one takes Toby Ord's x-risk estimates (from here) but adds some uncertainty, one gets this Guesstimate. Total x-risk ranges from 0.1 to 0.3, with a point estimate of 0.19, or roughly 1 in 5 (vs 1 in 6 in the book).

I personally would add more probability to unforeseen natural risk and unforeseen anthropogenic risk.

The uncertainty regarding AI risk is driving most of the overall uncertainty.
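A minimal Monte Carlo sketch of that kind of aggregation outside Guesstimate; the per-risk point estimates and the lognormal spread below are illustrative assumptions, not the values in the linked model:

# Put uncertainty around per-risk point estimates and aggregate into a total
# existential-risk estimate. All numbers below are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

point_estimates = {
    "unaligned_ai": 0.10,
    "engineered_pandemics": 0.033,
    "other_anthropogenic": 0.05,
    "natural": 0.001,
}

samples = {}
for name, p in point_estimates.items():
    # Lognormal noise around each point estimate, clipped to [0, 1].
    samples[name] = np.clip(p * rng.lognormal(mean=0.0, sigma=0.5, size=N), 0, 1)

# Aggregate, simplistically assuming the risks are independent.
total = 1 - np.prod([1 - s for s in samples.values()], axis=0)

print(np.percentile(total, [5, 50, 95]))  # rough 90% range and median of total x-risk

In this toy version the AI term dominates the spread of the total simply because it is the largest, which echoes the observation above.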

2020 U.S. Presidential election to be most expensive in history, expected to cost $14 billion - The Hindu

https://www.thehindu.com/news/international/2020-us-presidential-election-to-be-most-expensive-in-history-expected-to-cost-14-billion/article32969375.ece

Thu, 29 Oct 2020 03:17:43 GMT

Testing shortform
