This is a special post for quick takes by NunoSempere. Only they can create top-level comments. Comments here also appear on the Quick Takes page and All Posts page.

Current takeaways from the 2024 US election <> forecasting community.

First section in Forecasting newsletter: US elections, posting here because it has some overlap with EA.

  1. Polymarket beat legacy institutions at processing information, in real time and in general. It was just much faster at calling states, and more confident earlier on the correct outcome.
  2. The OG prediction markets community, the community which has been betting on politics and growing its bankroll since PredictIt, was on the wrong side of 50%—1, 2, 3, 4, 5. It was the democratic, open-to-all nature of the platform, and in particular the Frenchman who was convinced that mainstream polls were pretty tortured and bet ~$45M, that moved Polymarket to the right side of 50/50.
  3. Polls seem like a garbage in garbage out kind of situation these days. How do you get a representative sample? The answer is maybe that you don't.
  4. Polymarket will live. They were useful to the Trump campaign, which has a much warmer perspective on crypto. The federal government isn't going to prosecute them, nor bettors. Regulatory agencies, like the CFTC and the SEC, which have taken such a prominent role in recent editions of this newsletter, don't really matt
... (read more)

I don't have time to write a detailed response now (might later), but wanted to flag that I either disagree or "agree denotatively but object connotatively" with most of these. I disagree most strongly with #3: the polls were quite good this year. National and swing state polling averages were only wrong by 1% in terms of Trump's vote share, or in other words 2% in terms of margin of victory. This means that polls provided a really large amount of information.

(I do think that Selzer's polls in particular are overrated, and I will try to articulate that case more carefully if I get around to a longer response.)

Oh cool, Scott Alexander just said almost exactly what I wanted to say about your #2 in his latest blog post: https://www.astralcodexten.com/p/congrats-to-polymarket-but-i-still

2
NunoSempere
My sense is that the polls were heavily reweighted by demographics, rather than directly sampling from the population. That said, I welcome your nitpicks, even if brief
1
David T
Think I disagree especially strongly with #6. Of all the reasons to think Musk might be a genius, him going all in on 60/40 odds is definitely not one of them. Especially since he could probably have got an invite to Mar-a-Lago and President Trump's ear on business and space policy with a small donation and a generic "love Donald's plans to make American business great again" endorsement, and been able to walk it right back again whenever the political wind was blowing the other way. I don't think he spent his time and much of his fortune to signal-boost catturd tweets out of a calm calculation of which way the political wind was blowing. The biggest and highest-profile donor to the winning side last time round didn't do too well out of it either, and he probably did think he was being clever and calculating. (Lifelong right-winger Thiel's "I think it's 50/50 who will win, but my contrarian view is I also don't think it'll be close" was great hedging of his bets, on the other hand!)
1
NunoSempere
I think this is the wrong way to think about it. From my or your perspective, this might have been 60/40 (or even 40/60). But a more informed actor can have better probabilities.
1
David T
My point isn't that the odds were definitely 60/40 (or in any particular range other than "not a dead cert for Trump and his allies to stay in power for as long as anything matters"). My point was that to gloss Musk's political activity over the last four years as "genius" in a prediction market sense (something even he isn't claiming), you've got to conclude that the most cost-effective way a billionaire entrepreneur and major government contractor could get valuable ROI out of an easily-flattered president with overlapping interests was by buying Twitter and embedding himself in largely irrelevant but vaguely aligned culture war bullshit. This seems... unlikely, and it seems even more unlikely that people wouldn't have been upset enough with the economy to vote Trump without Elon's input. Otherwise it looks like Elon went on a political opinion binge, and this four-year cycle it came up with his cards and not the other lot's cards. Many other people backed Trump in ways which cost them less and will be easier to reconcile with future administrations, and many others will successfully curry favour without even having backed him. Put another way, did you consider the donations of SBF to be genius last time round?
4
NunoSempere
I think there is something powerful about noticing who is winning and trying to figure out what the generators for their actions are. On this specifically: This is not how I see it. Buying Twitter and changing its norms was a surprisingly high-leverage intervention in a domain where turning money into power is notoriously difficult. One of the effects, but not the only one, was influencing the outcome of the 2024 US elections.
3
David T
I think there's something quite powerful about not going all in on a single data point and noting that Musk backed Hillary Clinton in 2016 and when he did endorse the winning side in 2020 he spent most of the next year publicly complaining about the [predictable] COVID policy outcomes. The base rate for Musk specifically and politically-driven billionaires in general picking winners in elections isn't better than pollsters, or even notably better than random chance. Do you honestly believe that Harris (or Biden) would have won if Musk didn't buy Twitter or spend so much time on it?
4
Jason
How much of that do you think was about what the legacy institutions knew vs. what they publicly communicated? The Polymarket hive mind doesn't necessarily care about things like maintaining democratic institutions (like not making calls that could influence elections elsewhere with still-open polls) or long-term individual reputation (like having to walk the Florida call back in 2000). I don't see those as weaknesses.
4
NunoSempere
If you have {publicly competent, publicly incompetent, privately incompetent, privately competent}, we get some information that screens off publicly competent. That leaves the narrower set {publicly incompetent, privately incompetent, privately competent}. So it's still an update. I agree that in this case there is some room for doubt though. Depending on which institutions we are thinking of (Democratic party, newspapers, etc.), we also get some information from the speed at which people decided on/learned that Biden was going to step down.
1
David T
I also frankly don't think they're necessarily as interested in making speedy decisions based on county-level results, which is exactly the sort of real-time stats checking you'd expect prediction market enthusiasts to be great at (obviously trad media does report on county-level counts and ultimately uses them to decide if they're happy to call a race, but they don't have much incentive to be first, and they're tracking and checking a lot of other stuff like politico commentary, rumours, the relevance of human interest stories, and silly voxpops). Throw in the fact that early counts are sometimes skewed in favour of another candidate, which changes around as later voters or postal votes or ballot boxes from further-out districts within a county get tallied up. This varies according to jurisdiction rules and demographics and voting trends, and it's possible serious Polymarket bettors were extremely clued up on them. But this time around, they'd have been just as right about state-level outcomes if much of the money moved on the relatively naive assumption that you couldn't expect any late swings, and not factoring in that possibility would be bad calibration.

Prompted by a different forum:

...as a small case study, the Effective Altruism forum has been impoverished over the last few years by not being lenient with valuable contributors when they had a bad day.

In a few cases, I later learnt that some longstanding user had a mental health breakdown/psychotic break/bipolar something or other. To some extent this is an arbitrary category, and you can interpret going outside normality through the lens of mental health, or through the lens of "this person chose to behave inappropriately". Still, my sense is that leniency would have been a better move when people go off the rails.

In particular, the best move seems to me a combination of:

  • In the short term, when a valued member is behaving uncharacteristically badly, stop them from posting
  • Followup a week or a few weeks later to see how the person is doing

Two factors here are:

  • There is going to be some overlap in that people with propensity for some mental health disorders might be more creative, better able to see things from weird angles, better able to make conceptual connections.
  • In a longstanding online community, people grow to care about others. If a friend goes off the rails, there is the question of how to stop them from causing harm to others, but there is also the question of how to help them be ok, and the second one can just dominate sometimes.

I'm surprised by this. I don't feel like the Forum bans people for a long time for first offences?

4
NunoSempere
I don't think that not banning users for first offences is necessarily the highest bar I want to reach for. For instance, consider this comment. Like, to exaggerate this a bit, imagine receiving that comment in one of the top 3 worst moments of your life.

I value a forum where people are not rude to each other, so I think it is good that moderators give out a warning to people who are becoming increasingly rude in a short time.

4
NunoSempere
A factor here is that EA has historically had a mental health problem (https://forum.effectivealtruism.org/posts/FheKNFgPqEsN8Nxuv/ea-mental-health-survey-results-and-analysis?commentId=hJweNWsmJr8Jtki86).
4
trevor1
At some point in the 90s or the 00s, the "whole of person" concept became popular in the US natsec community for security clearance matters. It distinguishes between a surface-level vibe from a person and trying to understand the whole person. The surface-level vibe literally takes the worst of a person out of context, whereas the whole-person concept means making any effort at all to evaluate the person and the odds that they're good to work with, and in what areas.

Each subject has their own cost-benefit analysis in the context of different work they might do, and more flexible people (e.g. younger) and weirder people will probably have cost-benefit analyses that change somewhat over time. In environments where evaluators are incompetent, lack the resources needed to evaluate each person, or believe that humans can't be evaluated, there's a reasonable justification for ruling people out without making an effort to optimize.

Otherwise, evaluators should strive to make predictions and minimize the gap between their predictions of whether a subject will cause harm again and the reality that comes to pass; for example, putting in any effort at all to distinguish between individuals causing harm due to mental health, harm due to mistakes from unpreventable ignorance (e.g. the PauseAI movement), mistakes caused by ignorance that should have been preventable, harm caused by malice correctly attributed to the subject, harm caused by someone spoofing the point of origin, or harm caused by a hostile individual, team, or force covertly using SOTA divide-and-conquer tactics to disrupt or sow discord in an entire org, movement, or vulnerable clique; see conflict vs mistake theory.

Reasons why upvotes on the EA Forum and LW don't correlate that well with impact.

  1. More easily accessible content, or more introductory material gets upvoted more.
  2. Material which gets shared more widely gets upvoted more.
  3. Content which is more prone to bikeshedding gets upvoted more.
  4. Posts which are beautifully written are more upvoted.
  5. Posts written by better-known authors are more upvoted (once you've seen this, you can't unsee it).
  6. The time at which a post is published affects how many upvotes it gets.
  7. Other random factors, such as whether other strong posts are published at the same time, also affect the number of upvotes.
  8. Not all projects are conducive to having a post written about them.
  9. The function from value to upvotes is concave (e.g., like a logarithm or a square root), in that a project which results in a post with 100 upvotes is probably more than 5 times as valuable as 5 posts with 20 upvotes each. This is what you'd expect if the supply of upvotes were limited.
  10. Upvotes suffer from inflation as the EA Forum gets more populated, so that a post which would have gathered 50 upvotes two years ago might gather 100 upvotes now.
  11. Upvotes may not take into account the relationship
... (read more)
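Point 9 in the list above can be made concrete with a toy model. A minimal sketch, assuming (purely for illustration) that upvotes grow like the square root of underlying value:

```python
def implied_value(upvotes):
    """Invert an illustrative concave value -> upvotes map (upvotes = sqrt(value))."""
    return upvotes ** 2

# Under this toy model, one 100-upvote post carries 5x the value
# of five 20-upvote posts combined, matching the concavity claim.
one_big_post = implied_value(100)          # 10_000 value units
five_small_posts = 5 * implied_value(20)   # 5 * 400 = 2_000 value units
print(one_big_post / five_small_posts)     # 5.0
```

The specific square-root shape is made up; the point is only that any concave map produces this "big posts are undercounted" effect.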
8
MichaelA🔸
I agree that the correlation between the number of upvotes on EA Forum and LW posts/comments and impact isn't very strong. (My sense is that it's somewhere between weak and strong, but not very weak or very strong.) I also agree that most of the reasons you list are relevant. But how I'd frame this is that - for example - a post being more accessible increases the post's expected upvotes even more than it increases its expected impact. I wouldn't say "Posts that are more accessible get more upvotes, therefore the correlation is weak", because I think increased accessibility will indeed increase a post's impact (holding other factors constant). Same goes for many of the other factors you list. E.g., more sharing tends to both increase a post's impact (more readers means more opportunity to positively influence people) and signal that the post would have a positive impact on each reader (as that is one factor - among many - in whether people share things). So the mere fact that sharing probably tends to increase upvotes to some extent doesn't necessarily weaken the correlation between upvotes and impact. (Though I'd guess that sharing does increase upvotes more than it increases/signals impact, so this comment is more like a nitpick than a very substantive disagreement.)
6
EdoArad
To make it clear, the claim is that the karma score for a forum post on a project does not correlate well with the project's direct impact? Rather than, say, that the karma score of a post correlates well with the impact of the post itself on the community?
4
NunoSempere
I'd say it also doesn't correlate that well with its total (direct+indirect) impact either, but yes. And I was thinking more in contrast to the karma score being an ideal measure of total impact; I don't have thoughts to share here on the impact of the post itself on the community.
4
EdoArad
Thanks, that makes sense.  I think that for me, I upvote according to how much I think a post itself is valuable for me or for the community as a whole. At least, that's what I'm trying to do when I'm thinking about it logically.

Open Philanthropy's allocation by cause area

Open Philanthropy’s grants so far, roughly:

This only includes the top 8 areas. "Other areas" refers to grants tagged "Other areas" in OpenPhil's database. So there are around $47M in known donations missing from that graph. There is also one (I presume fairly large) donation amount missing from OP's database, to Impossible Foods.

See also as a tweet and on my blog. Thanks to @tmkadamcz for suggesting I use a bar chart.

6
JamesÖz
One thing I can never figure out is where the missing Open Phil donations are! According to their own internal comms (e.g. this job advert) they gave away roughly $450 million in 2021. Yet when you look at their grants database, you only find about $350 million, which is a fair bit short. Any idea why this might be? I think it could be something to do with contractor agreements (e.g. they gave $2.8 million to Kurzgesagt and said they don't tend to publish similar contractor agreements like these). Curious to see the breakdown of the other approx. $100 million though!
2
Aaron Gertler 🔸
We're still in the process of publishing our 2021 grants, so many of those aren't on the website yet. Most of the yet-to-be-published grants are from the tail end of the year — you may have noticed a lot more published grants from January than December, for example.  That accounts for most of the gap. The gap also includes a few grants that are unusual for various reasons (e.g. a grant for which we've made the first of two payments already but will only publish once we've made the second payment a year from now).  We only include contractor agreements in our total giving figures if they are conceptually very similar to grants (Kurzgesagt is an example of this). Those are also the contractor agreements we tend to publish. In other words, an agreement that isn't published is very unlikely to show up in our total giving figures.
2
NunoSempere
I'm guessing $10M-$50M to something like Impossible Foods, and $50M-$100M to political causes.
2
Aaron Gertler 🔸
We publish our giving to political causes just as we publish our other giving (e.g. this ballot initiative). As with contractor agreements, we publish investments and include them in our total giving if they are conceptually similar to grants (meaning that investments aren't part of the gap James noted).  You can see a list of published investments by searching "investment" in our grants database.
4
RyanCarey
I did a variation on this analysis here: https://github.com/RyanCarey/openphil
2
Chris Leong
Any thoughts on why AI keeps expanding then shrinking? Is it due to 2-year grants?
4
Gavin
Giant grants for new orgs like CSET and Redwood (but overwhelmingly CSET)

Estimates of how long various evaluations took me (in FTEs)

| Title | How long it took me |
| --- | --- |
| Shallow evaluations of longtermist organizations | Around a month, or around three days for each organization |
| External Evaluation of the EA Wiki | Around three weeks |
| 2018-2019 Long-Term Future Fund Grantees: How did they do? | Around two weeks |
| Relative Impact of the First 10 EA Forum Prize Winners | Around a week |
| An experiment to evaluate the value of one researcher's work | Around a week |
| An estimate of the value of Metaculus questions | Around three days |

The recent EA Forum switch to a Creative Commons license (see here) has brought into relief for me that I am fairly dependent on the EA Forum as a distribution medium for my writing.

Partly as a result, I've invested a bit in my blog: <https://nunosempere.com/blog/>, adding comments, an RSS feed, and a newsletter.

Arguably I should have done this years ago. I also see this dependence with social media, where a few people I know depend on Twitter, Instagram &co for the distribution of their content & ideas.

(Edit: added some more thoughts here)

Brief thoughts on my personal research strategy

Sharing a post from my blog: <https://nunosempere.com/blog/2022/10/31/brief-thoughts-personal-strategy/>; I prefer comments there.

Here are a few estimation related things that I can be doing:

  1. In-house longtermist estimation: I estimate the value of speculative projects, organizations, etc.
  2. Improving marginal efficiency: I advise groups making specific decisions on how to better maximize expected value.
  3. Building up estimation capacity: I train more people, popularize or create tooling, create templates and a
... (read more)

What happened in forecasting in March 2020

Epistemic status: Experiment. Somewhat parochial.

Prediction platforms.

  • Foretold has two communities on Active Coronavirus Infections and general questions on COVID.
  • Metaculus brings us the Li Wenliang prize series for forecasting the COVID-19 outbreak, as well as the Lockdown series and many other pandemic questions.
  • PredictIt: The odds of Trump winning the 2020 elections remain at a pretty constant 50%, oscillating between 45% and 57%.
  • The Good Judgment Project has a selection of interesting questions, which aren't available unless one is a participant. A sample below (crowd forecast in parenthesis):
    • Will the UN declare that a famine exists in any part of Ethiopia, Kenya, Somalia, Tanzania, or Uganda in 2020? (60%)
    • In its January 2021 World Economic Outlook report, by how much will the International Monetary Fund (IMF) estimate the global economy grew in 2020? (Less than 1.5%: 94%, Between 1.5% and 2.0%, inclusive: 4%)
    • Before 1 July 2020, will SpaceX launch its first crewed mission into orbit? (22%)
    • Before 1 January 2021, will the Council of the European Union request the consent of the European Parliament to conclude a European Uni
... (read more)

The Stanford Social Innovation Review makes the case (archive link) that new, promising interventions are almost never scaled up by already established, big NGOs.

4
EricHerboso
I suppose I just assumed that scale ups happened regularly at big NGOs and I never bothered to look closely enough to notice that it didn't. I find this very surprising.

Infinite Ethics 101: Stochastic and Statewise Dominance as a Backup Decision Theory when Expected Values Fail

First posted on nunosempere.com/blog/2022/05/20/infinite-ethics-101, and written after one too many encounters with someone who didn't know what to do with infinite expected values.

In Exceeding expectations: stochastic dominance as a general decision theory, Christian Tarsney presents stochastic dominance (to be defined) as a total replacement for expected value as a decision theory. He wants to argue that one decision is only ratio... (read more)

5
Kevin Lacker
You could discount utilons - say there is a "meta-utilon" which is a function of utilons, like maybe meta-utilons = log(utilons). And then you could maximize expected meta-utilons rather than expected utilons. Then I think stochastic dominance is equivalent to saying "better for any non-decreasing meta-utilon function". But you could also pick a single meta-utilon function and I believe the outcome would at least be consistent. Really you might as well call the meta-utilons "utilons" though. They are just not necessarily additive.
2
Charles He
A monotonic transformation like log doesn’t solve the infinity issue right? Time discounting (to get you comparisons between finite sums) doesn’t preserve the ordering over sequences. This makes me think you are thinking about something else?
1
Kevin Lacker
Monotonic transformations can indeed solve the infinity issue. For example, the sum of 1/n doesn't converge, but the sum of 1/n^2 converges, even though x -> x^2 is monotonic.
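Kevin's convergence example can be checked numerically; a minimal sketch (the cutoff of 10^6 terms is an arbitrary illustration):

```python
import math

def partial_sum(f, n):
    """Partial sum of f(k) for k = 1..n."""
    return sum(f(k) for k in range(1, n + 1))

# The harmonic series sum 1/n diverges: its partial sums grow like log(n).
print(partial_sum(lambda k: 1 / k, 10**6))      # keeps growing as n increases

# After the monotonic map x -> x^2 on the terms, sum 1/n^2 converges to pi^2/6.
print(partial_sum(lambda k: 1 / k**2, 10**6))   # close to math.pi**2 / 6
```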

I've been blogging on nunosempere.com/blog/ for the past few months. If you are reading this shortform, you might want to check out my posts there.

Here is an excerpt from a draft that didn't really fit in the main body.

Getting closer to expected value calculations seems worth it even if we can't reach them

Because there are many steps between quantification and impact, quantifying the value of quantification might be particularly hard. That said, each step towards getting closer to expected value calculations seems valuable even if we never arrive at expected value calculations. For example:

  • Quantifying the value of one organization on one unit might be valuable, if the organization aims to do better e
... (read more)

Quality Adjusted Research Papers

Taken from here, but I want to be able to refer to the idea by itself. 

... (read more)

The Tragedy of Calisto and Melibea

https://nunosempere.com/blog/2022/06/14/the-tragedy-of-calisto-and-melibea/

Enter CALISTO, a young nobleman who, in the course of his adventures, finds MELIBEA, a young noblewoman, and is bewitched by her appearance.

CALISTO: Your presence, Melibea, exceeds my 99.9% percentile prediction.

MELIBEA: How so, Calisto?

CALISTO: In that the grace of your form, its presentation and concealment, its motions and ornamentation are to me so unforeseen that they make me doubt my eyes, my sanity, and my forecasting prowess. In that if beau... (read more)

A comment I left on Knightian Uncertainty here.:

The way this finally clicked for me was: Sure, Bayesian probability theory is the one true way to do probability. But you can't actually implement it.

In particular, problems I've experienced are:

- I'm sometimes not sure about my calibration in new domains

- Sometimes something happens that I couldn't have predicted beforehand (particularly if it's very specific), and it's not clear what the Bayesian update should be. Note that I'm talking about "something took me completely by surprise" rather than "something ... (read more)

CoronaVirus and Famine

The Good Judgment Open forecasting tournament gives a 66% chance for the answer to "Will the UN declare that a famine exists in any part of Ethiopia, Kenya, Somalia, Tanzania, or Uganda in 2020?"

I think that the 66% is a slight overestimate. But nonetheless, if a famine does hit, it would be terrible, as other countries might not be able to spare enough attention due to the current pandem

... (read more)
3
Aaron Gertler 🔸
Did you mean to post this using the Markdown editor? Currently, the formatting looks a bit odd from a reader's perspective.
2
NunoSempere
Ethiopia's Tigray region has seen famine before: why it could happen again - The Conversation Africa https://theconversation.com/ethiopias-tigray-region-has-seen-famine-before-why-it-could-happen-again-150181 Tue, 17 Nov 2020 13:38:00 GMT

The Tigray region is now seeing armed conflict. I'm at 5-10%+ that it develops into famine (regardless of whether it ends up meeting the rather stringent UN conditions for the term to be used), but I have yet to actually look into the base rate. I've sent an email to FEWS NET to see if they update their forecasts.

How much would I have to run to lose 20 kilograms?

Originally posted on my blog, @ <https://nunosempere.com/blog/2022/07/27/how-much-to-run-to-lose-20-kilograms/>

In short, from my estimates, I would have to run 70-ish to 280-ish 5km runs, which would take me between half a year and a bit over two years. But my gut feeling is telling me that it would take me twice as long, say, between a year and four.

I came up with that estimate because I was recently doing some exercise and I didn't like the machine's calorie-loss calculations, so I rolled some calcula... (read more)
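For flavor, here is the skeleton of that kind of back-of-the-envelope estimate. The constants (roughly 1 kcal per kg of body weight per km run, roughly 7,700 kcal per kg of body fat) and the body weight are my own illustrative assumptions, not the model from the post, and this deliberately ignores diet, afterburn, and weight loss along the way:

```python
# Back-of-the-envelope: how many 5 km runs to burn off 20 kg of fat?
# All constants are rough, illustrative assumptions.
KCAL_PER_KG_PER_KM = 1.0   # net energy cost of running, per kg body weight per km
KCAL_PER_KG_FAT = 7700     # approximate energy content of a kg of body fat

def runs_needed(weight_kg, kg_to_lose, run_km=5.0):
    kcal_per_run = weight_kg * run_km * KCAL_PER_KG_PER_KM
    return kg_to_lose * KCAL_PER_KG_FAT / kcal_per_run

print(round(runs_needed(weight_kg=90, kg_to_lose=20)))  # hundreds of runs under these assumptions
```

A Fermi model like this is most useful for the order of magnitude, not the exact count; tweaking the constants within plausible ranges moves the answer by a factor of a few.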

Excerpt from "Chapter 7: Safeguarding Humanity" of Toby Ord's The Precipice, copied here for later reference. h/t Michael A.

SECURITY AMONG THE STARS?

Many of those who have written about the risks of human extinction suggest that if we could just survive long enough to spread out through space, we would be safe—that we currently have all of our eggs in one basket, but if we became an interplanetary species, this period of vulnerability would end. Is this right? Would settling other planets bring us existential security?

The idea is based on an important stat... (read more)

6
NunoSempere
Nitpick: I would have written "this argument only applies to risks that are statistically independent" as "this argument applies to a lesser degree if the risks are not statistically independent, and proportional to their degree of correlation." Space colonization still buys you some risk protection if the risks are not statistically independent but imperfectly correlated. For example, another planet definitely buys you at least some protection from absolute tyranny (even if tyranny in one place is correlated with tyranny elsewhere.)
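The imperfect-correlation point can be made concrete for two settlements. A minimal sketch, where the per-settlement destruction probability p and the correlation rho between the two destruction events are made-up parameters: for two Bernoulli(p) events with correlation rho, P(both destroyed) = p² + rho·p·(1−p), which interpolates between p² (independence) and p (perfect correlation).

```python
def p_both_destroyed(p, rho):
    """P(both settlements destroyed) for two Bernoulli(p) risks with correlation rho."""
    return p**2 + rho * p * (1 - p)

# Illustrative numbers: a 10% per-settlement risk.
p = 0.10
print(p_both_destroyed(p, 0.0))  # independence: full benefit of a second planet
print(p_both_destroyed(p, 0.5))  # partial correlation: partial benefit
print(p_both_destroyed(p, 1.0))  # perfect correlation: no benefit at all
```

So space colonization buys protection in proportion to how far the risks fall short of perfect correlation, as the nitpick suggests.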

Here is a more cleaned up — yet still very experimental — version of a rubric I'm using for the value of research:

Expected

  • Probabilistic
    • % of producing an output which reaches goals
      • Past successes in area
      • Quality of feedback loops
      • Personal motivation
    • % of being counterfactually useful
      • Novelty
      • Neglectedness
  • Existential
    • Robustness: Is this project robust under different models?
    • Reliability: If this is a research project, how much can we trust the results?

Impact

  • Overall promisingness (intuition)
  • Scale: How many people affected
  • Importance: How
... (read more)
4
EdoArad
I like it! I think that something in this vein could potentially be very useful. Can you expand more about the proxies of impact?
2
NunoSempere
Sure. So I'm thinking that for impact, you'd have sort of causal factors (Scale, importance, relation to other work, etc.) But then you'd also have proxies of impact, things that you intuit correlate well with having an impact even if the relationship isn't causal. For example, having lots of comments praising some project doesn't normally cause the project to have more impact. See here for the kind of thing I'm going for.

Some thoughts on Turing.jl

Originally published on my blog @ <https://nunosempere.com/blog/2022/07/23/thoughts-on-turing-julia/>

Turing is a cool new probabilistic programming language written on top of Julia. Mostly I just wanted to play around with a different probabilistic programming language, and discard the low-probability hypothesis that the things I am currently doing in Squiggle could be better implemented in it.

My thoughts after downloading it and playing with it a tiny bit are as follows:

1. Installation is annoying: The program is pre... (read more)

Here is a css snippet to make the forum a bit cleaner. <https://gist.github.com/NunoSempere/3062bc92531be5024587473e64bb2984>. I also like ea.greaterwrong.com under the brutalist setting and with maximum width.

Better scoring rules

From SamotsvetyForecasting/optimal-scoring:

This git repository outlines three scoring rules that I believe might serve current forecasting platforms better than current alternatives. The motivation behind it is my frustration with scoring rules as used in current forecasting platforms, like Metaculus, Good Judgment Open, Manifold Markets, INFER, and others. In Sempere and Lawsen, we outlined and categorized how current scoring rules go wrong, and I think that the three new scoring rules I propose avoid the pitfalls outlined in that pape

... (read more)
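For background on what a scoring rule is: it maps a probability forecast and an observed outcome to a reward. A minimal sketch of two standard proper scoring rules (Brier and logarithmic), not of the new rules the repository proposes:

```python
import math

def brier_score(p, outcome):
    """Brier score for forecast p of a binary event (outcome 0 or 1); lower is better."""
    return (p - outcome) ** 2

def log_score(p, outcome):
    """Logarithmic score; higher is better. Punishes confident misses severely."""
    return math.log(p if outcome == 1 else 1 - p)

# A 0.9 forecast on an event that happens scores better than a 0.6 forecast...
print(brier_score(0.9, 1), brier_score(0.6, 1))   # ≈0.01 vs ≈0.16
# ...and a confident miss is punished much harder under the log score.
print(log_score(0.9, 0), log_score(0.6, 0))       # ≈-2.30 vs ≈-0.92
```

Both are "proper" in the sense that reporting your true belief maximizes your expected score; the complaints in Sempere and Lawsen concern other properties, such as incentives around collaboration and extremization on platforms.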
4
Nathan Young
Waaaaiiit I didn't know you could do this!
2
NunoSempere
It was recently done, in collaboration with the Manifold guys. I'm also making sure that the dimensions are right, see: <https://github.com/ForumMagnum/ForumMagnum/pull/6096>
2
NunoSempere
It was just pushed!

How to get into forecasting, and why?

Taken from this answer, written quickly, might iterate.

As another answer mentioned, I have a forecasting newsletter which might be of interest, maybe going through back-issues and following the links that catch your interest could give you some amount of  background information.

For reference works, the Superforecasting book is a good introduction. For the background behind the practice, personally, I would also recommend E.T. Jaynes' Probability Theory, The Logic of Science (find a well-formatted edition, some of t... (read more)

Notes on: A Sequence Against Strong Longtermism

Summary for myself. Note: Pretty stream-of-thought.

Proving too much

  • The set of all possible futures is infinite which somehow breaks some important assumptions longtermists are apparently making.
    • Somehow this fails to actually bother me
  • ...the methodological error of equating made up numbers with real data
    • This seems like a cheap/unjustified shot. In a world where we can calculate the expected values, it would seem fine to compare (wide, uncertain) speculative interventions with hardcore GiveWell data (note that
... (read more)

If one takes Toby Ord's x-risk estimates (from here), but adds some uncertainty, one gets: this Guesstimate. X-risk ranges from 0.1 to 0.3, with a point estimate of 0.19, or 1 in 5 (vs 1 in 6 in the book).

2
NunoSempere
I personally would add more probability to unforeseen natural risk and unforeseen anthropocentric risk
2
NunoSempere
The uncertainty regarding AI risk is driving most of the overall uncertainty.
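A rough way to reproduce this kind of Guesstimate model in code is Monte Carlo sampling; the per-risk uncertainty ranges below are made-up placeholders, not Ord's figures:

```python
import random

def sample_total_xrisk(n=100_000, seed=0):
    """Monte Carlo: combine per-risk uncertainty into a total x-risk estimate."""
    random.seed(seed)
    totals = []
    for _ in range(n):
        # Placeholder uncertainty ranges for each risk category (not Ord's numbers).
        ai = random.uniform(0.02, 0.20)
        bio = random.uniform(0.01, 0.06)
        other = random.uniform(0.01, 0.05)
        # Treating the risks as independent, we survive overall by surviving each one.
        totals.append(1 - (1 - ai) * (1 - bio) * (1 - other))
    totals.sort()
    return totals[len(totals) // 2]  # median total risk

print(sample_total_xrisk())
```

As the comment above notes, with ranges like these the widest input (AI risk) dominates the spread of the output.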

2020 U.S. Presidential election to be most expensive in history, expected to cost $14 billion - The Hindu https://www.thehindu.com/news/international/2020-us-presidential-election-to-be-most-expensive-in-history-expected-to-cost-14-billion/article32969375.ece Thu, 29 Oct 2020 03:17:43 GMT

Testing shortform
