Shortform Content [Beta]

Daniel_Friedrich's Shortform

COVID question: What's the pessimistic case for Omicron? 

My attempt: Supposing there were 0 deaths out of the 56,000 estimated cases and the symptoms are described as mild, we shouldn't worry about direct COVID deaths, even if vaccines, for instance, only work half as well against it. However, it's hard to evaluate its impact on life expectancy for younger people mediated through its long-term symptoms. Young people are significantly more often hospitalized with Omicron, although some blame their low vaccination rates, and experience long COVID more seve... (read more)

Samuel Shadrach's Shortform

Random thought: Could it make sense for GiveDirectly donations to be set up as income share agreements instead?

Basically, if someone is given money by GiveDirectly and their income later crosses some threshold X, then say 10% of their income is claimed, up to the original amount provided by GiveDirectly, with no added interest.
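
The repayment rule described above can be sketched in a few lines. This is a minimal illustration, not a GiveDirectly mechanism; the function name, threshold, and dollar amounts are all made up for the example.

```python
def isa_repayment_due(income: float, threshold: float, rate: float,
                      grant: float, repaid_so_far: float) -> float:
    """Repayment owed this period under the proposed income share agreement:
    a fixed share of income once income crosses the threshold, capped so
    total repayments never exceed the original grant (no added interest)."""
    if income <= threshold:
        return 0.0
    owed = rate * income
    remaining = grant - repaid_so_far
    return min(owed, max(remaining, 0.0))

# Illustrative numbers only: $1,000 grant, 10% share,
# repayments kick in above $5,000/year income.
print(isa_repayment_due(income=8000, threshold=5000, rate=0.10,
                        grant=1000, repaid_so_far=900))  # → 100.0
```

The cap is what makes this "no added interest": once cumulative repayments reach the original grant, nothing further is owed.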

Ozzie Gooen's Shortform

I’m sort of hoping that 15 years from now, a whole lot of common debates quickly get reduced to debates about prediction setups.

“So, I think that this plan will create a boom for the United States manufacturing sector.”

“But the prediction markets say it will actually lead to a net decrease. How do you square that?”

“Oh, well, I think that those specific questions don’t have enough predictions to be considered highly accurate.”

“Really? They have a robustness score of 2.5. Do you think there’s a mistake in the general robustness algorithm?”


Perhaps 10 years ... (read more)

Ozzie Gooen (1d): My guess is that this could be neat, but also pretty tricky. There are lots of "debate/argument" platforms out there, and they seem to have worked out a lot worse than people were hoping. But I'd love to be proven wrong. If "this" means the specific thing you're referring to, I don't think there's really a project for that yet; you'd have to do it yourself. If you're referring to forecasting projects more generally, there are different forecasting jobs and such popping up. Metaculus has been doing some hiring. You could also do academic research in the space. Another option is getting an EA Funds grant and pursuing a specific project (though I realize this is tricky!)

Thanks, this helps.

Samuel Shadrach (1d): A debate platform seems very different from a prediction market with liquidity. As long as you pay sufficient incentives to market-makers, they will spend time figuring out the best prices to quote; their primary motivation is profit (rather than fun or intellectual stimulation). Whoever is paying out these incentives can figure out which cruxes they want resolved and pay on those markets accordingly.

Ozzie Gooen's Shortform

Could/should altruistic activist investors buy lots of Twitter stock, then pressure them to do altruistic things?


So, Jack Dorsey just resigned from Twitter.

Some people on Hacker News are pointing out that Twitter has had recent issues with activist investors, and that this move might make those investors happy.

From a quick look... Twitter stock really hasn't been doing very well. It's almost back at its price in 2014.

Square, Jack Dorsey's other company (he was CEO of two), has done much better.... (read more)


Coordination is a pain, though; you may be better off appealing to specific high-net-worth (HNW) investors to rally behind the cause. If anyone else is interested, they can buy stock and delegate their votes.

In general, I think there's a case to be made for making it easier to delegate voting rights.

Ozzie Gooen's Shortform

Some musicians have multiple alter-egos that they use to communicate information from different perspectives. MF Doom released albums under several alter-egos; he even used these aliases to criticize his previous aliases.

Some musicians, like Madonna, just continued to "re-invent" themselves every few years.

YouTube personalities often feature themselves dressed as different characters to represent different viewpoints.

It's really difficult to keep a single understood identity, while also conveying different kinds of information.

Narrow identities ar... (read more)

As someone coming from the crypto space, I think carefully about which identity has what kind of content attached, and whether they can be cross-linked. Both for privacy and engagement purposes. Usernames instead of real names work well for this.

I don't see why researchers or EAs can't do that.

Samuel Shadrach's Shortform

What do EAs think about AI surveillance tech dismantling democracies?

Ozzie Gooen's Shortform

When discussing forecasting systems, sometimes I get asked,

“If we were to have much more powerful forecasting systems, what, specifically, would we use them for?”

The obvious answer is,

“We’d first use them to help us figure out what to use them for”


“Powerful forecasting systems would be used, at first, to figure out what to use powerful forecasting systems on”

For example,

  1. We make a list of 10,000 potential government forecasting projects.
  2. For each, we will have a later evaluation for “how valuable/successful was this project?”.
  3. We then open forecasting ques
... (read more)
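
The prioritization loop in the numbered steps above might be sketched as follows. The project names and forecasted scores are entirely made up; the point is just that forecasts of each project's eventual evaluation let you rank projects before committing resources.

```python
# Minimal sketch, with hypothetical data: each candidate project gets a
# forecast of its eventual "how valuable/successful was this?" evaluation,
# and we pursue the top-forecasted projects first.
projects = {
    "project_a": 0.9,  # forecasted evaluation score (illustrative)
    "project_b": 0.4,
    "project_c": 0.7,
}

ranked = sorted(projects, key=projects.get, reverse=True)
print(ranked)  # → ['project_a', 'project_c', 'project_b']
```

In the full proposal the list would run to thousands of projects, with each forecast eventually scored against the real post-hoc evaluation.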
Ozzie Gooen's Shortform

One futarchy/prediction market/coordination idea I have is to find some local governments and see if we could help them out by incorporating some of the relevant techniques.

This could be neat if it could be done as a side project. Right now effective altruists/rationalists don't actually have many great examples of side projects, and historically, "the spare time of particularly enthusiastic members of a jurisdiction" has been a major factor in improving governments.

Berkeley and London seem like natural choices given the communities there. I imagine it cou... (read more)

Ozzie Gooen's Shortform

The following things could both be true:

1) Humanity has a >80% chance of completely perishing in the next ~300 years.

2) The expected value of the future is incredibly, ridiculously, high!

The trick is that the expected value of a positive outcome could be just insanely great. Like, dramatically, incredibly, totally better than basically anyone discusses or talks about.

Expanding to a great deal of the universe, dramatically improving our abilities to convert matter+energy to net well-being, researching strategies to expand out of the universe.

A 20%, or e... (read more)
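
The expected-value arithmetic behind points (1) and (2) can be made explicit. The specific payoff number here is a stand-in, not a claim from the post; only the ~20% survival probability comes from the text above.

```python
# Sketch of the expected-value point above; the payoff magnitude is illustrative.
p_survival = 0.2             # 1 - (>80% chance of perishing)
value_if_flourishing = 1e30  # stand-in for an astronomically good long-term future
value_if_perished = 0.0

expected_value = (p_survival * value_if_flourishing
                  + (1 - p_survival) * value_if_perished)
print(expected_value)  # astronomically high despite the 80% extinction risk
```

Even a small survival probability multiplied by an astronomically large payoff yields an enormous expected value, which is why both statements can hold at once.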

Ozzie Gooen's Shortform

Opinions on charging for professional time?

(Particularly in the nonprofit/EA sector)

I've been getting more requests recently to have calls/conversations to give advice, review documents, or be part of extended sessions on things. Most of these have been from EAs.

I find a lot of this work fairly draining. There can be surprisingly high fixed costs to having a meeting. It often takes some preparation, some arrangement (and occasional re-arrangement), and a fair bit of mix-up and change throughout the day.

My main work requires a lot of focus, so the context s... (read more)

Ozzie Gooen's Shortform

On AGI (Artificial General Intelligence):

I have a bunch of friends/colleagues who are either trying to slow AGI down (by stopping arms races) or align it before it's made (and would much prefer it be slowed down).

Then I have several friends who are actively working to *speed up* AGI development. (Normally just regular AI, but often specifically AGI)[1]

Then there are several people who are apparently trying to align AGI, but who are also effectively speeding it up, but they claim that the trade-off is probably worth it (to highly varying degrees of plausibi... (read more)

Stefan_Schubert's Shortform

I think that some EAs focus a bit too much on sacrifices in terms of making substantial donations (as a fraction of their income), relative to sacrifices such as changing what cause they focus on or what they work with. The latter often seem both higher impact and less demanding (though it depends a bit). So it seems that one might want to emphasise the latter a bit more, and the former a bit less, relatively speaking. And if so, one would want to adjust EA norms and expectations accordingly.

Aaron_Scher's Shortform

Progressives might be turned off by the phrasing of EA as "helping others." Here's my understanding of why. Speaking anecdotally from my ongoing experience as a college student in the US, mutual aid is getting tons of support among progressives these days. Mutual aid involves members of a community asking for assistance (often monetary) from their community, and the community helping out. This is viewed as a reciprocal relationship in which different people will need help with different things and at different times from one another, so you help out when y... (read more)

Linch's Shortform

What are the best arguments for/against the hypothesis that (with ML) slightly superhuman unaligned systems can't recursively self-improve without solving large chunks of the alignment problem?

Like naively, the primary way that we make stronger ML agents is via training a new agent, and I expect this to be true up to the weakly superhuman regime (conditional upon us still doing ML).

Here's the toy example I'm thinking of, at the risk of anthropomorphizing too much: Suppose I'm Clippy von Neumann, an ML-trained agent marginally smarter than all humans, but nowh... (read more)

Buck (3d): How do you know whether you're happy with the results?

I agree that's a challenge and I don't have a short answer. The part I don't buy is that you have to understand the neural net numbers very well in some "theoretical" sense (i.e. without doing experiments), and that's a blocker for recursive improvement. I was mostly just responding to that.

That being said, I would be pretty surprised if "you can't tell what improvements are good" was a major enough blocker that you wouldn't be able to significantly accelerate recursive improvement. It seems like there are so many avenues for making progress:

  • You can medita
... (read more)
Linch (3d): Okay, now I'm back to being confused.

tessa's Shortform

While making several review crossposts for the Decade Review, I found myself unhappy about the possibility that someone might think I had authored one of the posts I was cross-linking. Here are the things I ended up doing:

  1. Make each post a link post (this one seems... non-optional).
  2. In the title of the post, add the author / blog / organization's name before the post title, separated by an en-dash.
    • Why before the title? This ensures that the credit appears even if the title is long and gets cut off.
    • Why an en-dash? Some of the posts I was linking alrea
... (read more)
Samuel Shadrach's Shortform

I have voted for two posts in the decadal review prelim thingie.

9 votes

4 votes

Seems to me like perspectives I strongly agree with, but not everyone in the EA community does.

TianyiQ's Shortform

One doubt on superrationality:

(I guess similar discussions must have happened elsewhere, but I can't find them. I am new to decision theory and superrationality, so my thinking may very well be wrong.)

First I present an inaccurate summary of what I want to say, to give a rough idea:

  • The claim that "if I choose to do X, then my identical counterpart will also do X" seems to imply there is no free will (though it doesn't necessarily; see the example for details). But if we indeed assume determinism, then no decision theory is practically meaningful.

Then I shall e... (read more)

After writing this down, I'm seeing a possible response to the argument above:

  • If we observe that Alice and Bob had, in the past, made similar decisions under equivalent circumstances, then we can infer that:
    • There's an above-baseline likelihood that Alice and Bob have similar source codes, and
    • There's an above-baseline likelihood that Alice and Bob have correlated sources of randomness.
    • (where the "baseline" refers to our prior)


  • It still rests on the non-trivial metaphysical claim that different "free wills" (i.e. different sources of randomness)
... (read more)
HaukeHillebrandt's Shortform

Likelihood of nuclear winter
Two recent 80k podcasts [1, 2] deal with nuclear winter (EA wiki link). One episode discusses bias in nuclear winter research (link to section in transcript). The modern case for nuclear winter is based on modelling by Robock, Toon, et al. (e.g. see them being acknowledged here). Some researchers have criticized them, suggesting the nuclear winter hypothesis is implausible and that the research is biased and has been instrumentalized for political reasons (e.g. paper paper, citation trail of recent modelling work out of L... (read more)

Related: a new audiobook of 'Hacking the Bomb', on cyber nuclear security.

Jamie_Harris's Shortform

How did Nick Bostrom come up with the "Simulation argument"*? 

Below is an answer Bostrom gave in 2008. (Though note, Pablo shares a comment below that Bostrom might be misremembering this, and he may have taken the idea from Hans Moravec.)

"In my doctoral work, I had studied so-called self-locating beliefs and developed the first mathematical theory of observation selection effects, which affects such beliefs. I had also for many years been thinking a lot about future technological capabilities and their possible impacts on humanity. If one combines th... (read more)

Pablo (8d): Note that Hans Moravec, an Austrian-born roboticist, came up with essentially the same idea back in the 1990s. Bostrom was very familiar with Moravec's work, so it's likely he encountered it prior to 2003, but then forgot it by the time he made his rediscovery.

Oh, nice, thanks very much for sharing that. I've cited Moravec in the same research report that led me to the Bostrom link I just shared, but hadn't seen that article and didn't read Mind Children fully enough to catch that particular idea.

HaukeHillebrandt (7d): It's quite common: "Cryptomnesia occurs when a forgotten memory returns without its being recognized as such by the subject, who believes it is something new and original. It is a memory bias whereby a person may falsely recall generating a thought, an idea, a tune, a name, or a joke, not deliberately engaging in plagiarism but rather experiencing a memory as if it were a new inspiration."