
Linch's Comments

New Top EA Causes for 2020?
During a crisis, people tend to implement the preferred policies of whoever seems to be accurately predicting each phase of the problem

This seems incredibly optimistic.

What are the best arguments that AGI is on the horizon?

I edited that section; let me know if there are any remaining points of confusion!

What are the best arguments that AGI is on the horizon?
Do you include in "People working specifically on AGI" people working on AI safety, or just capabilities?

Just capabilities (in other words, people working to create AGI), although I think the safety/capabilities distinction is less clear-cut outside of a few dedicated safety orgs like MIRI.

"bullish" in the sense of "thinking transformative AI (TAI) is coming soon"

Yes.

what do you mean by "experts not working on AGI"?

AI people who aren't explicitly thinking of AGI when they do their research (I think this correctly describes well over 90% of ML researchers at Google Brain, for example).

Why say "even"?

Because it might be surprising to people asking or reading this question (who may be imagining long timelines) to see timelines as short as the ones AI experts believe; the second point qualifies this by noting that AGI experts believe timelines are even shorter.

In general, it looks like my language choice was more ambiguous than desirable, so I'll edit my answer to be clearer!

Finding equilibrium in a difficult time

I also like this quote:

"I wish it need not have happened in my time," said Frodo.
"So do I," said Gandalf, "and so do all who live to see such times. But that is not for them to decide. All we have to decide is what to do with the time that is given us."

J.R.R. Tolkien, The Fellowship of the Ring

Ask Me Anything!

I think there's some evidence that Metaculus users, while fairly smart and well-informed, are nowhere near as knowledgeable as a fairly informed EA (perhaps including a typical user of this forum?) on the specific questions around existential and global catastrophic risks.

One example I can point to: for this question on climate change and GCR before 2100 (which has been open since October 2018), a single not-very-informative comment from me was enough to move the community median from 24% to 10%. This suggests to me that Metaculus users did not previously have strong evidence or careful reasoning on this question, or perhaps on GCR-related questions in general.

Now you might think that actual superforecasters are better, but based on the comments released so far for COVID-19, I'm unimpressed. In particular, the selected comments point to the use of reference classes that EAs and avid Metaculus users had known to be flawed for over a week before the report came out (e.g., treating China's low death toll as evidence that other countries can easily replicate it as the default scenario).

Now, COVID-19 is not an existential risk or a GCR, but it is an "out of distribution" problem showing clear and fast exponential growth, which seems unusual compared to most of the questions superforecasters are known to excel at.

How would crop pollination work without commercial beekeeping?

Hmm, if everybody stopped eating honey and wild bees did not pick up the slack, then presumably farmers would instead pay commercial beekeepers directly to pollinate their fields?

Should recent events make us more or less concerned about biorisk?

One reason to believe otherwise is that you think existential GCBRs will look so radically different that any broader biosecurity preparatory work won't be useful.

Should recent events make us more or less concerned about biorisk?

It's going to go public! I want people to review it lightly in case this type of question leads to information-hazard territory in the answers.

AMA: Elie Hassenfeld, co-founder and CEO of GiveWell

Do you think it makes sense for EAs to treat global health and economic development as the same cause area, given that they seem to be two somewhat separate fields with different metrics, different theories of change, different institutions, etc.?


(I may not be formulating this question correctly).
