Misha_Yagudin

Comments

vaidehi_agarwalla's Shortform

If longtermism is one of the latest stages of moral circle development, then your anecdotal data suffers from major selection effects.

Anecdotally, this seems true of a number of EAs I've spoken to who've updated to longtermism over time.

The academic contribution to AI safety seems large

On the other hand, in its 2018 review MIRI wrote about new research directions, one of which feels ML-adjacent. But from the few paragraphs given, the direction doesn't seem relevant to prosaic AI alignment.

Seeking entirely new low-level foundations for optimization, designed for transparency and alignability from the get-go, as an alternative to gradient-descent-style machine learning foundations.

The academic contribution to AI safety seems large

Indeed, “Why I am not currently working on the AAMLS agenda” is a write-up by the lead researcher from a year later. Moreover, they write:

That is, though I was officially lead on AAMLS, I mostly did other things in that time period.

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

Oh, I meant pessimistic. A reason for a weak update might be similar to the Gell-Mann amnesia effect: after putting effort into the classical arguments, you noticed some important flaws. The fact that they have not been articulated before suggests that collective EA epistemology is weaker than expected. Because of that, one might become less certain about the quality of arguments in other EA domains.

So, in short, the Gell-Mann Amnesia effect is when experts forget how badly their own subject is treated in media and believe that subjects they don't know much about are treated more competently by the same media.

Misha_Yagudin's Shortform

Estimates from *The Precipice*.

| Existential catastrophe via | Chance within next 100 years |
|-----------------------------------|--------------------|
| Stellar explosion | 1 in 1,000,000,000 |
| Asteroid or comet impact | 1 in 1,000,000 |
| Supervolcanic eruption | 1 in 10,000 |
| “Naturally” arising pandemics | 1 in 10,000 |
| **Total natural risk** | **1 in 10,000** |
| Nuclear war | 1 in 1,000 |
| Climate change | 1 in 1,000 |
| Other environmental damage | 1 in 1,000 |
| Engineered pandemics | 1 in 30 |
| Unaligned artificial intelligence | 1 in 10 |
| Unforeseen anthropogenic risks | 1 in 30 |
| Other anthropogenic risks | 1 in 50 |
| **Total anthropogenic risk** | **1 in 6** |
| **Total existential risk** | **1 in 6** |
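
For intuition, a quick sketch of how the individual anthropogenic estimates would combine if they were treated as independent and exact (assumptions the book does not make); the naive combination lands near the stated 1 in 6.

```python
# Rough sanity check, not from The Precipice itself: if the individual
# anthropogenic risks were independent, the chance of at least one
# catastrophe would be 1 - prod(1 - p_i). The book's totals appear to be
# overall judgments rather than mechanical sums, so this is only an
# illustration of scale.

risks = {
    "Nuclear war": 1 / 1_000,
    "Climate change": 1 / 1_000,
    "Other environmental damage": 1 / 1_000,
    "Engineered pandemics": 1 / 30,
    "Unaligned artificial intelligence": 1 / 10,
    "Unforeseen anthropogenic risks": 1 / 30,
    "Other anthropogenic risks": 1 / 50,
}

survival = 1.0
for p in risks.values():
    survival *= 1 - p  # probability of avoiding each risk, assuming independence

combined = 1 - survival
print(f"Naive combined anthropogenic risk: {combined:.3f} (about 1 in {1 / combined:.1f})")
# Prints roughly 0.178, i.e. about 1 in 5.6 -- the same ballpark as the
# stated 1 in 6, which is dominated by the 1-in-10 estimate for unaligned AI.
```

The small gap between the naive 1 in 5.6 and the book's 1 in 6 is unsurprising, since the table's totals read as overall order-of-magnitude judgments rather than sums of the line items.
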
AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

Have you become more uncertain/optimistic about the arguments in favour of the importance of other x-risks as a result of scrutinising AI risk?

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

I am curious whether you are, in general, more optimistic about x-risks [say, than Toby Ord]. What are your estimates of total and unforeseen anthropogenic risks in the next century?

AMA or discuss my 80K podcast episode: Ben Garfinkel, FHI researcher

On a scale from 1 to 10 what would you rate The Boss Baby? :)
