Samin

AI Alignment/EA outreach and translation projects
Joined Apr 2019
mishasamin.me

Bio


I want humanity not to lose its long-term potential.

📚 In 2018, I launched a project to print HPMOR in Russian and promote it; it became the most funded crowdfunding campaign in Russian history, and we printed 21,000 copies. A startup I own gives me enough income to focus on EA projects and has, e.g., donated $30k to MIRI. I do outreach projects (e.g., we're organizing a translation of 80,000 Hours' Key Ideas series) but am considering switching to direct alignment research.

(Effectiveness and impact aside, I was the head of the Russian Pastafarian Church for a year, which was fun. I was also a political activist, which was fun except for spending time in jail after protests; it's less fun now, since the Russian authorities would probably throw me in prison if I returned to Russia.)

I’m 22, have Israeli citizenship, and currently live in Tel Aviv, but I don’t want to remain there long-term and would like to find a country and a city to settle in. Also, I’m bi 🏳️‍🌈

How others can help me

Tell me whether I should switch from outreach to alignment research, despite all the leverage in the former and the uncertainties about the latter.

Before the war started, we thought we had some outstanding opportunities to promote EA ideas on a large scale in Russia but were uncertain about strategy (e.g., whether to do something broad or more targeted, how important it is to share the most up-to-date career advice, etc.); now there's a lot of uncertainty about which opportunities still exist, at what scale, and what new risks there might be.

Also, please tell me if you know a good country/city to live in.

How I can help others

Talk to me if you're thinking about translating EA content from English into other languages! I can share the technical setup we’re using to translate, e.g., 80,000 Hours content.

I’m not doing research in the field, but I can answer questions about AI-related x-risks and pitch you the alignment problem.

And I can share my experience with launching the crowdfunding campaign to print HPMOR in Russian (the most funded crowdfunding campaign in the country's history) and with community building and EA/AI safety outreach in Russia.

Comments (7)

Saving lives near the precipice: we're doing it wrong?

It is quite likely that you're right! I think it's just something that should be explicitly thought about; it seems like an uncertainty that wasn't really noticed. If x-risk arrives in the next few decades, some of the money currently directed to interventions fighting deaths and suffering might be better allocated to charities that do it better under that assumption.

Saving lives near the precipice: we're doing it wrong?

Awesome!

I didn't consider the spending speed here. It highlights another important part of the analysis one should make when considering neartermist donations conditional on short timelines. Conditional on humanity solving alignment, you not only want to spend the money before a superintelligence appears but also might maximize the impact by, e.g., delaying deaths until then.

Neartermists should consider AGI timelines in their spending decisions

Thanks for writing this!

Conditional on an aligned superintelligence appearing soon, there could be interventions that prevent or delay deaths until it appears, probably saving those lives and having a lot of value (though it's hard to come up with concrete examples; speculating without thinking about the actual costs, providing HIV therapy that’s possibly not cost-effective if you do it for a lifetime but is cost-effective if you do it for a year, freezing people when they die, or providing mosquito nets that stop working in a year but are a lot cheaper all sound kind of like it).
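
To make the arithmetic behind that guess explicit, here's a toy sketch (every number in it is invented, and `cost_per_person_year` and `funding_bar` are hypothetical placeholders, not real figures): an intervention whose costs are per year of coverage only needs to be funded until the hypothesized date, which can flip the cost-effectiveness verdict.

```python
# Toy comparison (all numbers invented) of an intervention whose costs accrue per year
# of coverage: with a short timeline, you only fund the coverage until then.

cost_per_person_year = 400   # hypothetical yearly cost of, say, a therapy or a short-lived net
funding_bar = 10_000         # hypothetical "worth funding" threshold, $ per life saved

for horizon_years in (40, 5):  # lifetime-ish coverage vs. coverage until a short timeline
    cost_per_life = cost_per_person_year * horizon_years  # assumes one life saved per person covered
    verdict = "clears" if cost_per_life <= funding_bar else "misses"
    print(f"{horizon_years}-year coverage: ${cost_per_life:,} per life saved ({verdict} the ${funding_bar:,} bar)")
```

With these made-up numbers, the same intervention misses the bar when funded for 40 years but clears it comfortably when funded for 5.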

Saving lives near the precipice: we're doing it wrong?

Comments and DMs are welcome, including on the quality of the writing (I'm not a native English speaker and would appreciate any corrections).

Samin's Shortform

I think discounting QALYs/DALYs by the probability of doom makes sense if you want a better estimate of QALYs/DALYs, but it doesn’t help with estimating the relative effectiveness of charities or with allocating funding better.

(It would be nice to input a distribution over the world ending in the next n years and get the discounted values. But it’s the relative cost of ways to save a life that matters; we can’t save everyone, so we want to save the most lives and reduce suffering the most, and to do that we need to understand what our actions lead to so we can compare our options. Knowing how many people you’re saving is instrumental to saving the most people from the dragon. If it costs at least $15,000 to save a life, you don’t stop saving lives because that’s too much; human life is much more valuable. If we succeed, you can imagine spending stars on saving a single life. And if we don’t, we’d still like to reduce suffering the most and let as many people as we can live for as long as humanity lives; for that, we need estimates of the relative value of different interventions conditional on the world ending in n years with some probability.)
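
As a rough sketch of what I mean by "conditional on the world ending in n years with some probability": given an assumed distribution over the catastrophe year (the `doom_year_probs` values and both interventions below are made up, not forecasts), you can cap each intervention's benefits at that year and see how the relative cost per life-year shifts.

```python
# Minimal sketch (hypothetical numbers throughout) of discounting life-years saved by a
# distribution over "years until an existential catastrophe": benefits that would accrue
# after the catastrophe are dropped, everything else is plain expected value.

# P(the catastrophe comes after exactly n more years); the 1000-year bucket stands in
# for "no catastrophe within any horizon that matters here".
doom_year_probs = {5: 0.1, 10: 0.2, 20: 0.2, 1000: 0.5}  # assumed, not a real forecast

def expected_life_years(undiscounted_years: float) -> float:
    """Expected life-years actually lived, capping each scenario at the doom year."""
    return sum(p * min(undiscounted_years, n) for n, p in doom_year_probs.items())

# Two hypothetical interventions: A buys ~50 life-years per life saved (e.g. children),
# B buys ~5 (e.g. older adults). Costs per life saved are made up.
for name, years, cost in [("A", 50, 5000), ("B", 5, 800)]:
    print(f"intervention {name}: ${cost / years:.0f} per life-year naively, "
          f"${cost / expected_life_years(years):.0f} per life-year under the doom distribution")
```

With these invented numbers, the two interventions go from clearly different to roughly tied; that kind of re-ordering is exactly what I'd want the estimates to surface.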

Samin's Shortform

How do effectiveness estimates change if everyone saved dies in 10 years?

“Saving lives near the precipice”

Has anyone made comparisons of the effectiveness of charities conditional on the world ending in, e.g., 5-15 years?

[I’m highly uncertain about this, and I haven’t done much thinking or research]

For many orgs and interventions, the impact estimates would possibly be very different from the default ones made by, e.g., GiveWell. I’d guess the ordering of the most effective non-longtermist charities might change a lot as a result.

It would be interesting to see how that ordering changes once at least some estimates account for the world ending in n years.

Maybe one could start with updating GiveWell’s estimates: e.g., for DALYs, one would need to recalculate the values in GiveWell’s spreadsheets that are derived from distributions which get capped or changed as a result of the world ending (e.g., life expectancy); for estimates of the relative value of averting deaths at certain ages, one would need to estimate and subtract something representing that the deaths still come at (age + n). The second-order and long-term effects would also be different, but estimating the impact there is probably more time-consuming.
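
To make that second adjustment concrete, here's a rough sketch (the ages and the life-expectancy figure of 70 are placeholders, not GiveWell's actual inputs) of treating an averted death as merely delayed until the world ends in year n:

```python
from typing import Optional

def life_years_gained(age: float, life_expectancy: float, years_until_end: Optional[float]) -> float:
    """Life-years gained by averting a death at `age`; if `years_until_end` is set,
    the beneficiary only lives until the catastrophe, so the death is delayed, not averted."""
    remaining = max(life_expectancy - age, 0.0)
    if years_until_end is None:
        return remaining
    return min(remaining, years_until_end)

# Averting the death of a 5-year-old vs. a 60-year-old (placeholder life expectancy of 70),
# with and without the world ending in 10 years.
for age in (5, 60):
    default = life_years_gained(age, 70, None)
    capped = life_years_gained(age, 70, 10)
    print(f"age {age}: {default:.0f} life-years by default, {capped:.0f} if the world ends in 10 years")
```

In this toy case the ratio between averting the two deaths drops from 6.5:1 to 1:1, which is why I'd expect the ordering of charities to move a lot.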

It seems like a potentially important question, since many people have short AGI timelines in mind. So it might be worthwhile to research this area to give people the ability to weigh different estimates of charities’ impacts by the probability of an existential catastrophe.

Please let me know if someone has already worked this out or is working on it, if there’s some reason not to talk about this kind of thing, or if I’m wrong about something.