Manuel Del Río Rodríguez 🔹

Satellite School Head of Studies - Noia (Spain) @ EOI Santiago (Official School of Languages, Santiago)
288 karma · Working (6-15 years)
linktr.ee/manueldelrio

Bio

English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.

How others can help me

I am omnivorous in my interests, but from a work perspective, I am very interested in the confluence of new technologies and education. As for other things that could profit from assistance, I am trying to teach myself undergraduate-level math and to seriously explore and engage with the intellectual and moral foundations of EA.

How I can help others

Reach out to me if you have any questions about Teaching English as a Foreign Language, translation and, generally, anything Humanities-oriented. Also, anything you'd like to know about Spain in general and its northwestern corner, Galicia, in particular.

Comments (39)

Hello, and thanks for engaging with it. A couple of notes about the points you mention:

I have only read Thorstad's arguments as they appear summarized in the book (he does have a blog in which one of his series, which I haven't read yet, goes into detail on this: https://reflectivealtruism.com/category/my-papers/existential-risk-pessimism). I have gone back to the chapter, and his thesis, in a bit more detail, is that Ord's argument is predicated on a lot of questionable assumptions: that the time of perils will be short, that the current moment is very dangerous, and that the future will be much less dangerous and will stay that way for a long time. He questions the evidence for all of those assumptions, but particularly the last: "For humans to survive for a billion years, the annual average risk of our extinction needs to be no higher than one in a billion. That just doesn’t seem plausible—and it seems even less plausible that we could know something like that this far in advance." He then expands the point, citing the extreme uncertainty of events far in the future, arguing that it is unlikely that treaties or world government could keep risk low, that 'becoming more intelligent' is too vague, and that AGI is absurdly implausible ("The claim that humanity will soon develop superhuman artificial agents is controversial enough," he writes. "The follow-up claim that superintelligent artificial systems will be so insightful that they can foresee and prevent nearly every future risk is, to most outside observers, gag-inducingly counterintuitive.").
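To make the arithmetic behind that quoted claim concrete, here is my own back-of-the-envelope illustration (not something from the book or from Thorstad): with a constant annual extinction risk r, the probability of surviving N years is (1 - r)^N, and over a billion years even tiny annual risks compound dramatically.

```python
import math

# Rough illustration (my own numbers, purely for intuition):
# with a constant annual extinction risk r, the chance of surviving N years
# is (1 - r)**N. Over a billion years, even tiny annual risks compound.
N = 1_000_000_000  # one billion years

for r in (1e-6, 1e-8, 1e-9):
    survival = math.exp(N * math.log1p(-r))  # numerically stable (1 - r)**N
    print(f"annual risk {r:.0e} -> P(survive a billion years) ~ {survival:.2g}")
```

Only at roughly one-in-a-billion annual risk does long-run survival stay non-negligible, which is the bound Thorstad finds implausible to assert this far in advance.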

As for the second statement, my point wasn’t that extinction always trumps everything else in expected value calculations, but that if you grant the concept of existential risk any credence, then, ceteris paribus, the sheer scale of what’s at stake (e.g., billions of future lives across time and space) makes extinction risks of overriding importance in principle. That doesn’t mean that catastrophic-but-non-extinction events are negligible, just that their moral gravity derives from how they affect long-term survival and flourishing. I think you make a very good argument that massive, non-extinction catastrophes might be nearly as bad as extinction if they severely damage humanity’s trajectory, but I feel it rests on highly speculative claims about the difficulty of making a comeback and the likelihood of extreme climate change, and I still find the difference between existential risk and catastrophe(s) significant.

Two direct quotes: "There are two issues here. The first is that Ord and MacAskill are out of step with the scientific mainstream opinion on the civilizational impacts of extreme climate change. In part, this seems to stem from a failure to imagine how global warming can interact with other risks (itself a wider issue with their program), but it’s also a failure to listen to experts on the subject, even ones they contact themselves".

"Ord and MacAskill’s confidence that climate change probably doesn’t pose the kind of existential threat they’re worried about is unwarranted. And the fact that they’re primarily worried about existential threats in the first place is the other problem: once a threat has been deemed existential, it’s impossible to outweigh it with any less- than-existential threat in the present day".

The first quote is the clearer of the two in suggesting that Ord's and MacAskill's estimates fall outside mainstream scientific opinion. It connects to a footnote (16) that links to https://digressionsnimpressions.typepad.com/digressionsimpressions/2022/11/on-what-we-owe-the-future-no-not-on-sbfftx.html , which is definitely not a summary or compilation of mainstream views on the effects of global warming but a philosopher's review of What We Owe the Future. Perhaps this is a mistake. Note 14 does link to an article by none other than E. Torres, 'What "longtermism" gets wrong about climate change', which seems to be the authority produced for the thesis that Ord and MacAskill's views are far from the scientific mainstream on this. Torres states that he contacted 'a number of leading researchers', but these look cherry-picked: selective expert sourcing via Torres, not a systematic appeal to the IPCC consensus.

That may be true, but it isn't the argument Becker is making; it would still mean that the book's author is at best dissembling when he says that the expert consensus on x-risks from global warming is very different from what Ord and MacAskill state.

I wish I could be of help here, but I just lack the expertise. I think part of the issue is that 'the consensus' (as per the IPCC reports) doesn't model worst-case scenarios, and most climate scientists do not predict human extinction from warming, even at extreme levels. It also isn't clear why Ord or MacAskill would try to 'outsmart' the literature: if anything, I'd guess they would prefer to be able to include global warming among existential risks, as it's an easy and popular cause to win support with, so my prior is that they do indeed gauge the expert consensus well. Becker's sources are mostly the two scientists mentioned, who seem (from a quick glance) to come from collapse-focused research that emphasizes high uncertainty and worst-case feedback loops.

Naively, I am assuming that extinction risks are relatively high (1/6 was Ord's estimate in The Precipice?), and that if extinction happens, then there are no futures left and their value is 0.

Part A. 

This will not be fully theoretical: I've already been donating 5% for the last two years. My first pick would be the Malaria Consortium. It seems to be very cost-effective ($5,000 per life saved on average, and roughly $7 per output, i.e., per child treated with a full course of medicine). It also has strong evidence of impact.

My second option would be the Against Malaria Foundation. It is pretty similar to the first choice in target, effectiveness and evidence of impact, but the numbers look slightly worse (perhaps?). Cost per life saved is about $2,000 higher, which looks worse, but cost per output (per bednet delivered, as opposed to the Consortium's children treated with a full course of medicine) is a bit lower, at $6. Also, working on prevention seems more far-sighted and perhaps more controllable.

My third choice is Helen Keller International. Cost per outcome is practically the same as in the two previous cases, and it is much cheaper per output (just $2 per vitamin A supplement), but I am more uncertain about the specific results. A quick comparison sketch follows below.
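Here is a rough sketch of how I would compare the three options quantitatively, using the approximate figures above. The AMF cost per life and the Helen Keller cost per life are my own assumptions (inferred from "about $2,000 higher" and "practically the same"), and the donation amount is purely hypothetical; none of these are official estimates.

```python
# Back-of-the-envelope comparison using the rough figures quoted above.
# These are approximations from my comment, not authoritative cost-effectiveness numbers.
charities = {
    "Malaria Consortium":         {"cost_per_life": 5_000, "cost_per_output": 7, "output": "child treated (full course)"},
    "Against Malaria Foundation": {"cost_per_life": 7_000, "cost_per_output": 6, "output": "bednet delivered"},
    "Helen Keller International": {"cost_per_life": 5_000, "cost_per_output": 2, "output": "vitamin A supplement"},
}

donation = 1_000  # hypothetical yearly donation, for illustration only

for name, c in charities.items():
    lives = donation / c["cost_per_life"]
    outputs = donation / c["cost_per_output"]
    print(f"{name}: ~{lives:.2f} lives saved, ~{outputs:.0f} x {c['output']}")
```

On these rough numbers the cost per life saved dominates the ranking, which is why the differences in cost per output (bednets vs. treatment courses vs. supplements) matter less to me than the uncertainty around each estimate.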

Part B. 

For all the reasons set out above, if I have to choose only one, it would be the Malaria Consortium.

Part C.

Generally, decisions relating to investments for retirement in 20 years' time. Perhaps I should also approach alternative jobs or promotion decisions with this quantitative mindset.

I feel one is always allowed not to speak about what one doesn't want to, but that if one does decide to speak about something, one should never make a statement one knows is a lie. This is sad, because depending on the issue and how it relates to your career and other things, you might not be able to just keep quiet, and besides, your silence is going to be interpreted uncharitably. People who have consistently shown that they value and practice truth-telling should be allowed some sort of leeway, like 'I will only answer n randomly chosen questions today (n also randomized), and you are not entitled to press further on anything I don't answer'.

I am not being precise with language, but what I meant was something like this: sometimes you know that stating certain truths, or merely accepting the possibility that some things are true and being willing to explore and publicize them no matter what, might have negative consequences, like hurting or offending people, frequently for good, pragmatic and historical reasons. Prioritizing not doing harm would feel like a perfectly valid, utilitarian consideration, even if I disagree with it trumping all others. In terms of Haidt's moral framework, one can prioritize Care/Harm over Liberty/Oppression. Myself, I have a deontological, quasi-religious belief in truth and truth-seeking as an end in itself.

I agree with that, and that our goal should be to achieve both, but reality being what it is, there are going to be times when truth-seeking and kindness come into conflict, and one has to make a trade-off. Ultimately, I choose truth-seeking in case of conflict, even after weighing the negative effects it can generate. But to each his own.

Really agree with this take. Ultimately, I get the impression that there is a growing divide in EA between people who prioritize truth-seeking and those who prioritize PR and kindness. And these are complex topics with difficult trade-offs that each of us has to navigate and settle on a personal basis.
