Manuel Del Río Rodríguez 🔹

Satellite School Head of Studies - Noia (Spain) @ EOI Santiago (Official School of Languages, Santiago)
305 karma · Joined · Working (6-15 years)
linktr.ee/manueldelrio

Bio

English teacher for adults and teacher trainer, a lover of many things (languages, literature, art, maths, physics, history) and people. Head of studies at the satellite school of Noia, Spain.

How others can help me

I am omnivorous in my interests, but from a work perspective, I am very interested in the confluence of new technologies and education. As for other things that could profit from assistance, I am trying to teach myself undergraduate-level math and to seriously explore and engage with the intellectual and moral foundations of EA.

How I can help others

Reach out to me if you have any questions about Teaching English as a Foreign Language, translation and, generally, anything Humanities-oriented. Also, anything you'd like to know about Spain in general and its northwestern corner, Galicia, in particular.

Comments (55)

It was my lame attempt at making a verb out of the St. Petersburg Paradox, where you get an Expected Value calculation of this type: I play a coin-tossing game where, if I get heads, the pot doubles, and if I get tails, I lose everything. The EV is infinite, but in real life you'll end up ruined pretty quickly. SBF had a talk about this with Tyler Cowen and clearly enjoyed biting the bullet (a small simulation after the quoted exchange illustrates the dynamic):

COWEN: Okay, but let’s say there’s a game: 51 percent, you double the Earth out somewhere else; 49 percent, it all disappears. Would you play that game? And would you keep on playing that, double or nothing? 
BANKMAN-FRIED: With one caveat. Let me give the caveat first, just to be a party pooper, which is, I’m assuming these are noninteracting universes. Is that right? Because to the extent they’re in the same universe, then maybe duplicating doesn’t actually double the value because maybe they would have colonized the other one anyway, eventually. 
COWEN: But holding all that constant, you’re actually getting two Earths, but you’re risking a 49 percent chance of it all disappearing. 
BANKMAN-FRIED: Again, I feel compelled to say caveats here, like, “How do you really know that’s what’s happening?” Blah, blah, blah, whatever. But that aside, take the pure hypothetical. 
COWEN: Then you keep on playing the game. So, what’s the chance we’re left with anything? Don’t I just St. Petersburg paradox you into nonexistence? 
BANKMAN-FRIED: Well, not necessarily. Maybe you St. Petersburg paradox into an enormously valuable existence. That’s the other option.
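
To make the dynamic concrete, here is a minimal sketch (mine, not anything from the exchange itself): a Monte Carlo simulation of the repeated double-or-nothing game Cowen describes, with a 51% chance of doubling and a 49% chance of losing everything each round. The function name and parameters are purely illustrative.

```python
import random

def play_double_or_nothing(value=1.0, p_win=0.51, rounds=50):
    """Bet everything each round: with probability p_win the value doubles,
    otherwise it all disappears."""
    for _ in range(rounds):
        if random.random() >= p_win:
            return 0.0  # ruin
        value *= 2
    return value

n_trials = 100_000
outcomes = [play_double_or_nothing() for _ in range(n_trials)]
ruined = sum(1 for v in outcomes if v == 0.0)

# Theoretical EV after 50 rounds is 1.02**50 ≈ 2.7 (each round multiplies EV by
# 2 * 0.51 = 1.02), but the chance of surviving all 50 rounds is 0.51**50,
# roughly 2e-15, so essentially every sampled trajectory ends in ruin and the
# sample average is almost certainly 0: all the EV sits in an astronomically
# unlikely jackpot.
print(f"Average outcome over {n_trials} trials: {sum(outcomes) / n_trials:.3f}")
print(f"Fraction ruined within 50 rounds: {ruined / n_trials:.5f}")
```

The point of the sketch is just the one above: keep replaying a positive-EV, all-or-nothing bet and the probability of still having anything goes to zero, even as the expected value keeps growing.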

I rather assume SBF was a radical, no-holds-barred, naive Utilitarian who just thought he was smart enough not to get caught in what was (from his pov) a minor infringement of the arbitrary rules and norms of the masses, and that the risk was simply worth it.

While I agree that people shouldn't have renounced the EA label after the FTX scandal, I don't quite find your simile with veganism convincing. It seems to fail to include two very important elements:

  1. SBF's public significance within EA: this is more like if one of the most famous vegan advocates on the planet, the one everybody knows about, was shown not only to consume meat but to own a rather big meat-packing plant.
  2. Proximity framing: I think one can make a case for SBF being a pure, naive Utilitarian who just Petersburgged himself to bankruptcy and fraud. While EA is not ideologically 'naive' Utilitarian, one can argue that its intellectual foundations aren't far from Sam's (in fact, they significantly overlap) and might non-trivially cast a shadow on them. It is common for EAs to make really counterintuitive EV calculations and take pride in supporting things normies would find highly objectionable, while paying what from the outside might seem like mere lip service to 'oh, yeah, you should abide by socially established rules and norms', paradoxically holding that such abiding is merely strategic and revocable.

Depopulation is Bad

 

I mildly agree that depopulation is bad, but not by much. The problem is that I suspect our starting views and premises are so different on this that I can't see how they could converge. Very briefly, mine would be something like this:

-Ethics is about agreements between existing agents.
-Future people matter only to the degree that current people care about them.
-No moral duty exists to create people.
-Existing people should not be made worse off for the sake of hypothetical future ones.

I don't think there's a solid argument for the dangers of overpopulation right now or in the near future, and I mostly trust the economic arguments about increased productivity and progress that come from more people. Admittedly, there are some issues that I can think of that would make this less clear:

-If AGI takes off and doesn't kill us all, it is very likely we can offload most of the productivity and creativity to it, negating the advantage of bigger populations.

-A lot of the increase in carbon emissions comes from developing countries that are trying to raise the consumption capacity and lifestyle of their citizens. More people with more Western-like lifestyles will make it incredibly difficult to lower fossil fuel consumption, so if technology doesn't deliver the necessary breakthroughs, it makes sense to want fewer people so that more of them can enjoy our type of lifestyle.

-Again, with technology, we've been extremely lucky in finding low-hanging fruit that allowed us to expand food production (e.g., fertilizers, the Green Revolution). One can be skeptical about indefinite future breakthroughs, the absence of which could push us down to some Malthusian state.

  • Do people, on average, have positive or negative externalities (instrumental value)?

I imagine the answer is both. Most current calculations would say the positive outweighs the negative, but I can imagine how this could cease to be so.

  • Do people's lives, on average, have positive intrinsic value (of a sort that warrants promotion, all else equal)?

Can't really debate this, as I don't think I believe in any sort of intrinsic value to begin with.

I am trying to articulate (probably wrongly) the disconnect I perceive here. I think 'vibes' might sound condescending, but ultimately you seem to agree that assumptions (like mathematical axioms) are not amenable to dispute. Technically, in philosophical practice, one can try to show, I imagine, that given assumption x, some contradiction (or at least something very generally perceived as wrong and undesirable) follows.

I do share the feeling expressed by Charlie Guthmann here that a lot of starting arguments for moral realists are just of the type 'x is obvious/self-evident/feels good to believe/feels worth believing', and when stated that way, they feel equally obviously false to those who don't share those intuitions, and like magical thinking ('If you really want something, the universe conspires to make it come about', Paulo Coelho style). I feel more productive engagement strategies would avoid claims of that sort altogether, and perhaps start by stating what might follow from realist assumptions that could be convincing/persuasive to the other side, and vice versa.

Exactly. What morality is doing is scaffolding something that is pragmatically accepted as good and external to any intrinsic goodness, i.e., individual and/or group flourishing. It is plausible that if we somehow discovered that furthering such flourishing required us to completely violate some moral framework (even a hypothetical 'true' one), it would be okay to do so. Large-scale cooperation is not an end in itself (at least not for me): it is contingent on creating a framework that maximizes my individual well-being, with perhaps some sacrifices accepted as long as I'm still left better off overall than I would be without the large-scale cooperation and the agreed-upon norms.

I wouldn't put mathematics in the same bag as morality. As per the indispensability argument, one can make a fair case (which one can't for ethics) that strong, indirect evidence for the truth of mathematics (and for some of it actually being 'hard-coded' into the universe) is that all the hard sciences rely on it to explain things. Take the math away and there is no science. Take moral realism away and... nothing happens, really?

I agree that ethics does provide a shared structure for trust, fairness, and cooperation, but it makes much more sense, then, to employ social-contractual language and speak about game-theoretic equilibria. Of course, the problem with this is that it doesn't satisfy the urge some people have to force their deeply felt but historically and culturally contingent values into some universal, unavoidable mandate. And we can all feel this when we try, as BB does, to bring up examples of concrete cases that really challenge the values we've interiorized.

They could, but they could also not. Desires and preferences are malleable, although not infinitely so. The critique is presupposing, I feel, that the subject is someone who knows in complete detail not only their preferences but their exact weights, and that this configuration is stable. I think that is a first-approximation model, but it fails to reflect the messier and more complex reality underneath. Still, even accepting the premises, I don't think an anti-realist would say procrastinating in that scenario is 'irrational', but rather that it is 'inefficient' or 'counterproductive' to attaining a stronger goal/desire, and that the subject should take this into account, whatever decision he or she ends up making, which might include changing the weights and importance of the originally 'stronger' desire.

Thanks! I think I can see your pov more clearly now. One thing that often leads me astray is how words seem to latch onto different meanings, and this makes discussion and clarification difficult (as with 'realism' and 'objective'). I think my crux, given what you say, is that I indeed don't see the point of having a neutral, outsider's point of view of the universe in ethics. I'd need to think more about it. I think trying to be neutral or impartial makes sense in science, where the goal is understanding a mind-independent world. But in ethics, I don't see why that outsider view would have any special authority unless we choose to give it weight. Objectivity in the sense of 'from nowhere' isn't automatically normatively relevant, I feel. I can see why, for example, when pragmatically trying to satisfy your preferences as a human in contact with other humans with their own preferences, it makes sense to include in the social contract some specialized and limited uses of objectivity: they're useful tools for coordination, debate and decision-making, and it benefits the maximization of our personal preferences to have some figures of power (rulers, judges, etc.) who are constrained to follow them. But that wouldn't make them 'true' in any sense: they are just the result of agreements and negotiated duties for attaining certain agreed-upon ends.

I find the jump hard to understand. Your preferences matter to you, not 'objectively'; they just matter because you want x, y, z. It doesn't matter if your preferences don't matter objectively: you still care about them. You might have a preference for being nice to people, and that will still matter to you regardless of anything else, unless you change your preference, which I guess is possible but not easy; it depends on the preference. The principle of indifference... I really struggle to see how it could be meaningful, because one has an innate preference for oneself, so whatever uncertainty you have about other sentients, there's no reason at all to grant them and their concerns equal value to yours a priori.

Terminology can be a bugger in these discussions. I think we are accepting, as per BB's own definition at the start of the thread, that Moral Realism basically reduces to accepting a stance-independent view that moral truths exist. As for truth, I would mean it in the way it gets used when studying other stance-independent objects: electrons exist, their existence is independent of human minds and/or of humans having ever existed, and saying 'electrons exist' is true because of its correspondence to objects of an external, human-independent reality.

What I take from your examples (correct me if I am wrong or if I misrepresent you) is that you feel that moral statements are not as evidently subjective as, say, 'Vanilla ice cream is the best flavor', but not as objective as, say, 'An electron has a negative charge', living in some space of in-betweenness with respect to those two extremes. I'd still call this anti-realism, as you're just switching from a maximally subjective stance (an individual's particular culinary tastes) to a more general, but still stance-dependent, one (what a group of experts and/or human and some alien minds might possibly agree upon). I'd say again: an electron doesn't care what a human or any other creature thinks about its electric charge.

As for each of the bullet points, what I'd say is:

  1. I can see why you'd feel the change from a previous view can be seen as a mistake rather than a preference change (when I first started thinking about morality I felt very strongly inclined toward the strongest moral realism, and I now feel that pov was wrong), but this doesn't imply moral realism so much as that it feels as if moral principles and beliefs have objective truth status, even if they were actually a reorganization of stance-dependent beliefs.
  2. I, on the contrary, don't feel like there could be 'moral experts': at most, people who seem to live up to their moral beliefs, whatever the knowledge and reasons for having them. Most surveys I've seen (there's a Rationally Speaking episode on this) show that philosophers, and moral philosophers specifically, don't seem to behave more morally than their colleagues and similar social and intellectual peers.
  3. Convergence can be explained through evolutionary game theory, coordination pressures, and social learning, not objective moral truths. That many societies converge on certain norms just shows what tends to work given human psychology and conditions, not that these norms are true in any stance-independent sense. It's functional success, not moral facthood. 