Philosophy
Investigation of the abstract features of the world, including morals, ethics, and systems of value

Quick takes

10
2mo
Okay, so one thing I don't get about "common sense ethics" discourse in EA is: which common sense ethical norms prevail? Different people, even in the same society, have different attitudes about what's common sense.

For example, pretty much everyone agrees that theft and fraud in the service of a good cause - as in the FTX case - are immoral. But what about cases where the governing norms are ambiguous or changing? For example, in the United States it's customary to tip at restaurants and for deliveries, but there isn't much consensus on when and how much to tip, especially with digital point-of-sale systems encouraging people to tip in more situations. (Just as an example of how conceptions of "common sense ethics" can differ: I just learned that apparently you're supposed to tip the courier before you get a delivery now, otherwise they might refuse to take your order at all. I grew up believing that you're supposed to tip after you get service, but many drivers expect you to tip beforehand.)

You're never required to tip as a condition of service, so what if you just never tipped and always donated the equivalent amount to highly effective charities instead? That sounds unethical to me, but technically it's legal and not a breach of contract. Going further, what if you started a company, like a food delivery app, that hired contractors to do the important work and paid them subminimum wages[1], forcing them to rely on users' generosity (i.e. tips) to make a living? And then made a 40% profit margin and donated the profits to GiveWell? That also sounds unethical - you're taking with one hand and giving with the other. But in a capitalist society like the U.S., it's just business as usual.

1. ^ Under federal law and in most U.S. states, employers can pay tipped workers less than the minimum wage as long as their wages and tips add up to at least the minimum wage. However, many employers get away with not ensuring that tipped workers earn the minimum wage.
15
7mo
In his recent interview on the 80000 Hours Podcast, Toby Ord discussed how nonstandard analysis and its notion of hyperreals may help resolve some apparent issues arising from infinite ethics (link to transcript). For those interested in learning more about nonstandard analysis, there are various books and online resources. Many involve fairly high-level math, as they aim to put what was originally an intuitive but imprecise idea on rigorous footing. Instead of those, you might want to check out a book like H. Jerome Keisler's Elementary Calculus: An Infinitesimal Approach, which is freely available online. It aims to be an introductory calculus textbook for college students that uses hyperreals instead of limits and epsilon-delta proofs to teach the essential ideas of calculus, such as derivatives and integrals. I haven't actually read this book, but I believe it is the best-known book of this sort. Here's another similar-seeming book by Dan Sloughter.
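To give a flavor of the approach (a standard illustrative computation, not quoted from Keisler's book): with an infinitesimal dx and the standard-part map st, the derivative of f(x) = x² can be computed without limits or epsilon-delta arguments.

```latex
% Illustrative sketch of the hyperreal approach (my example, not from Keisler):
% differentiate f(x) = x^2 using an infinitesimal dx and the standard-part map st.
\[
f'(x) \;=\; \operatorname{st}\!\left(\frac{(x+dx)^2 - x^2}{dx}\right)
       \;=\; \operatorname{st}\!\left(\frac{2x\,dx + dx^2}{dx}\right)
       \;=\; \operatorname{st}(2x + dx)
       \;=\; 2x .
\]
```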
3
1mo
I am wondering whether people view EA and cause-specific field-building differently, especially with respect to the Scout Mindset. My general thoughts are:

* EA - Focuses on providing knowledge and evidence so that individuals can rationally weigh the evidence for themselves, update their beliefs, and let those beliefs inform their actions wherever they may go. The Scout Mindset is intrinsically valuable: it provides flexibility and works on the beliefs that individuals hold.
* Field-building - Focuses on convincing people that a particular cause area is worth working on and will have a significant impact; there is less focus on individuals forming their own views based on the strength of the arguments and evidence that field-builders already possess. The Scout Mindset is instrumentally valuable for updating and working on the beliefs that field-builders hold.

Argument for instrumental value: It is much easier to ask someone to understand one thing and act on it than to understand many things and struggle to act on any of them, which may be counterfactually more impactful.

Argument for intrinsic value: Focusing on intrinsic value means attending to the internal process of change within EA, so as to see and understand the reasons behind different cultural shifts over time, with particular emphasis on the potential for value drift.

The core difference between the two, as I see it, is whether the community builder focuses on promoting the individual or the cause. However, this may be an oversimplification or an unfair misrepresentation, and I am keen to hear the community's views.
10
7mo
Julia Nefsky is giving a research seminar at the Institute for Futures Studies titled "Expected utility, the pond analogy and imperfect duties", which sounds interesting for the community. It will be on September 27 at 10:00-11:45 (CEST) and can be attended for free, in person or online (via Zoom). You can find the abstract here and register here. I don't know Julia or her work and I'm not a philosopher, so I cannot directly assess the expected quality of the seminar, but I've seen several seminars from the Institute for Futures Studies that were very good (e.g. from Olle Häggström; Anders Sandberg also gives one on September 20). I hope this is useful information.
11
10mo
1
Steelmanning is typically described as responding to the "strongest" version of an argument you can think of. Recently, I heard someone describe it a slightly different way: as responding to the argument that you "agree with the most."

I like this framing because it signals an extra layer of epistemic humility: I am not a perfect judge of what the best possible argument for a claim is. In fact, reasonable people often disagree on what constitutes a strong argument for a given claim.

This framing also helps avoid a tone of condescension that sometimes comes with steelmanning. I've been in a few conversations in which someone says they are "steelmanning" some claim X, but says it in a tone of voice that communicates two things:

* The speaker thinks that X is crazy.
* The speaker thinks that those who believe X need help coming up with a sane justification for X, because X-believers are either stupid or crazy.

It's probably fine to have this tone of voice if you're talking about flat earthers or young earth creationists, and are only "steelmanning" X as a silly intellectual exercise. But if you're in a serious discussion, framing "steelmanning" as being about the argument you "agree with the most" rather than the "strongest" argument might help signal that you take the other side seriously.

Anyone have thoughts on this? Has this been discussed before?
7
7mo
1
I'm curious what people who are more familiar with infinite ethics think of Manheim & Sandberg's What is the upper limit of value?, in particular the section where they discuss infinite ethics. I first read their paper a few years ago and found their arguments for the finiteness of value persuasive, as well as their collectively exhaustive responses in section 4 to possible objections. So ever since then I've been admittedly confused by claims that the problems of infinite ethics still warrant concern w.r.t. ethical decision-making (e.g. I don't really buy Joe Carlsmith's arguments for acknowledging that infinities matter in this context, and the same goes for Toby Ord's discussion in a recent 80K podcast). What am I missing?
4
4mo
I think this isn't mentioned enough in EA, and I feel the need to point out this passage from William_MacAskill_when-should-an-effective-altruist-donate.pdf (globalprioritiesinstitute.org) (p. 7). In other words,

v(A) = [P(Utilitarianism is correct) ⋅ v(A | Utilitarianism is correct)] + [P(Rationalism is correct) ⋅ v(A | Rationalism is correct)] + ⋯,

where v(A) is the value of some action A, P(B) is the probability of some claim B being true, and f(A|B) is f(A), given that B is true.
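Written compactly (my own notation, not MacAskill's), this is just the expectation of the action's value taken over candidate moral theories, assuming the theories are mutually exclusive and jointly exhaustive and that their value scales are intertheoretically comparable:

```latex
% Compact restatement (my notation, not MacAskill's): expected moral value of action A
% over candidate moral theories T_1, T_2, ..., assumed mutually exclusive, jointly
% exhaustive, and measured on comparable value scales.
\[
v(A) \;=\; \sum_{i} P(T_i \text{ is correct}) \cdot v(A \mid T_i \text{ is correct})
\]
```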
8
1y
I think we separate causes and interventions into "neartermist" and "longtermist" buckets too much. Just as some members of the EA community have complained that AI safety is pigeonholed as a "long-term" risk when it's actually imminent within our lifetimes[1], I think we've been too quick to dismiss conventionally "neartermist" EA causes and interventions as not valuable from a longtermist perspective.

This is the opposite failure mode of surprising and suspicious convergence - instead of assuming (or rationalizing) that the spaces of interventions that are promising from neartermist and longtermist perspectives overlap a lot, we tend to assume they don't overlap at all, because overlap between the top longtermist causes and the top neartermist ones would seem surprising and suspicious. But if the cost-effectiveness of different causes according to neartermism and longtermism is independent (or at least somewhat positively correlated), I'd expect at least some causes to be valuable according to both ethical frameworks. I've noticed this in my own thinking, and I suspect that it's a common pattern among EA decision makers; for example, Open Phil's "Longtermism" and "Global Health and Wellbeing" grantmaking portfolios don't seem to overlap.

Consider global health and poverty. These are usually considered "neartermist" causes, but we can also tell a just-so story about how global development interventions such as cash transfers might be valuable from the perspective of longtermism:

* People in extreme poverty who receive cash transfers often spend the money on investments as well as consumption. For example, a study by GiveDirectly found that people who received cash transfers owned 40% more durable goods (assets) than the control group. Also, anecdotes show that cash transfer recipients often spend their funds on education for their kids (a type of human capital investment), starting new businesses, building infrastructure for their communities, and…