All of TruePath's Comments + Replies

I feel like there is some definitional slippage going on when you suggest that a painful experience is less bad when you are also experiencing a pleasurable one at the same time. Rather, it seems to me the right way to describe this situation is that the experience is simply not as painful as it would otherwise be.

To drive this intuition home, consider S&M play. It's not that the pain of being whipped is just as bad; it literally feels different from being whipped in some other context, and the context simply makes it less painful.

Better yet notice the way opiates w... (read more)

The "go extinct" condition is a bit fuzzy. It seems like it would be better to express what you want to change your mind about as something more like (forget the term for this). P(go extinct| AGI)/P(go extinct).

I know you've written the question in terms of going extinct because of AGI, but I worry this invites ways of shifting that value upward that are relatively trivial and uninformative about AI.

For instance, consider a line of argument:

  1. AGI is quite likely (probably by your own lights) to be developed by 2070.

  2. If AGI is developed, either it will suffer from serious

... (read more)
2
David Johnston
2y
I agree with this, and the “drastic reduction in long term value” part is even worse. It is implicitly counterfactual - drastic reductions have to be in reference to *something* - but exactly what the proposed counterfactual is remains extremely vague. I worry that to some extent this vagueness will lead to people not exploring some answers to the question because they're trying to self-impose a "sensible counterfactual" constraint which, due to the vagueness, won't actually line up well with the kinds of counterfactuals the FTX foundation is interested in exploring.

I think all the problems with involving EA in causes that require political change (increasing government funding for mental health, say; even all of Gates' billions wouldn't go very far if he tried to directly fund a substantial slice of first-world mental health expenditures) apply to changing government funding, and many of the issues are even harder because they derive from hard-to-shift societal attitudes. These also make even direct funding of many types of research difficult.

For instance, a big problem (imo) with the way depression drugs are researched ... (read more)

1
Dvir Caspi
2y
Thank you for your comment. I would like to address your first point. While government funds do need a political push, and societal change is trickier than it seems, general innovation in mental health that could benefit society does not require any grand political change or push. There is already meaningful innovation in both the non-profit and for-profit sectors. As for your example that Gates would run out of money if he tried to directly fund health in general: that's obviously true. But it doesn't mean that careful capital couldn't be allocated to promising for-profit and non-profit health innovations, just as in many other fields.

I'm not sure I completely followed #1, but maybe this will answer what you are getting at.

I agree that the following argument is valid:

Either the time discounting rate is 0, or it is morally preferable to use your money/resources to produce utility now rather than to freeze yourself and produce utility later.

However, I still don't think you can make the argument that I can't think that time discounting is irrelevant to what I selfishly prefer while believing that you shouldn't apply discounting when evaluating what is morally preferable.  And I think this substa... (read more)

I ultimately agree with you (pure time discounting is wrong…even if our increasing wealth makes it a useful practical assumption) but I don't think your argument is quite as strong as you think (nor is Cowan's argument very good).

In particular, I'd distinguish my selfish emotional desires regarding my future mental states from my ultimate judgements about the goodness or badness of particular world states.  But I think we can show these have to be distinct notions[1].  Someone who was defending pure time discounting could just say: well while, ... (read more)

2
Ramiro
3y
I'm very grateful for your comment.
1. Do you think I should add an explicit caveat remarking that the reductio assumes only self-regarding reasons / preferences? For instance, I'm not in favor of cryonics for myself - I currently consider that, given the required investment plus all the uncertainties, I'm likely better off, from a moral point of view, by donating to effective charities (or even to another project I might value even after death, such as making my loved ones happy). But notice this has nothing to do with time preference (quite the opposite).
2. About Sarah's example... Well, I agree with you; but notice that the reasoning in the cryonics reductio is still valid - and that was my whole point. I'm not advocating for cryonics; I'm basically asking if one thinks that it's a bad option because it aims at future experiences. I think someone could consistently bite this bullet. Actually, my whole point (which is still quite entangled, I admit - and I thank your comment for exposing it) is that we often mix some types of reasoning connected to a subjective / contextual / (philosophically) relativistic notion of time (i.e., "Sarah in the present" vs. "Sarah in the future") with some sort of (quasi-) objective / t-series notion ("Sarah in t") - something like the "point of view of the universe" or "the point of view of humanity." (Again, thanks to Gavin for directing my attention to this.) When we specify what point of view we are doing the evaluation from, most conundrums seem to disappear... except the next one.
3. I'm very interested in reading more about this. Of course, this is a real theoretical problem. However, I guess discounting because of uncertainty (and the possibility of extinction, etc.) might be enough to avoid it - as Nicholas Stern proposes. But I really get lost when we start talking about infinities.

Could you provide some evidence that this rate of growth is unusual in history? I mean, it wouldn't shock me if we looked back at the last 5,000 years and saw that most societies' real production grew at similar rates during times of peace/tranquility, but that this resulted in small absolute growth that was regularly wiped out by invasion, plague, or other calamity. In which case the question becomes whether or not you believe that our technological accomplishments make us more resistant to such calamities (another discussion entirely).

Moreover, even i... (read more)

I thought the archetypal example was one where everyone had a mild preference to be with other members of their race (even if just because of somewhat more shared culture) and didn't personally care much whether they were in a mixed group. But I take your point to be that, at least in the gender case, we do have a preference not to be entirely divided by gender.

So yes, I agree that if the effect leads to too much sorting then it could be bad, but it seems like a tough empirical question whether we are at a point where the utility gains from more sorting are greater or smaller than the losses.

Could you say a bit more about what you want this flag to symbolize/communicate?  Flags for nations need to symbolize what holds the members of that country together and unifies them but, when it comes to an idea, it seems the flag is more a matter of  what you want to communicate to others about the virtues of your idea.   I mean I'm having trouble imagining that a utilitarian flag could do $1000 worth of good unless it does some important PR work for utilitarianism.

If it were me, I'd be trying to pick a flag to communicate the idea that util... (read more)

7
Dan H
3y
I think the "heart in a lightbulb" insignia for EA is a great design choice and excellent for outreach, but there is no such communicable symbol for utilitarianism. Companies know to spend much on design for outreach since visualization is not superfluous. I do not think the optimal spending is $0, as is currently the case. A point of the competition is finding a visual way of communicating a salient idea about utilitarianism suitable for broader outreach. I do not know what part is best to communicate or how best to communicate it--that's part of the reason for the competition.

Re your first point: yup, they won't try to recruit others to that belief, but so what? That's already a bullet any utilitarian has to bite, thanks to examples like the aliens who will torture the world if anyone believes utilitarianism is true or tries to act as if it is. There is absolutely nothing self-defeating here.

Indeed, if we define utilitarianism as simply the belief that one's preference relation on possible worlds is dictated by the total utility in them, then it follows by definition that the best acts an agent can take are just the ones which maximize uti... (read more)

Yes, and reading this again now I think I was way too harsh. I should have been more positive about what was obviously an earnest concern and desire to help, even if I don't think it's going to work out. A better response would have been to suggest other ideas to help, but I can't think of much other than reforming how medical practice works so that mental suffering isn't treated as less important than being physically debilitated (doctors will agree to risky procedures to avoid physical loss of function but won't with mental illness, likely because the family doesn't see the suffering from the inside but does see the loss in a death, and so is liable to sue/complain if things go badly).

I apparently wasn't clear enough that I absolutely agree with and support things like icebreakers etc. But we shouldn't either expect them to increase female representation or judge their effectiveness by how much they do. Absolutely do it, and do it for everyone who will benefit, but just don't be surprised if, even after we do that everywhere, it doesn't do much to affect the gender balance in EA.

I think if we just do it because it makes people more comfortable, without the gender overlay, not only will it be more effective and more widely adopted, but it will also avoid the very real ... (read more)

No, I didn't mean to suggest that. But I did mean to suggest that it's not at all obvious that this kind of Schelling-style amplification of preferences is something it would be good to do anything about. The archetypal example of Schelling-style clustering is a net utility win, even if a small one.

5
Kirsten
3y
So in the archetypal Schelling example, everyone would prefer to be at a table with both races, but strongly prefer to NOT be the only one of their race at their table, which led to complete racial segregation which no one was especially keen on...

I fear that we need to do geoengineering right away or we will be locked into never undoing the warming. The problem is that a few countries like Russia massively benefit from warming. Once they have seen that warming and taken advantage of the newly opened land, they will see any attempt to artificially lower temperatures as an attack to be answered with force, and they have enough fossil fuels to maintain the warmer temperatures even if everyone else stops carbon emissions (which they can easily scuttle anyway).

IMO this concern is more persuasive than the risk of trying Geoenginee... (read more)

3
kbog
3y
Deleted my previous comment - I have some little doubts and don't think the international system will totally fail but some problems along these lines seem plausible to me
2
[comment deleted]
3y

I ultimately agree with you but I think you miss the best argument for the other side. I think it goes like this:

  1. Humans are particularly bad at coordinating to reduce harms that are distant in time or that are small risks of large harms. In other words, out of sight, out of mind. We are much better at solving problems from which we experience at least some current harm, and we prefer to push harms off into the future or into low-probability events.

The argument for this point is buttressed by the very fact that we aren't doing anything about warming right now.

  2. Geo
... (read more)
3
kbog
3y
I'm not sure if immediacy of the problem really would lead to a better response: maybe it would lead to a shift from prevention to adaptation, from innovation to degrowth, and from international cooperation to ecofascism. Immediacy could clarify who will be the minority of winners from global warming, whereas distance makes it easier to say that we are all in this together. At the very least, geoengineering does make the future more complicated, in that on top of the traditional combination of atmospheric uncertainties and emission uncertainties, we have to add uncertainty about how the geoengineering regime will proceed. And most humans don't do a great job of responding to uncertain problems like this. But I don't think we understand these psychological and political dynamics very well. This all reminds me of public health researchers, pre-COVID, theorizing about the consequences of restricting international travel during a pandemic. I'll think a bit more on this.

The parent post already responded to a number of these points but let me give a detailed reply.

First, the evidence you cite doesn't actually contradict the point being made. Just because women rate EA as somewhat less welcoming doesn't mean that this is the reason they return at a lower rate. Indeed, the alternate hypothesis that says it's the same reason women are less likely to be attracted to EA in the first place seems quite plausible.

As far as the quotes go, we can ignore the people simply agreeing that something should be done to increas... (read more)

9
Jon_Behar
5y
I intentionally avoided commenting on the OP’s broader claims as I’m squarely in the “Nobody's going to solve the question of social justice here” camp (per @Aidan O’Gara). I only meant to comment on the narrow issue of EA London’s gender-related attendance dynamics, to try and defuse speculation by pointing people to relevant data that’s available. In retrospect, I probably should have just commented on the thread about women being less likely to return to EA London meetups instead of this one, but here we are. I think the quotes from the surveys offer important insights, and that it’d be bizarre to try to understand how EA London’s events are perceived without them. I didn’t claim they offer a definitive explanation (just one that’s more informed than pure intuition), and I certainly didn’t argue we should start restricting discussions on lots of important topics. Actually, one of my biggest takeaways from the survey quotes is that there’s low-hanging fruit available, opportunities to make EA more inclusive and better at seeking truth at the same time. The cost/benefit profile of (for example) an icebreaker at a retreat is extremely attractive. It makes people feel more welcome, it builds the sort of trust that makes it easier to have conversations on controversial topics, and it makes those conversations better by inviting a broader range of perspectives. Even if you hate icebreakers (like I do), based on the survey data they seem like a really good idea for EA retreats and similar events.

I definitely prefer being in gender-balanced settings to being the only woman in a group of men, so I agree that's a preference. You seem to be suggesting that if it's a preference, it's not the cause of our homogeneity, but I think the preference to be near similar people is a good explanation for why EA isn't very diverse. (cf Thomas Schelling's work on informal segregation)

Also, your concern about some kind of disaster caused by wireheading addiction and resulting deaths and damage is pretty absurd.

Yes, people are more likely to do drugs when they are more available, but even if the government can't keep the devices that enable wireheading from being legally purchased, it will still require greater effort to put together your wireheading setup than it currently does to drive to the right part of the nearest city (discoverable via Google) and purchase some heroin. Even if it did become very easy to access, it's still no... (read more)

You make a lot of claims here that seem unsupported and based on nothing but vague analogy with existing primitive means of altering our brain chemistry. For instance, a key claim on which most of your consequences seem to depend is this: "It is great to be in a good working mood, where you are in the flow and every task is easy, but if one feels “too good”, one will be able only to perform “trainspotting”, that is, mindless staring at objects."

Why should this be true at all? The reason heroin abusers aren't very productive (and, imo, heroin ... (read more)

I'm disappointed that the link about which invertebrates feel pain doesn't go into more detail on the potential distinction between merely learning from damage signals and the actual qualitative experience of pain. It is relatively easy to build a simple robot or write a software program that demonstrates reinforcement learning in the face of some kind of damage, but we generally don't believe such programs truly have a qualitative experience of pain. Moreover, the fact that some stimuli are both unpleasant and yet rewarding (e.g. they encourage repetition) indicates these notions come apart.
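As a rough sketch of how little machinery "learning from damage signals" requires, here is a toy example (my own illustration; the action names, reward values, and update rule are invented and not taken from the linked article):

```python
import random

# Toy agent that learns, by averaging observed rewards, to avoid the one
# action that produces a "damage" signal. It adapts its behaviour, but few
# would claim it has a qualitative experience of pain.

ACTIONS = ["touch_hot_plate", "stay_still"]
value_estimates = {a: 0.0 for a in ACTIONS}
counts = {a: 0 for a in ACTIONS}

def damage_signal(action: str) -> float:
    """Hypothetical environment: touching the plate causes damage (reward -1)."""
    return -1.0 if action == "touch_hot_plate" else 0.0

EPSILON = 0.1  # occasionally explore instead of exploiting
for _ in range(1000):
    if random.random() < EPSILON:
        action = random.choice(ACTIONS)
    else:
        action = max(ACTIONS, key=lambda a: value_estimates[a])
    reward = damage_signal(action)
    counts[action] += 1
    # incremental average of the rewards seen so far for this action
    value_estimates[action] += (reward - value_estimates[action]) / counts[action]

print(value_estimates)  # the agent ends up strongly preferring "stay_still"
```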

2
Brian_Tomasik
6y
It's a big topic area, and I think we need articles on lots of different issues. The overview piece for invertebrate sentience was just a small first step. Philosophers, neuroscientists, etc. have written thousands of papers debating criteria for sentience, so I don't expect such issues to be resolved soon. In the meanwhile, cataloguing what abilities different invertebrate taxa have seems valuable. But yes, some awareness of the arguments in philosophy of mind and how they bear on the empirical research is useful. :)

While this isn't an answer, I suspect that if you are interested in insect welfare, one first needs a philosophical/scientific program to get a grip on what that entails.

First, unlike with other kinds of animal suffering, it seems doubtful there are any interventions for insects that will substantially change their quality of life without also making a big difference in the total population. Thus, unlike with large animals, where one can find common ground between various consequentialist moral views, it seems quite likely that whether a particular intervention is go... (read more)

3
Brian_Tomasik
6y
Nice points. :) One exception might be identifying insecticides that are less painful than existing ones while having roughly similar effectiveness, broad/narrow-spectrum effects, etc. Other forms of humane slaughter, such as on insect farms, would also fall under this category.

I'm a huge supporter of drug policy reform and try to advocate for it as much as I can in my personal life. Originally, I was going to post here suggesting we need a better breakdown of the particular issues that are especially ripe for policy reform (say, reforming how drug weights are calculated) and of the relative effectiveness of various interventions (lobbying, ads, lectures, etc.).

However, on reflection I think there might be good reasons not to get involved in this project.

Probably the biggest problem for both EA and drug policy reform is the perception t... (read more)

1
ChristianKleineidam
7y
I don't think the average nootropics user would come across, in a television broadcast, as having the goal of getting a legal high. It's more interesting for a journalist to tell a story about a computer programmer who takes LSD to help him with a difficult programming problem he has worked on for months without a satisfying answer than to tell a story about a computer programmer wanting to get high on LSD. The story about how nerds in Silicon Valley do everything they can to enhance their performance is more interesting than the story about a random person taking drugs. More generally, EA is also full of weird causes, as Scott Alexander describes very well in his blog post about EA Global.
1
MichaelPlant
7y
You could have written the same thing 2 years ago replacing "drug policy reform" with "artificial intelligence" and made exactly the same argument: "AI is weird, it will damage EA, imagine interviewing an AI nerd like Elon Musk on TV, etc." Except lots of people now take AI seriously; it's received lots of public money and attention and lots is getting done. This is presumably because the arguments for AI were strong. You seem to be presenting me with a Morton's Fork (a false dilemma caused by contradictory observations reaching the same conclusion): "if X is seen as weird, don't work on it. If X is not seen as weird, then it can't be neglected, so there's no point working on it either." This can't be right, because it would rule out every cause. I think the role EA fills in the world is exactly finding the important problems others are ignoring, perhaps because those problems seem too weird, and then arguing they are worth taking seriously. Notice there's something odd about saying "I've become convinced the arguments for X are very strong, but no one else will be convinced, so let's abandon cause X." If you found the arguments for X persuasive, others probably will too, and X is well worth working on. Clearly, we should avoid arguing for weird causes that would do no good. I didn't think DPR was important; now I think it's very substantial. More generally, I think concerns about reputation and backlash are overstated (#spotlight effect), but I'd be open to someone showing me evidence to the contrary.

This is, IMO, a pretty unpersuasive argument - at least if you are willing, like me, to bite the bullet that SUFFICIENTLY many small gains in utility could make up for a few large ones. I don't even find this particularly difficult to swallow. Indeed, I can explain away our feeling that somehow this shouldn't be true by appealing to our inclination (as a matter of practical life navigation) to round down sufficiently small hurts to zero.
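As a bare-bones numerical illustration of that bullet (the utility numbers are invented purely for the example):

```latex
% One large gain vs. very many tiny gains, on a made-up utility scale
\[
1 \times 100 = 100 \quad \text{(one large gain)}, \qquad
1{,}000{,}000 \times 0.001 = 1000 \quad \text{(a million tiny gains)},
\]
\[
100 < 1000 .
\]
```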

Also, I would suggest that many of the examples that seem problematic are deliberately rigged so the overt descriptio... (read more)

I'm wondering if it is technically possible to stop pyroclastic flows from volcanoes (particularly ones near population centers, like Vesuvius) by building barriers and, if so, whether it's an efficient use of resources. Not quite world-changing, but it is still a low-risk, high-impact issue, and there are US cities that are near volcanoes.

I'm sure someone has thought of this before and done some analysis.

I simply don't believe that anyone is really (when it comes down to it) a presentist or a necessitist.

I don't think anyone is willing to actually endorse making choices which eliminate the headache of an existing person at the cost of bringing an infant into the world who will be tortured extensively for all time (but no one currently existing will see it and be made sad).

More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitism to b... (read more)

1
MichaelStJules
7y
We can make necessitarianism asymmetric: only people who will necessarily exist OR would have negative utility (or less than the average/median utility, etc.) count. Some prioritarian views, which also introduce some kind of asymmetry between good and bad, might also work.
1
MichaelPlant
7y
I'm probably a necessitarian, and many (most?) people implicitly hold person-affecting views. However, that's beside the point. I'm neither defending nor evaluating person-affecting views, or indeed any positions in population axiology. As I mentioned, and as is widely accepted by philosophers, all the views in population ethics have weird outcomes. FWIW, and this is unrelated to anything said above, nothing about person-affecting views needs to rely on personal identity. The entity of concern can just be something that is able to feel happiness or unhappiness. This is typically the same line total utilitarians take. What person-affectors and totalists disagree about is whether (for one reason or another) creating new entities is good. In fact, all the problems you've raised for person-affecting views also arise for totalists. To see this, let's imagine a scenario where a mad scientist is creating a brain inside a body, where the body is being shocked with electricity. Suppose he grows it to a certain size, takes bits out, shrinks it, grows it again, etc. Now the totalist needs to take a stance on how much harm the scientist is doing and draw a line somewhere. The totalist and the person-affector can draw the line in the same place, wherever that is. Whatever puzzles qualia pose for person-affecting views also apply to totalism (at least, the part of morality concerned with subjective experience).

Before I respond, I should admit my bias here. I have a pet peeve about posts about mental illness like this. When I suffered from depression and my friend killed himself over it, there was nothing that pissed me off more than people passing on the same useless facts and advice to get help (as if that magically made it better) with the self-congratulatory attitude that they had done something about the problem and could move on. So what follows may be a result of unjust irritation/anger, but I do really believe that it causes harm when we pass on truisms like that a... (read more)

5
Julia_Wise
7y
I did question whether this was on-topic enough to be a good fit for this forum. (I don't think awareness about every health issue that affects EAs would be a good use of the space, even if it affects a higher proportion than these problems.) I do think these problems can be unusually and spectacularly destructive when unchecked, and often even when much effort has been made. I also think most people don't have a good concept of how to recognize these conditions or even what to google; I certainly wouldn't have before getting training as a social worker. I definitely don't want us to congratulate ourselves for having dealt with these problems, because there have been cases when people in this community have needed help here and not gotten enough. I wrote this in the hope that it will tip the balance in some future crisis toward people having the knowledge they need, not so that we can check this off our list as a solved problem. These are really hard problems to deal with, both for people who have them and for people trying to help, and that's exactly why I wanted a resource available. I'm so sorry about your friend. This kind of information definitely isn't fail-safe, but I think it's the best we have.

This feels like nitpicking that gives the impression of undermining Singer's original claim when in reality the figures support it. I have no reason to believe Singer was claiming that, of all possible charitable donations, trachoma is the most effective - merely that he chose it to give a stunningly large difference in cost-effectiveness between charitable donations used for comparable ends (both are about blindness, so there are no hard comparisons across kinds of suffering/disability).

I agree that within the EA community and when presenting EA analysis of cost-effectiveness ... (read more)

If we're ignoring getting the numbers right and instead focusing on the emotional impact, we have no claim to the term "effective". This sort of reasoning is why epistemics around do-gooding are so bad in the first place.

I think there is truth in what you said. But I also have disagreements:

"The only way to convince them is to ignore getting the numbers perfectly right and focus on the emotional impact"

That's a dangerous line of reasoning. If we can't make a point with honest numbers, we shouldn't make the point at all. We might fail to notice when we are wrong when we use bogus numbers to prove whatever opinion we already hold.

What is more, many people who become EAs after hearing such TED talks already think in numbers. They continue in believing the same ... (read more)

As for the issue of acquiring power/money/influence and then using it to do good, it is important to be precise here and distinguish several questions:

1) Would it be a good thing to amass power/wealth/etc.. (perhaps deceptively) and then use those to do good?

2) Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do X" where X is a good thing.

2') Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do good".

3) Is it a good idea to support (or not object to) others ... (read more)

That is good to know and I understand the motivation to keep the analysis simple.

As far as the definition goes, that is a reasonable definition of the term (our notion of catastrophe doesn't include an accumulation of many small utility losses), so it is a good criterion for classifying the charity's objective. I only meant to comment on QALYs as a means to measure effectiveness.


WTF is with the votedown. I nicely and briefly suggested that another metric might be more compelling (though the author's point about mass appeal is a convincing rebuttal). Did the comment come off as simply bitching rather than a suggestion/observation?

1
Denkenberger
7y
I did not do the vote down, but I did think that calling lives saved a mostly useless metric was a little harsh. :-)

The idea that EA charities should somehow court epistemic virtue among their donors seems to me to be over-asking in a way that will drastically reduce their effectiveness.

No human behaves like some kind of Spock stereotype, making all their decisions merely by weighing the evidence. We all respond to cheerleading and upbeat pronouncements and make spontaneous choices based on what we happen to see first. We are all more likely to give when asked in ways which make us feel bad/guilty for saying no or when we forget that we are even doing it (annual credit... (read more)

It seems to me that a great deal of this supposed 'problem' is simply the unsurprising and totally human response to feeling that an organization you have invested in (monetarily, emotionally or temporally) is under attack and that the good work it does is in danger of being undermined. EVERYONE on facebook engages in crazy justificatory dances when their people are threatened.

It's a nice ideal that we should all nod and say 'yes that's a valid criticism' when our baby is attacked but it's not going to happen. There is nothing we can do about this aspect... (read more)

Lives saved is a very, very weird and mostly useless metric. At the very least, try to give an estimate in QALYs (quality-adjusted life years), since very few people actually value saving a life per se (e.g. stopping someone who is about to die of cancer from dying a few minutes earlier).

Given that many non-deaths from food scarcity are probably pretty damn unpleasant, this would probably be a more compelling figure.
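To see why the two metrics can come apart, here is a toy comparison with entirely invented numbers (not figures from the paper under discussion):

```latex
% Hypothetical intervention A: averts 10 deaths, each with 40 years of
% remaining life at full quality (weight 1.0).
\[
\text{QALYs}_A = 10 \times 40 \times 1.0 = 400
\]
% Hypothetical intervention B: averts no deaths, but spares 200 people from
% living 20 years at quality weight 0.9 rather than 1.0.
\[
\text{QALYs}_B = 200 \times 20 \times (1.0 - 0.9) = 400
\]
% Counted in "lives saved", A scores 10 and B scores 0; counted in QALYs,
% the two made-up interventions are equally valuable.
```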

2
Denkenberger
7y
I agree that QALYs are more robust, and I guess it was an earlier version of the paper where we noted that using QALYs would likely produce similar comparison of cost-effectiveness to global poverty interventions. But we wanted to keep this analysis simple, and most people (though perhaps not most EAs) think in terms of saving lives. Also, two definitions of a global catastrophic risk are based on number of lives lost (I believe 10 million according to the book Global Catastrophic Risks and 10% of human population according to Open Philanthropy).

This doesn't actually provide anything like a framework to evaluate Cause X candidates. Indeed, I would argue it doesn't even provide a decent guide to finding plausible Cause X candidates.

Only the first methodology (expanding the moral sphere) identifies a type of moral claim that we have historically looked back on and found to be compelling. The second and third methods just list typical ways people in the EA community claim to have found Cause X. Moreover, there is good reason for thinking that successfully finding something that qualifies as Cause X will require coming up with something that isn't an obvious candidate.

I think this post is confused on a number of levels.

First, as far as ideal behavior is concerned integrity isn't a relevant concept. The ideal utilitarian agent will simply always behave in the manner that optimizes expected future utility factoring in the effect that breaking one's word or other actions will have on the perceptions (and thus future actions) of other people.

Now the post rightly notes that as a limited human agent we aren't truly able to engage in this kind of analysis. Both because of our computational limitations and our inability to pe... (read more)

2
Robert_Wiblin
7y
"WE WILL BE TRUSTED TO THE EXTENT WE RESPECT THE STANDARD SOCIETAL NOTIONS OF INTEGRITY AND TRUST" I think there is a lot to this, but I feel it can be subsumed into Paul's rule of thumb: * You should follow a standard societal notion of what is decent behaviour (unless you say ahead of time that you won't in this case) if you want people to have always thought that you are the kind of person who does that. Because following standard social rules that everyone assumes to exist is an important part of being able to coordinate with others without very high communication and agreement overheads, you want to at least meet that standard (including following some norms you might have reservations about). Of course this doesn't preclude you meeting a higher standard if having a reputation for going above and beyond would be useful to you (as Paul argues it often is for most of us).
5
Paul_Christiano
7y
I apologize in advance if I'm a bit snarky. This view is not broadly accepted amongst the EA community. At the very least, this view is self-defeating in the following sense: such an "ideal utilitarian" should not try to convince other people to be an ideal utilitarian, and should attempt to become a non-ideal utilitarian ASAP (see e.g. Parfit's hitchhiker for the standard counterexample, though obviously there are more realistic cases). I argued for my conclusion. You may not buy the arguments, and indeed they aren't totally tight, but calling it "mere assertion" seems silly. This is neither true, nor what I said. This is what it looks like when something is asserted without argument. I do agree roughly with this sentiment, but only if it is interpreted sufficiently broadly that it is consistent with my post. I tried to spell out pretty explicitly what I recommend in the post, right at the beginning ("when I imagine picking an action, I pretend that picking it causes everyone to know that I am the kind of person who picks that option"), and it clearly doesn't recommend anything like this. You seem to use "being straightforward" in a different way than I do. Saying "I'll be there for you whatever happens" is straightforward if you actually mean the thing that people will understand you as meaning.