The "go extinct" condition is a bit fuzzy. It seems like it would be better to express what you want to change your mind about as a ratio (I forget the term for this): P(go extinct | AGI)/P(go extinct).
I know you've written the question in terms of going extinct because of AGI, but I worry this leads to ways of shifting that value upward which are relatively trivial and uninformative about AI.
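To illustrate why the ratio form is more informative than the raw conditional, here's a toy calculation. All the numbers are made up purely for illustration, not estimates I'm defending:

```python
# Toy illustration (made-up numbers) of why the ratio
# P(extinct | AGI) / P(extinct) isolates AGI's contribution to the risk.
p_agi = 0.8                # hypothetical P(AGI developed by 2070)
p_ext_given_agi = 0.10     # hypothetical P(extinction | AGI)
p_ext_given_no_agi = 0.02  # hypothetical P(extinction | no AGI)

# Total probability of extinction, by the law of total probability:
p_ext = p_agi * p_ext_given_agi + (1 - p_agi) * p_ext_given_no_agi

# The ratio exceeds 1 exactly when AGI raises extinction risk
# relative to the overall baseline.
risk_ratio = p_ext_given_agi / p_ext
print(f"P(extinct) = {p_ext:.3f}")
print(f"P(extinct | AGI) / P(extinct) = {risk_ratio:.2f}")
```

The point is that arguments which merely inflate P(extinct) across the board (trivial, not-really-about-AI arguments) leave this ratio unchanged, whereas the raw conditional P(go extinct | AGI) goes up.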
For instance, consider a line of argument:
AGI is quite likely (probably by your own lights) to be developed by 2070.
If AGI is developed either it will suffer from serious
I think all the problems with involving EA in causes that require political change (increase government funding for mental health... even all of Gates' billions wouldn't go very far if he tried to directly fund a substantial slice of first-world mental healthcare expenditures) apply to changing government funding, and many of the issues are even harder because they derive from hard-to-shift societal attitudes. These make even direct funding of many types of research difficult.
For instance, a big problem (imo) with the way depression drugs are researched ...
I'm not sure I completely followed #1, but maybe this will answer what you are getting at.
I agree that the following argument is valid:
Either the time discounting rate is 0 or it is morally preferable to use your money/resources to produce utility now than to freeze yourself and produce utility later.
However, I still don't think you can make the argument that I can't think that time discounting is irrelevant to what I selfishly prefer while believing that you shouldn't apply discounting when evaluating what is morally preferable. And I think this substa...
I ultimately agree with you (pure time discounting is wrong…even if our increasing wealth makes it a useful practical assumption) but I don't think your argument is quite as strong as you think (nor is Cowen's argument very good).
In particular, I'd distinguish my selfish emotional desires regarding my future mental states from my ultimate judgements about the goodness or badness of particular world states. But I think we can show these have to be distinct notions[1]. Someone who was defending pure time discounting could just say: well while, ...
Could you provide some evidence that this rate of growth is unusual in history? I mean, it wouldn't shock me if we looked back at the last 5000 years and saw that most societies' real production grew at similar rates during times of peace/tranquility, but that this resulted in small absolute growth that was regularly wiped out by invasion, plague or other calamity. In which case the question becomes whether or not you believe that our technological accomplishments make us more resistant to such calamities (another discussion entirely).
Moreover, even i...
I thought the archetypal example was where everyone had a mild preference to be with other members of their race (even if just because of somewhat more shared culture) and didn't personally really care if they weren't in a mixed group. But I take your point to be that, at least in the gender case, we do have the preference not to be entirely divided by gender.
So yes, I agree that if the effect leads to too much sorting then it could be bad but it seems like a tough empirical question whether we are at a point where the utility gains from more sorting are more or less than the losses.
Could you say a bit more about what you want this flag to symbolize/communicate? Flags for nations need to symbolize what holds the members of that country together and unifies them but, when it comes to an idea, it seems the flag is more a matter of what you want to communicate to others about the virtues of your idea. I mean I'm having trouble imagining that a utilitarian flag could do $1000 worth of good unless it does some important PR work for utilitarianism.
If it was me I'd be trying to pick a flag to communicate the idea that util...
Re your first point: yup, they won't try to recruit others to that belief, but so what? That's already a bullet any utilitarian has to bite thanks to examples like the aliens who will torture the world if anyone believes utilitarianism is true or tries to act as if it is. There is absolutely nothing self-defeating here.
Indeed, if we define utilitarianism as simply the belief that one's preference relation on possible worlds is dictated by the total utility in them, then it follows by definition that the best acts an agent can take are just the ones which maximize uti...
Yes, and reading this again now I think I was way too harsh. I should have been more positive about what was obviously an earnest concern and desire to help, even if I don't think it's going to work out. A better response would have been to suggest other ideas to help, beyond reforming how medical practice works so that mental suffering isn't treated as less important than being physically debilitated (docs will agree to risky procedures to avoid physical loss of function but won't with mental illness...likely because the family doesn't see the suffering from the inside but does see the loss in a death, so is liable to sue/complain if things go bad).
I apparently wasn't clear enough that I absolutely agree with and support things like icebreakers etc. But we shouldn't expect them to increase female representation, or judge their effectiveness based on how much they do. Absolutely do it, and do it for everyone who will benefit, but just don't be surprised if, even when we do it everywhere, it doesn't do much to affect the gender balance in EA.
I think if we just do it because it makes people more comfortable, without the gender overlay, not only will it be more effective and more widely adopted, but it will avoid the very real ...
No I didn't mean to suggest that. But I did mean to suggest that it's not at all obvious that this kind of Schelling style amplification of preferences is something that would be good to do something about. The archetypal example of Schelling style clustering is a net utility win even if a small one.
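Schelling's point can be made concrete with a toy simulation (all parameters here are invented for illustration): agents on a line with only a mild preference, wanting at least about a third of their neighbors to be like them, will relocate, and the resulting arrangement typically ends up far more clustered than the preference alone would suggest:

```python
import random

def schelling_1d(n_each=45, n_empty=10, threshold=0.34, steps=10000, seed=0):
    """Minimal 1-D Schelling model: an agent is unhappy (and may move to a
    random vacancy) only if fewer than `threshold` of its occupied neighbors
    share its type. With threshold=0.34 and two neighbors, that means an
    agent moves only when completely surrounded by the other type."""
    rng = random.Random(seed)
    cells = [1] * n_each + [2] * n_each + [0] * n_empty  # 0 = vacant
    rng.shuffle(cells)

    def unhappy(i):
        t = cells[i]
        if t == 0:
            return False
        nbrs = [cells[j] for j in (i - 1, i + 1)
                if 0 <= j < len(cells) and cells[j] != 0]
        if not nbrs:
            return False
        return sum(1 for x in nbrs if x == t) / len(nbrs) < threshold

    for _ in range(steps):
        movers = [i for i in range(len(cells)) if unhappy(i)]
        if not movers:
            break  # everyone is content; clustering has stabilized
        i = rng.choice(movers)
        j = rng.choice([k for k in range(len(cells)) if cells[k] == 0])
        cells[j], cells[i] = cells[i], 0
    return cells

def segregation(cells):
    """Fraction of adjacent agent pairs (ignoring vacancies) of the same type."""
    pairs = [(a, b) for a, b in zip(cells, cells[1:]) if a and b]
    return sum(a == b for a, b in pairs) / len(pairs)

final = schelling_1d()
print(f"same-type neighbor fraction: {segregation(final):.2f}")
```

This is the sense in which the archetypal outcome can still be a net utility win: everyone ends up satisfied by their own (mild) lights, even though the aggregate pattern looks strongly sorted.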
I fear that we need to do geoengineering right away or we will be locked into never undoing the warming. The problem is that a few countries like Russia massively benefit from warming, and once they see that warming and take advantage of the newly opened land, they will see any attempt to artificially lower temps as an attack they will respond to with force. And they have enough fossil fuels to maintain the warm temps even if everyone else stops carbon emissions (which they can easily scuttle).
IMO this concern is more persuasive than the risk of trying Geoenginee...
I ultimately agree with you but I think you miss the best argument for the other side. I think it goes like this:
The argument for this point is buttressed by the very fact that we aren't doing anything about warming right now.
The parent post already responded to a number of these points but let me give a detailed reply.
First, the evidence you cite doesn't actually contradict the point being made. Just because women rate EA as somewhat less welcoming doesn't mean that this is the reason they return at a lower rate. Indeed, the alternate hypothesis that says it's the same reason women are less likely to be attracted to EA in the first place seems quite plausible.
As far as the quotes we can ignore the people simply agreeing that something should be done to increas...
I definitely prefer being in gender-balanced settings to being the only woman in a group of men, so I agree that's a preference. You seem to be suggesting that if it's a preference, it's not the cause of our homogeneity, but I think the preference to be near similar people is a good explanation for why EA isn't very diverse. (cf Thomas Schelling's work on informal segregation)
Also, your concern about some kind of disaster caused by wireheading addiction and resulting deaths and damage is pretty absurd.
Yes, people are more likely to do drugs when they are more available, but even if the government can't keep the devices that enable wireheading off the legal market, it will still require a greater effort to put together your wireheading setup than it currently does to drive to the right part of the nearest city (discoverable via Google) and purchase some heroin. Even if it did become very easy to access, it's still no...
You make a lot of claims here that seem unsupported and based on nothing but vague analogy with existing primitive means of altering our brain chemistry. For instance, a key claim that most of your consequences seem to depend on is this: "It is great to be in a good working mood, where you are in the flow and every task is easy, but if one feels “too good”, one will be able only to perform “trainspotting”, that is mindless staring at objects."
Why should this be true at all? The reason heroin abusers aren't very productive (and, imo, heroin ...
I'm disappointed that the link about which invertebrates feel pain doesn't go into more detail on the potential distinction between merely learning from damage signals and the actual qualitative experience of pain. It is relatively easy to build a simple robot or write a software program that demonstrates reinforcement learning in the face of some kind of damage but we generally don't believe such programs truly have a qualitative experience of pain. Moreover, the fact that some stimuli are both unpleasant yet rewarding (e.g. encourage repetition) indicates these notions come apart.
While this isn't an answer I suspect that if you are interested in insect welfare one first needs a philosophical/scientific program to get a grip on what that entails.
First, unlike other kinds of animal suffering it seems doubtful there are any interventions for insects that will substantially change their quality of life without also making a big difference in the total population. Thus, unlike large animals, where one can find common ground between various consequentialist moral views it seems quite likely that whether a particular intervention is go...
I'm a huge supporter of drug policy reform and try to advocate it as much as I can in my personal life. Originally, I was going to post here suggesting we need a better breakdown of particular issues which are particularly ripe for policy reform (say reforming how drug weights are calculated) and the relative effectiveness of various interventions (lobbying, ads, lectures etc..).
However, on reflection I think there might be good reasons not to get involved in this project.
Probably the biggest problem for both EA and drug policy reform is the perception t...
This is, IMO, a pretty unpersuasive argument. At least if you are willing, like me, to bite the bullet that SUFFICIENTLY many small gains in utility could make up for a few large losses. I don't even find this particularly difficult to swallow. Indeed, I can explain away our feeling that somehow this shouldn't be true by appealing to our inclination (as a matter of practical life navigation) to round down sufficiently small hurts to zero.
Also, I would suggest that many of the examples that seem problematic are deliberately rigged so the overt descriptio...
I'm wondering if it is technically possible to stop pyroclastic flows from volcanoes (particularly ones near population centers like Vesuvius) by building barriers and, if so, whether it's an efficient use of resources. Not quite world-changing, but it is still a low-risk, high-impact issue, and there are US cities that are near volcanoes.
I'm sure someone has thought of this before and done some analysis.
I simply don't believe that anyone is really (when it comes down to it) a presentist or a necessitist.
More generally, these views have more basic problems than anything considered here. Consider, for instance, the problem of personal identity. For either presentism or necessitism to b...
Before anything else, I should admit my bias here. I have a pet peeve about posts about mental illness like this. When I suffered from depression and my friend killed himself over it, there was nothing that pissed me off more than people passing on the same useless facts and advice to get help (as if that magically made it better) with the self-congratulatory attitude that they had done something about the problem and could move on. So what follows may be a result of unjust irritation/anger, but I do really believe that it causes harm when we pass on truisms like that a...
This feels like nitpicking that gives the impression of undermining Singer's original claim when in reality the figures support it. I have no reason to believe Singer was claiming that, of all possible charitable donations, trachoma is the most effective; he was merely giving the most stunningly large difference in cost-effectiveness between charitable donations used for comparable ends (both about blindness, so no hard comparisons across kinds of suffering/disability).
I agree that within the EA community and when presenting EA analysis of cost-effectiveness ...
If we're ignoring getting the numbers right and instead focusing on the emotional impact, we have no claim to the term "effective". This sort of reasoning is why epistemics around do-gooding are so bad in the first place.
I think there is truth in what you said. But I also have disagreements:
"The only way to convince them is to ignore getting the numbers perfectly right and focus on the emotional impact"
That's a dangerous line of reasoning. If we can't make a point with honest numbers, we shouldn't make the point at all. We might fail to notice when we are wrong when we use bogus numbers to prove whatever opinion we already hold.
What is more, many people who become EAs after hearing such TED talks already think in numbers. They continue in believing the same ...
As for the issue of acquiring power/money/influence and then using it to do good it is important to be precise here and distinguish several questions:
1) Would it be a good thing to amass power/wealth/etc.. (perhaps deceptively) and then use those to do good?
2) Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do X" where X is a good thing.
2') Is it a good thing to PLAN to amass power/wealth/etc.. with the intention of "using it to do good".
3) Is it a good idea to support (or not object) to others ...
That is good to know and I understand the motivation to keep the analysis simple.
As far as the definition goes, that is a reasonable definition of the term (our notion of catastrophe doesn't include an accumulation of many small utility losses), so it is a good criterion for classifying the charity objective. I only meant to comment on QALYs as a means to measure effectiveness.
WTF is with the downvote? I nicely and briefly suggested that another metric might be more compelling (though the author's point about mass appeal is a convincing rebuttal). Did the comment come off as simply bitching rather than a suggestion/observation?
The idea that EA charities should somehow court epistemic virtue among their donors seems to me to be over-asking in a way that will drastically reduce their effectiveness.
No human behaves like some kind of Spock stereotype, making all their decisions merely by weighing the evidence. We all respond to cheerleading and upbeat pronouncements and make spontaneous choices based on what we happen to see first. We are all more likely to give when asked in ways which make us feel bad/guilty for saying no, or when we forget that we are even doing it (annual credit...
It seems to me that a great deal of this supposed 'problem' is simply the unsurprising and totally human response to feeling that an organization you have invested in (monetarily, emotionally or temporally) is under attack and that the good work it does is in danger of being undermined. EVERYONE on Facebook engages in crazy justificatory dances when their people are threatened.
It's a nice ideal that we should all nod and say 'yes that's a valid criticism' when our baby is attacked but it's not going to happen. There is nothing we can do about this aspect...
Lives saved is a very, very weird and mostly useless metric. At the very least, try to give an estimate in QALYs (quality-adjusted life years), since very few people actually value saving a life per se (e.g. stopping someone who is about to die of cancer from dying a few minutes earlier).
Given that many non-deaths from food scarcity are probably pretty damn unpleasant, this would probably be a more compelling figure.
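To make the contrast concrete, here's a toy comparison. The intervention names and every number are entirely hypothetical, chosen only to show how the two metrics can rank things differently:

```python
# Hypothetical numbers, purely for illustration: "lives saved" vs QALYs.
# QALYs = deaths averted * years of life gained * quality weight of those years.
interventions = {
    # name: (deaths averted, avg. years gained per death averted, quality weight)
    "late-stage care":   (100, 0.1, 0.5),   # briefly delays imminent deaths
    "famine prevention": (20, 40.0, 0.9),   # saves young lives in decent health
}

for name, (deaths, years, quality) in interventions.items():
    qalys = deaths * years * quality
    print(f"{name}: {deaths} lives saved, {qalys:.0f} QALYs")
```

On "lives saved" the first hypothetical intervention looks five times better; on QALYs the second dominates, which matches the intuition that delaying a death by a few low-quality weeks isn't what people actually value.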
This doesn't actually provide anything like a framework to evaluate Cause X candidates. Indeed, I would argue it doesn't even provide a decent guide to finding plausible Cause X candidates.
Only the first methodology (expanding the moral sphere) identifies a type of moral claim that we have historically looked back on and found to be compelling. The second and third methods just list typical ways people in the EA community claim to have found Cause X. Moreover, there is good reason for thinking that successfully finding something that qualifies as Cause X will require coming up with something that isn't an obvious candidate.
I think this post is confused on a number of levels.
First, as far as ideal behavior is concerned integrity isn't a relevant concept. The ideal utilitarian agent will simply always behave in the manner that optimizes expected future utility factoring in the effect that breaking one's word or other actions will have on the perceptions (and thus future actions) of other people.
Now the post rightly notes that as a limited human agent we aren't truly able to engage in this kind of analysis. Both because of our computational limitations and our inability to pe...
I feel like there is some definitional slipping going on when you suggest that a painful experience is less bad when you are also experiencing a pleasurable one at the same time. Rather, it seems to me the right way to describe this situation is that the experience is simply not as painful as it would be otherwise.
To drive this intuition home, consider S&M play. It's not that the pain of being whipped is just as bad...it literally feels different from being whipped in another context, rather than the context simply making the same pain less bad.
Better yet, notice the way opiates w...