All of Lila's Comments + Replies

It looks like there might be confounders in the time series because there is a negative "effect" on life satisfaction prior to becoming disabled or unemployed. (With divorce and widowhood it's plausible that some people would see it coming years in advance.)

1
alintz
5y
Good point. I think it does make some sense for unemployment, though, as at least some proportion of people will foresee becoming unemployed, either because they think they'll get fired or because they are dissatisfied with their jobs to the point of wanting to quit. The data on being disabled is a bit more troubling, but (without having read the report) one reason might be that this was official registration (or even official diagnosis) of disability; for disabilities with slow onset (arthritis, some disabling diseases like MS, etc.), there would perhaps be a period of discomfort and pain before becoming officially disabled. Keep in mind that only some proportion of subjects need to have slow-onset conditions to skew the average toward a negative dip before diagnosis (governmental or medical).
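For readers who want to see concretely what the "dip before the event" pattern discussed above looks like in an analysis, below is a minimal event-study sketch. Everything in it - the data, the variable names, and the effect sizes - is made up for illustration; it is not the analysis from the report under discussion. The signature of anticipation or confounding is that the coefficients on the lead dummies (the years before the event) come out negative.

```python
# Hypothetical illustration: life satisfaction around an event (e.g. unemployment),
# simulated so the decline starts *before* the event. All numbers are assumptions.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_people, n_years = 500, 10

df = pd.DataFrame({
    "person": np.repeat(np.arange(n_people), n_years),
    "year": np.tile(np.arange(n_years), n_people),
})
event_year = rng.integers(3, 8, size=n_people)          # year each person becomes unemployed
df["rel_time"] = df["year"] - event_year[df["person"]]

# Simulated satisfaction: a drop at the event plus a smaller dip in the two years before it
df["satisfaction"] = (
    7.0
    - 0.5 * (df["rel_time"] >= 0)
    - 0.3 * (df["rel_time"] == -1)
    - 0.15 * (df["rel_time"] == -2)
    + rng.normal(0, 1, len(df))
)

# Event-study regression: a dummy for each year relative to the event, with -3 as the baseline
df["rel_time"] = df["rel_time"].clip(-3, 3)
model = smf.ols("satisfaction ~ C(rel_time, Treatment(reference=-3))", data=df).fit()
print(model.params.filter(like="rel_time"))  # negative coefficients at -2 and -1 = pre-event dip
```

If the lead coefficients are reliably below zero, the post-event "effect" cannot be read as purely causal: either people anticipate the event or something else is driving both the event and the earlier decline.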

Academics will not find a new journal run by non-academics credible, much less prestigious. No one would be able to put this journal on an academic CV. So there's really no benefit to "publishing" relative to posting publicly and letting people vote and comment.

1
kbog
6y
I didn't say it would be run by non-academics; that will depend on who runs it! And there are many ways to run a review process besides public votes and comments - you can always have a more closed/formal/blind process even if you don't publish it.

Metformin isn't a supplement though. It's unlikely it would ever get approved as a supplement or OTC, especially given that it has serious side effects.

0
turchin
6y
That is why I think we should divide the discussion into two lines: one is the potential impact of simple interventions in life extension, of which there are many, and the other is whether metformin could be such a simple intervention. In the case of metformin, there is a tendency to prescribe it to a larger share of the population as a first-line drug for type 2 diabetes, but I think its safety should be personalized with genetic tests and bloodwork for vitamin deficiencies. Around 30 million people in the US, or 10 percent of the population, already have type 2 diabetes (https://www.healthline.com/health/type-2-diabetes/statistics), and this share of the population is eligible for metformin prescriptions. This means that we could get large life-expectancy benefits by replacing prescription drugs not associated with longevity with longevity-associated drugs for the same condition - metformin for diabetes, losartan for hypertension, aspirin for blood thinning, etc.

Really interesting. I appreciate you sharing this, and your attitude toward it. Good luck with your career in philosophy - epistemic honesty will take you far.

You might consider cross-posting this on a site like Medium to reach a larger audience.

1
MichaelPlant
6y
Interesting thought putting it on Medium. Someone put it on Hacker News here, where people were, um, not terribly nice about it, so I had some reservations about that.

It's not either/or. It's likely not to be a single disease - would probably be more accurate to call it a syndrome.

I'm not sure how the beliefs in Table 3 would lead to positive social change. Mostly just seems like an increase in some vague theism, along with acceptance/complacency/indifference/nihilism. The former is epistemically shaky, and the latter doesn't seem like an engine for social change.

You might as well randomly go through the list of multimillionaires/billionaires and cold-call them. Maybe not the worst idea, but there's nothing in particular to suggest this guy would be special.

3
Milan_Griffes
6y
That seems like not a bad idea, though probably not very tractable. A couple of things stand out here as special:
* Missler suddenly came into a windfall, so he's probably under a house-money effect.
* He's very young, so he probably doesn't yet have a calcified theory about how to do good.
* He's easy to contact (you can just shoot him a DM on Facebook), which isn't true for most ultra-high-net-worth individuals.

Technology to do something like this is already being developed, but it's not nanotechnology: https://www.nature.com/articles/nmeth.3151

Nanotechnology is rarely the most practical way to probe very small things. People have been able to infer molecular structures since the 19th century. Modern molecular biology/biochemistry makes use of electron microscopy, fluorescence microscopy, and sequencing-based assays, among other techniques.

0
Daniel_Eth
6y
Nanotechnology is technology that has parts operating in the range of 1 nm to 100 nm, so this technology actually is nanotechnology - as is much of the rest of biotechnology. You're right that non-biotech-based nanotechnology (what people typically think of as nanotechnology) hasn't been used much - that's largely because it's a nascent area. I expect that to change over the coming decades as the technology improves. It might not, though, as biotech-based nanotechnology might stay in the lead.

What do you mean by nanoscale neural probes? What are the questions that these probes would answer?

0
Daniel_Eth
6y
Broadly speaking, nanoparticles (or nanorobots, depending on how complicated they are) that scan the brain from the inside, in vivo. The sort of capability I'm imagining is the ability to monitor every neuron in large neural circuits simultaneously, each for many different chemical signals (such as certain neurotransmitters). Of course, since this technology doesn't exist yet, the specifics are necessarily uncertain - these probes might include CMOS circuitry, they might be based on DNA origami, or they might be unlike any technology that currently exists. Such probes would allow for building much more accurate maps of brain activity.

Modeling the risk of psychedelics as nonexistent seems like a very selective reading of Carbonaro 2016:

"Eleven percent put self or others at risk of physical harm; factors increasing the likelihood of risk included estimated dose, duration and difficulty of the experience, and absence of physical comfort and social support. Of the respondents, 2.6% behaved in a physically aggressive or violent manner and 2.7% received medical help. Of those whose experience occurred >1 year before, 7.6% sought treatment for enduring psychological symptoms. Three cases appeared associated with onset of enduring psychotic symptoms and three cases with attempted suicide."

0
enkin
6y
The sexual violence I've endured has had disastrous effects on my health and wellbeing. I have treatment resistant disabling PTSD and depression, have not been able to hold a job due to symptoms, have not been able to have anything close to a healthy sex life, and in spite of availing myself of all the help I can afford (and putting myself in debt getting help I could not afford) am still not in any way functional or healthy. Sexual abuse and rape, both that which occurred when I was very young and when I was an adult, have driven me to suicidality many times, and being suicidal for extended periods of time, especially without recourse to fix it (either by dying, which is harder than it looks when you have very little resources, or by getting better), is far worse than an immediate death.

You reveal that you are highly motivated to argue that exterminating humanity is not in the interest of an AI, regardless of whether that statement is true. So your arguments will present weak evidence at best, given your clear bias.

0
turchin
6y
There are types of arguments which don't depend on my motivation, like "deals" and "questions". For example, if I say "I will sell you 10 paperclips if you will not kill me", then in that case my motivation is evidence that I will stick to my side of the deal.
1
Liam_Donovan
6y
Wouldn't this be an issue with or without an explanation? It seems like an AI could reasonably infer, from other actions that humans in general (or Alexey in particular) take, that they are highly motivated to argue against being exterminated. IDK if I'm missing something obvious -- I don't know much about AI safety.
0
turchin
6y
Yes, I expect that future AI will read the text. Not sure what you mean about "tips your hand", as English is not my first language.

Neither of those statements are upsetting to me.

It's often useful to be able to imagine what will be upsetting to other people and why, even if it's not upsetting to you. Maybe you'll decide that it's worth hurting people, but at least make your decisions with an accurate model of the world. (By the way, "because they're oversensitive" doesn't count as an explanation.)

So let's try to think about why someone might be upset if you told them that they're more likely to be a rapist because of their race. I can think of a few reasons: They feel af... (read more)

-1
Zeke_Sherman
6y
It's nice to imagine things. But I'll wait for actual EAs to tell me about what does or doesn't upset them before drawing conclusions about what they think.

I hope you're just using this as a demonstration and not seriously suggesting that we start racially profiling people in EA.

This unpleasant tangent is a great example of why applying aggregate statistics to actual people isn't a good strategy. It should be clear why people find the following statements upsetting:

Statistically, there are X rapists in the EA community.

Statistically, as a man/black person/Mexican/non-college grad/Muslim, there is X probability you're a rapist.

Let's please not go down this path.

-1
Zeke_Sherman
6y
I think it's pretty odd of you to try to tell me about what upsets EAs or how we feel, given that you have already left the movement. To speak as if you have some kind of personal stake or connection to this matter is rather dishonest. Racial profiling is something that is conducted by law enforcement and criminal investigation, and EA does neither of those things. I would be much more bothered if EA started trying to hunt for criminals within its ranks than I would be from the mere fact that the manner in which we did this involved racial profiling. Neither of those statements are upsetting to me.

I would far prefer being raped over a 1% chance of dying immediately. I think the tradeoff would be something like 100,000 to 1.

0
enkin
6y
I would far prefer dying immediately to being raped again.

I don't think most of these will convince people to share your views, often because they come from different moral perspectives. They seem too negative or directly contradictory for people to change their minds - particularly the ones on social justice. However, it might help people understand your personal choices. What have been your results?

0
Aaron Gertler
6y
These are all composites -- I'm not giving these exact speeches, but I might borrow different examples at different times to use in conversations. When used in context, with the specific people I think are likely to respond best, bits and pieces of these frames have been fairly effective; something like 20 people I've introduced to this have gone on to donate some amount of money to an EA-approved charity. The idea of using "different moral perspectives" is specifically to convince as wide a range of people as possible. Too many common EA arguments assume that everyone is consequentialist, deep down. But you do have to match the perspective to the person -- otherwise, the conversation can certainly backfire!

I'm a 4th-year PhD student in bioinformatics. I've previously considered doing something similar, though I focused more on stem cell technology, which is most relevant to my current research. However, I'd definitely be interested in discussing further!

0
scottweathers
6y
Excellent to hear! Please get in touch! :)

I agree with this for the most part, but let's not exclude people from EA who, like me, are low-IQ and high-libido.

It seems that you are vastly underestimating the intensity of psychological trauma that comes with rape.

Even if this is descriptively true (and I think it varies a lot - some people aren't bothered long-term), there's no reason that this is a desirable outcome. Everything is mediated through attitudes.

1
Kathy_Forth
6y
Some people have blue eyes and other people have brown eyes. A lot of mind-related traits vary, from intelligence to personality to capacity to pay attention. Not everybody even has the usual two sex chromosomes (see XXY). If not everyone experiences sexual trauma, let's not jump to the conclusion that it's due to culture. There are a multitude of possible reasons. For just one example: they might have different genes.

I definitely have the capacity to experience trauma, and I'm pretty sure that's genetic, so it's not fair to me for people to expect me not to experience it. In fact, I think it would be more traumatic for me to experience my natural instinct for trauma and then be told I shouldn't experience trauma. Telling me I should have experienced less trauma would hurt me too. If someone doesn't experience trauma, don't assume it's genes, either. It might not be genes or culture. To assume it must be one of these is a false dichotomy. There could be dozens of different possible reasons why that might happen, and we just don't know.

Point: just because some people didn't experience trauma when they could have does not mean we should expect everyone else to stop experiencing trauma. First of all, we don't even know why some people don't experience it. This is totally unfair to the victim because victims do not actually know how to stop experiencing trauma. Second of all, expecting people to reduce their experience of trauma puts the responsibility onto the victim.

Sex offenders might be confused by this sort of thinking. They might tell themselves "the victim shouldn't feel trauma" and then feel good about going off to commit a whole bunch of sex offences, blaming the victims for all the negative consequences. This is how sex offenders think. They create justifications to commit crimes. These are called cognitive distortions. By arguing in favor of an attitude that can be used as a justification to commit sex offences, you are making us all less safe.

I'm convinced that most people have an instinctive reaction to sexual violence which involves psychological trauma being triggered automatically.

There's no reason that this should be the case.

Yet, if a child is raped, that's psychologically devastating. The damage can last their whole lives. Explain that.

There are a lot of factors that are difficult to untangle. The ways that adults or peers react can certainly have an influence. I heard one father saying that a sexual abuser "stole his daughter's innocence", or something in a similar vei... (read more)

2
Kathy_Forth
6y
There are many statements people make to other people that are similarly discouraging / humiliating / upsetting. Verbal abuse is certainly bad for people, but people's reaction to sexual abuse is very different. Making statements like the one you described does not cause the sort of sudden, deep, intense, devastating psychological trauma you see with rape. You're comparing an apple to an orange here. Additionally, hearing one's dad say a rapist stole your innocence is bad, but it's not going to account for most of the upset. Not nearly. It seems that you are vastly underestimating the intensity of psychological trauma that comes with rape. Attributing a sexual trauma to a verbal statement is like blaming a snowflake for an iceberg. The iceberg was not caused by the snowflake. The snowflake is too small. It seems like you'd really like to understand trauma better. There are good authors on this topic. Instead of chatting with me, it would be far higher value for you to read this book: https://www.amazon.com/Body-Keeps-Score-Healing-Trauma/dp/0143127748/ref=sr_1_2?ie=UTF8&qid=1510541134&sr=8-2&keywords=the+body+remembers
17
Lila
6y

103 - 607 male rapists in EA

False precision much? This seems like an inappropriately specific number - it makes it sound like you have concrete evidence, but in reality you're just multiplying the number of men in EA by 6%. I hope that this number won't start getting spread around.

A more tractable approach to reducing the trauma from sexual violence might be to change perceptions of sexuality. Many people believe that it's important for women to be sexually "pure", which is one reason that female victims experience trauma.

Feminists, to their ... (read more)

15
xccf
6y

I also found this stat frustrating. The "A 1:6 ratio means 7 rapes per 6 women on average" stat frustrated me even more--it assumes that EA men are rapists at the base rate of the population at large (probably false), and that every time a rapist rapes someone, if the rapist is an EA, their victim must be an EA too.

I worry that hearing stats like this will cause women to avoid EA, which will then contribute to the imbalanced gender ratio that Kathy has identified as being part of the problem.

5
Kathy_Forth
6y
Good point about false precision. I hadn't thought of that. The article has been updated!

You wrote: "A more tractable approach to reducing the trauma from sexual violence might be to change perceptions of sexuality. Many people believe that it's important for women to be sexually "pure", which is one reason that female victims experience trauma." You didn't cite anything for this. I am concerned that some people may become confused and think they can convince women to tolerate atrocity. There are people out there who will twist anything into a justification to rape. Your paragraph there is the sort of information they might twist into rationalizations and cognitive distortions.

I've read a lot of research on psychological trauma. I'm convinced that most people have an instinctive reaction to sexual violence which involves psychological trauma being triggered automatically. From where I'm sitting, it looks like you just haven't done very much reading on this topic. For one thing, if sexual trauma is social programming, why do men respond in the same manner? Shouldn't they have a different reaction? If a woman rapes a man, he will be psychologically traumatized. I've heard of men who were baffled by their own sexual trauma. Men are harmed, too, and in a similar way to what women experience.

Children don't even have social programming about sex yet. A lot of children have never even heard of sex. Yet, if a child is raped, that's psychologically devastating. The effects can last their whole lives. Explain that.

Treating Candida via diet isn't accepted science: https://www.mayoclinic.org/healthy-lifestyle/consumer-health/expert-answers/candida-cleanse/faq-20058174

So it's not surprising a doctor wouldn't diagnose you.

4
kbog
6y
Eliezer's solution wasn't dietary treatment, it was to use imported Nizoral.

I consider GWAS applied, not basic, because it's not mechanistic. Most biologists I've spoken to have a fairly poor opinion of GWAS, as do I. Much of the biological research that gets funded is basic.

The p-value critique doesn't apply to many scientific fields. As far as I can tell, it mostly applies to social science and maybe epidemiological research. In basic biological research, a paper wouldn't be published in a good journal on the basis of a single p-value. In fact, many papers don't have any p-values. When p-values are presented, they're often so low (10^-15) that they're unnecessary confirmations of a clearly visible effect. (Silly, in my opinion.) Most papers rely on many experiments, which ideally provide multiple lines of evidence. It's also... (read more)

"The p-value critique doesn't apply to many scientific fields." I agree with this, or at least that it is vastly weaker when overwhelming data are available to pin down results.

"As far as I can tell, it mostly applies to social science and maybe epidemiological research. "

I disagree with this.

For instance, p-value issues have been catastrophic in quantitative genetics. The vast bulk of candidate gene research in genetics was non-replicable p-hacking of radically underpowered studies. E.g. schizophrenia candidate genes replicate at chanc... (read more)
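To make the claim about underpowered candidate-gene studies concrete, here is a small simulation sketch. The sample sizes, effect size, and study counts are illustrative assumptions, not figures from the genetics literature; the point is only that a p < 0.05 publication filter applied to very low-powered studies produces "hits" that barely exceed the false-positive rate and whose effect sizes are badly inflated - exactly the conditions under which later, well-powered replications come back looking like chance.

```python
# Hypothetical simulation: many small two-group studies of a tiny true effect,
# where only p < 0.05 results get "published". All numbers are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def run_study(true_effect, n_per_group):
    """One two-group comparison; returns (p-value, observed mean difference)."""
    controls = rng.normal(0.0, 1.0, n_per_group)
    carriers = rng.normal(true_effect, 1.0, n_per_group)
    _, p = stats.ttest_ind(carriers, controls)
    return p, carriers.mean() - controls.mean()

true_effect = 0.05            # tiny real effect (assumed)
n_per_group = 50              # small samples -> very low statistical power
results = [run_study(true_effect, n_per_group) for _ in range(5000)]

published = [(p, d) for p, d in results if p < 0.05]   # the publication filter
print(f"share of studies reaching p < 0.05 (~power): {len(published) / len(results):.1%}")
print(f"mean effect among 'published' studies: {np.mean([d for _, d in published]):.2f} "
      f"vs. true effect {true_effect}")
```

In this toy setup a "significant" finding is nearly as likely when the true effect is zero as when it is real, and the published effect estimates overstate the truth several-fold, which is the pattern described above for candidate genes that later replicate at chance.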

I'm a current PhD student in computational biology, so I can offer a perspective on academic research in biology. I agree that biologists aren't optimizing for benefiting humanity - instead, I think high-quality basic research gets the most respect and that academia can't be beat here in most cases.

EAs attempting to do biology outside academia have two options. They can try to circumvent basic research and simply "hack" biology by experimenting with various interventions. However, given the complexity of biological systems, this seems unlikely t... (read more)

Where do we draw the line? Are intrinsic abilities an acceptable topic of casual discussion? Do you think it would be humiliating for people who are being discussed as having less intrinsic ability?

3
Chris Leong
6y
I think it depends on the particular space. The rationality community should aim to have everything open to discussion, because that is its purpose. The EA community should minimise these discussions, as they are rarely necessary and quite often a distraction. In most groups I've been in, social norms can prevent the need for formal rules, though.
12
Lila
6y

I can see 1-3 being problems to some extent (and I don't think Kelly would disagree)... but "overrepresentation of vegetarians and vegans"?? You might as well complain about an overrepresentation of people who donate to charity.

So I think that if you identify with or against some group (e.g. 'anti-SJWs'), then anything that people say that pattern matches to something that this group would say triggers a reflexive negative reaction. This manifests in various ways: you're inclined to attribute way more to the person's statements than what they're actually saying or you set an overly demanding bar for them to "prove" that what they're saying is correct. And I think all of that is pretty bad for discourse.

This used to be me... It wasn't so much my beliefs that changed (... (read more)

But you don't want discrimination hypotheses to be discussed either? I guess that could be an acceptable compromise, to not debate the causes of disparities but at the same time focus on improving diversity in recruitment.

4
xccf
6y
Yeah. I'm also in favor of trying to grab low-hanging fruit from addressing discrimination, as long as we don't get overzealous. But in terms of trying to make our demographics completely representative... there are already a lot of groups trying and failing to do that, sometimes in a way that crashes & burns spectacularly, so I would rather hang back and wait for a model that seems workable/reliable before aiming that high.

I think there's a bit of an empathy gap in this community. When people are angry for what seems to be no reason, a good first step is to ask whether you've done something that made them feel unsafe/humiliated/demeaned/etc, even if that wasn't your intention. It doesn't take a lot of imagination to see how unsolicited exploration of "other hypotheses" (cough cough) for racial and gender disparities could be very distressing for the people who are being discussed as if they're not there.

5
Chris Leong
6y
I actually think we should discuss other hypotheses. Firstly, "other hypotheses" includes all kinds of inoffensive explanations, like the primary cause of a difference being:
* Broader society has instilled certain social norms in people, as opposed to it being anything specific about this group
* Founder effects - a guy gets a few of his mates to start the group, they rope in their mates, etc.
* The message happens to resonate among groups of people that are currently disproportionately one gender (i.e. programmers)
But going further than this, I don't think we should limit discussion of different intrinsic preferences either, especially if someone makes an argument that depends on this being false.
2
xccf
6y
Oh, I totally agree, and I don't think we should explore them. [I edited my comment in an attempt to clarify this.]

Politics is rarely used as an example of a positive environment for women.

It's not just the actual numbers that are concerning (though I disagree with you that a 70% skew can be brushed off). It's the exclusionary behavior within EA.

10
Lila
6y

Thanks Kelly. I agree that this is a problem in EA in ways that people don't realize. In retrospect, I feel stupid for not realizing how casual discussion of IQ and eugenics would be hurtful. Same thing with applying that classic EA skepticism to people's lived experiences.

Culture isn't the main reason I left EA, but it's #3. And I think it contributes to the top two reasons I felt alienated: the mockery of moral views that deviate from strict utilitarianism, and what I believed were naive over-confident tactics.

"Same thing with applying that classic EA skepticism to people's lived experiences"

I suppose this comes down to why the person is sharing their lived experience. If someone is just telling you their story, you want to try and keep an open mind. On the other hand, if someone is sharing their lived experiences in order to make a political argument, a certain amount of criticism, whilst not being unnecessarily insensitive, is fair game.

Humans are generally not evil, just lazy

?

Human history has many examples of systematic unnecessary sadism, such as torture for religious reasons. Modern Western moral values are an anomaly.

4
Ben_West
7y
Thanks for the response! But is that true? The examples I can think of seem better explained by a desire for power etc. than suffering as an end goal in itself.

You're free to offer your own thoughts on the matter, but you seemed to be trying to engage me in a personal debate, which I have no interest in doing. This isn't a clickbait title, I'm not concern trolling, I really have left the EA community. I don't know of any other people who have changed their mind about EA like this, so I thought my story might be of some interest to people. And hey, maybe a few of y'all were wondering where I went.

I don't expect you to convince me to stay.

Maybe I should have said "I'd prefer if you didn't try to convince me to stay". Moral philosophy isn't a huge interest of mine anymore, and I don't really feel like justifying myself on this. I am giving an account of something that happened to me. Not making an argument for what you should believe. I was very careful to say "in my view" for non-trivial claims. I explicitly said "Prioritizing animals (particularly invertebrates) relied on total-view utilitarianism (for me)." So I'm not interested in hearing why prioritizing animals does not necessarily rely on total view utilitarianism.

9
kbog
7y
I'm clearing up the philosophical issues here. It's fine if you don't agree, but I want others to have a better view of the issue. After all, you started your post by saying that EAs are overconfident and think their views are self evident. Well, what I'm doing here is explaining the reasons I have for believing these things, to combat such perceptions and improve people's understanding of the issues. Because other people are going to see this conversation, and they're going to make some judgement about EAs like me because of it. But if you explicitly didn't want people to respond to your points... heck, I dunno what you were looking for. You shouldn't expect to not have people respond with their points of view, especially when you disagree on a public forum.

To the extent that we decide to devote resources to helping other people, it makes sense that we should do this to the maximal extent possible

I don't think I do anything in my life to the maximal extent possible

0
adamaero
6y
So you don't want to raise your kids so that they can achieve their highest potential? Or if you're training for a 5K/half-marathon, you don't want to make the best use of your training time? You don't want to hit your maximal PR? I digress. I do not believe in all the ideas, especially about MIRI (AI risk). Although, in my mind, EA is just getting the biggest bang for your buck. Donating is huge! And organizations, such as GiveWell, are just tools. Sure, I could scour GuideStar and evaluate and compare 990 forms--but why go through all the hassle? Anyway, honestly it doesn't really matter that people call themselves "effective altruists." And the philosophical underpinnings--which are built to be utilitarian-independent--seem after the fact. "Effective Altruism" is just a label, really; so we can be on the same general page: "Effective Altruism has Five Serious Flaws - Avoid It - Be a DIY Philanthropist Instead". There's some statistic out there that says two-thirds or so of donors do no research at all into the organizations they give to. I hope that some people just wouldn't give at all ~ non-maleficence.

That's a good point, though my main reason for being wary of EV is related to rejecting utilitarianism. I don't think that quantitative, systematic ways of thinking are necessarily well-suited to thinking about morality, any more than they'd be suited to thinking about aesthetics. Even in biology (my field), a priori first-principles approaches can be misleading. Biology is too squishy and context-dependent. And moral psychology is probably even squishier.

EV is one tool in our moral toolkit. I find it most insightful when comparing fairly similar actions,... (read more)

-2
Lila
8y

"But I think supporting the continuation of humanity and the socialization of the next generation can be considered a pretty basic part of human life."

Maybe it's a good thing at the margins, but we have more than enough people breeding at this point. There's nothing particularly noble about it, any more than it's noble for an EA to become a sanitation worker. Sure, society would fall apart without sanitation workers, but still...

You're entitled to do what you want with your life, but there's no reason to be smug about it.

[This comment is no longer endorsed by its author]
3
Bernadette_Young
8y
The post doesn't claim that having children makes you "good" or "particularly noble", and there's no moral connotation inherent in something being "a pretty basic part of human life". You're entitled to think what you like, but there's no reason to be nasty about it.
1
Bernadette_Young
8y
What an incredibly unfriendly thing to say 12 months later to somebody you've never met in person. Given the context above, I'm not sure if you are writing it to say I did not competently parent our child at EAG? The EAG I attended happened 8 months after I wrote that comment. In 8 months young children develop and their needs and behaviour change. What a shock. Our daughter (14 months old at the time of EAG) was present in the lecture theatre for parts of 2 talks. She did not cry during any of the lectures. She babbled loudly and I removed her when that happened. For the rest of the conference one of us missed the talks in order to keep her in the hall. I paid the full cost of attending the conference. Both her parents were there, by the way - have you made sure to pass your criticism on to her father?

Do you have plans to publish summaries of the research you do, e.g. on Wikipedia?

Wikipedia's policies forbid original research. Publishing the research on the organization's website and then citing it on Wikipedia would also be discouraged, because of exclusive reliance on primary sources. (And the close connection to the subject would raise eyebrows.)

I think this is worth mentioning because I've seen some embarrassing violations of Wikipedia policy on EA-related articles recently.

1
John_Maxwell
8y
If someone at CEA reads a bunch of studies on a particular topic, and writes several well-cited paragraphs that summarize the literature, this would be appropriate for Wikipedia, no? (I agree other ways of interpreting "research" might not be.)

It feels like telling two rival universities to cut their football programs and donate the savings to AMF. "Everyone wins!"

Anyway, two billion dollars isn't that much in the scheme of things. I remember reading somewhere that Americans spend more money on Halloween candy than politics.

My point was that opiates are extremely pleasurable but I wouldn't want to experience them all the time, even with no consequences. Just sometimes.

"Reducing "existential risk" will of course increase wild animal suffering as well as factory farming, and future equivalents."

Yes, this isn't a novel claim. This is why people who care a lot about wild animal suffering are less likely to work on reducing x risk.

I've had vicodin and china white and sometimes indulge in an oxy. They're quite good, but it hasn't really changed my views on morality. Despite my opiate experience, I'm much less utilitarian than the typical EA.

0
kbog
8y
Interesting. Well, if opiates simply aren't that pleasurable, then it doesn't say anything about utilitarianism either way. If people experienced things which were really pleasurable but still felt like it would be bad to keep experiencing it, that would be a strike against utilitarianism. If people experienced total pleasure and preferred sticking with it after total reflection and introspection, then that would be a point in favor of utilitarianism.

I agree that points 1 and 2 are unrelated, but I think most people outside EA would agree that a universe of happy bricks is bad. (As I argued in a previous post, it's pretty indistinguishable from a universe of paperclips.) This is one problem that I (and possibly others) have with EA.

1
kokotajlod
8y
I second this! I'm one of the many people who think that maximizing happiness would be terrible. (I mean, there would be worse things you could do, but compared to what a normal, decent person would do, it's terrible.) The reason is simple: when you maximize something, by definition that means being willing to sacrifice everything else for the sake of that thing. Depending on the situation you are in, you might not need to sacrifice anything else; in fact, depending on the situation, maximizing that one thing might lead to lots of other things as a bonus--but in principle, if you are maximizing something, then you are willing to sacrifice everything else for the sake of it. Justice. Beauty. Fairness. Equality. Friendship. Art. Wisdom. Knowledge. Adventure. The list goes on and on. If maximizing happiness required sacrificing all of those things, such that the world contained none of them, would you still think it was the right thing to do? I hope not. (Moreover, based on the laws of physics as we currently understand them, maximizing happiness WILL require us to sacrifice all of the things mentioned above, except possibly Wisdom and Knowledge, and even they will be concentrated in one being or kind of being.) This is a problem with utilitarianism, not EA, but EA is currently dominated by utilitarians.

I'd be happy if the EA movement became interested in this, just as I'd be happy if the Democratic Party did. But my point was, the label EA means nothing to me. I follow my own views, and it doesn't matter to me what this community thinks of it. Just as you're free to follow your own views, regardless of EA.

Yeah it's confusing because the general description is very vague: do the most good in the world. EAs are often reluctant to be more specific than that. But in practice EAs tend to make arguments from a utilitarian perspective, and the cause areas have been well-defined for a long time: GiveWell recommended charities (typically global health), existential risk (particularly AI), factory farming, and self-improvement (e.g. CFAR). There's nothing terribly wrong with these causes, but I've become interested in violence and poor governance in the developing world. EA just doesn't have much to offer there.

0
cdc482
8y
EA is an evolving movement, but the reasons for prioritizing violence and poor governance in the developing world seem weak. It's certainly altruistic, and the amount of suffering it addresses is enormous. However, the world is in such a sad state of affairs that I don't think such a complex and unexplored cause will compete with charities addressing basic needs like alleviating poverty, or even OpenPhil's current agenda of prison reform and factory-farm suffering. That said, you could start exploring it. Isn't that how the other causes became mainstream within the EA movement?