Lumpyproletariat

Lumpy is an undergraduate at some state college somewhere in the States. He isn't an interesting person and interesting things seldom happen to him.

Among his skills are such diverse elements as linguistic tomfoolery, procrastination, being terrible with computers yet running Linux anyway, a genial temperament and magnanimous spirit, a fairly swell necktie if he does say so himself, mounting dread, and quiet desperation.

Plays as a wizard in any tabletop or video game where that's an option, regardless of whether it's a [i]strong[/i] option. Has never failed a Hogwarts sorting test, of any sort or on any platform. (If you were about to say how one can't fail a sorting test . . . one surmises that you didn't make Ravenclaw.) Read The Fellowship, The Two Towers, and The Return of the King over the course of three sleepless days at age seven; couldn't keep down solid food afterward, because he'd forgotten to eat. Was really into the MBTI as a tweenager; thought it ridiculous how people said that no personality type was "better" than the others when ENTJ is clearly the most powerful. (Scored INFP himself, but hey, one out of four isn't so bad. (However, found a better fit in INTP.)) Out of the Disney princesses Lumpy is Mulan--that is, if one is willing to trust BuzzFeed. Which, alas, one is not.

No, but seriously.

Mulan?? 0_o

If, despite this exhaustive list of traits and deeds, your burning question is left unanswered, send a missive in private. Should your quest be noble and intentions pure, it is said that Lumpyproletariat might respond in kind.

Comments

Is EA just about population growth?

"Regarding your aside, I think that illustrates an interesting potential solution to the dilemma (?) The purpose is not to save lives (because in your case, the world where 100% of people die is less or equally bad than 50% of people dying). This is an interesting case, and perhaps there's a way to rephrase the original claim to accommodate it, though I'm not certain how."

I must have inadequately written my parenthetical aside; perhaps I inadequately wrote everything. 

The purpose is entirely to save lives. We have a world with seven billion people. If all of them died, the amount of disutility in my view would be X times seven billion, where X is the disutility from someone dying. If the world instead had fourteen billion people and seven billion of them died, the disutility would still be X times seven billion. The human race existing doesn't matter to me, only the humans. If no one had any kids and this generation was the last one, I don't think that would be a bad thing.
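(For concreteness, here is that arithmetic as a toy Python sketch; the value of X and the populations are placeholders of my own, not anything measured.)

[code]
# Toy sketch: disutility depends only on the number of deaths,
# not on the size of the population they came from.
X = 1.0  # placeholder for the disutility of one person dying

def disutility(deaths: int) -> float:
    return X * deaths

# Everyone in a seven-billion world dies:
print(disutility(7_000_000_000))
# Seven billion of a fourteen-billion world die -- same disutility:
print(disutility(7_000_000_000))
[/code]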

This isn't something which all EAs think (some of them value "humanity" as well as the humans), though it does seem to be a view overrepresented among the people who responded to this thread.

"The way I see it, the people of the future 'existing' is a knob that we have the power to control (in a broad sense). It's not something that would happen 'either way.'"

I know a man who plans to have a child the traditional way. We've spoken about the topic and I've told him my views; there's not terribly much more I could do. I have very little power over whether or not that child will exist--none whatsoever, in any practical way.

That child doesn't exist yet--there's some chance they never will. I want that child to have a happy life, and to not die unless they want to. When that entity becomes existent, the odds are very good I'll be personally involved in said entity's happiness; I'll be a friend of the family. Certainly, if, twelve years in, the child fell into a river and started to drown, I'd muddy my jacket to save them.

But I wouldn't lift a finger to create them. Do I explain myself? 

Something analogous could be said about all the humans who do not exist, but will. We have control over the "existence knob" in such a broad sense that there's little point bringing it up at all. So, living in a world where people exist, and will continue to do so, it seems like the most important thing is to keep them alive.

Valuing the people who exist is a very different thing from valuing people existing. EA is not just about population growth--it isn't about population growth at all.

Is EA just about population growth?

There must be something I don't understand; I don't see a puzzle here at all. You spent a lot of time writing this up, and presumably a lot of time thinking about it, so I'm going to spend at least a small amount of time trying to find where our worldviews glide past each other.

Here's my take. It's a fairly simple take, as I'm a fairly simple person.

If someone exists, one ought to be nice to them. Certainly, one ought not to let them die--to do so would be unkind, to say the least. People who exist should have good lives--if someone doesn't have a good life or will lose their good life, this is a problem one ought to fix. So far, nothing but bog-standard moral fare. 

If someone doesn't exist, they don't exist--it's impossible to be kind or cruel to someone who doesn't exist. I don't think many would disagree on that point either. 

Now here, perhaps, is where we lose each other: if someone is going to exist, and one is aware of this fact, one should probably take preemptive steps to ensure that future person will have a good life--a life happy, fulfilling, and long. This isn't because hypothetical people have moral value; it's because we are aware in advance that the problem won't always be a hypothetical one. We can realistically foresee that unless we course-correct on this destroying-the-biosphere project we've undertaken, people will come into existence and lead terrible, cruelly short lives.

I (and many others, I gather) am not doing this so that more people will be born--we're doing this so that people who will be born either way live happily.

(Parenthetical aside: some people place value on the human species continuing to exist--I don't, personally; if everyone alive died that would be awful, but I don't think it'd be more awful than if there had been fourteen billion minds before seven billion died. That said, if we care at all about aesthetics I can see the aesthetic argument in favor of human survival, in that all aesthetics would die with us.)

This is a very different problem from educating women and predictably causing fewer people to exist in the first place. My value isn't people existing, my value is good long lives for those who do (or will).

Is EA just about population growth?
  • Suppose, towards a contradiction, that the goal of life is to save lives.
  • We know educating women more is good and would be done in an ideal world.
  • Increasing women's education leads to fewer lives because of declining fertility.
  • Therefore, the goal of life must not be to save lives.


What if one's goal is to save lives which already exist, contingent on their already existing? 

Pure utilitarianism doesn't necessarily lead to screwy answers when thinking about the future--for instance, suppose that matter is convertible to computronium, and computronium to hedonium, so that there is a fixed maximum amount of joy in the universe; in that case, creating more people just trades against the happiness of those who already exist, who could have used all that matter for themselves but are now morally obligated to share.
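(A toy model of that fixed-sum picture, in Python; the joy budget and the even split are illustrative assumptions of mine, not anything from the thread.)

[code]
# Toy model: if matter -> computronium -> hedonium, total achievable joy
# is capped by the available matter; adding people only redistributes it.
TOTAL_JOY = 1.0  # fixed by how much matter can become hedonium

def joy_per_person(population: int) -> float:
    """Each person's share of the fixed joy budget (assuming an even split)."""
    return TOTAL_JOY / population

for population in (7_000_000_000, 14_000_000_000):
    print(f"{population:,} people -> {joy_per_person(population):.2e} joy each")
[/code]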

But I tend to be of the view that potential people don't exist and thus don't have moral significance. If it's foreseeable that someone in particular will exist (and at that point have moral significance) we ought to make sure things go well for them. But I don't feel any moral obligation to bring them into existence.

Lumpyproletariat's Shortform

This is crossposted from the December career advice thread:

I notice that the thread has gotten long and a lot of people's questions are being buried (one thing I intensely dislike about upvote-style forums is that it isn't trivial to scroll down to the end of the thread and see what's new ("Oh, but you can sort by new if you want to," one replies, and, sure, I guess, but unless everyone else with good opinions does too, that doesn't exactly solve the problem, now does it?)). The buried questions don't seem less important than the ones posted first, and I wish I were competent to give expert advice apropos of them, or had a way to direct the community's gaze to them.

I have a question of my own--regarding changing my undergraduate major--but I'll wait for the January thread to ask it.

Careers Questions Open Thread

I notice that the thread has gotten long and a lot of people's questions are being buried (one thing I intensely dislike about upvote-style forums is that it isn't trivial to scroll down to the end of the thread and see what's new ("Oh, but you can sort by new if you want to," one replies, and, sure, I guess, but unless everyone else with good opinions does too, that doesn't exactly solve the problem, now does it?)). The buried questions don't seem less important than the ones posted first, and I wish I were competent to give expert advice apropos of them, or had a way to direct the community's gaze to them.

I have a question of my own--regarding changing my undergraduate major--but I'll wait for the January thread to ask it.

A Case Study in Newtonian Ethics--Kindly Advise

You are an amazing alien, a soul akin enough to mine that I feel slightly less an alien for talking to you. I really don't know why people don't live stranger lives, when ordinary lives chasing money and status are so terribly depressing. It is nice to meet a fellow denizen of planet Camazotz dancing to the beat of a drum other than Its.

(Does one still waive the apostrophe when they're referring to a possession of the proper noun It?)

Clarification clarified. If someone invaded my personal space and dark triaded at me, I imagine I would use my bigness and noise to make them leave. I'm sympathetic to people less big. 

I feel fairly negative towards upvotes myself. They make it easy to pile on someone without actually engaging with them.

A Case Study in Newtonian Ethics--Kindly Advise

I accidentally posted this comment four times, due largely to technical incompetence. Which is fine, I suppose; it adds emphasis!

A Case Study in Newtonian Ethics--Kindly Advise

This comment is a test, I'm troubleshooting a thing.

A Case Study in Newtonian Ethics--Kindly Advise

Well. I'm floored. People keep upvoting this and saying such wonderfully kind things in the comments . . . Every time I got the notification that there was a new comment under this post, I internally flinched and cringed. I'd just written at length about my internal subjective experience, and I regretted writing it from before I clicked submit. It took a lot of evidence piling up to convince the socially cautious part of my brain that it was wrong.

I'm going to update hard towards writing pieces like this one, and writing more frequently. It seems like other people ought to as well; it seems like something people want to read. I imagine most of us don't have any new breakthroughs to report in the field of effective altruism. But we probably all have interesting days where we face dilemmas or win victories which would make utterly no sense to most anyone. And I guess it makes sense you'd want to hear mine, because I'd like to hear yours.

A Case Study in Newtonian Ethics--Kindly Advise

I put on my goggles to attempt literary analysis, and then I took them back off. Anyone else want to give it a go?
