All of ClaireZabel's Comments + Replies

Concerns with ACE's Recent Behavior

[As is always the default, but perhaps worth repeating in sensitive situations, my views are my own and by default I'm not speaking on behalf of Open Phil. I don't do professional grantmaking in this area, haven't been following it closely recently, and others at Open Phil might have different opinions.]

I'm disappointed by ACE's comment (I thought Jakub's comment seemed very polite and even-handed, and not hostile, given the context, nor do I agree with characterizing what seems to me to be sincere concern in the OP just a…

What does failure look like?

I like this question :) 

One thing I've found pretty helpful in the context of my failures is to try to separate out (a) my intuitive emotional disappointment, regret, feelings of mourning, etc. (b) the question of what lessons, if any, I can take from my failure, now that I've seen the failure take place (c) the question of whether, ex ante, I should have known the endeavor was doomed, and perhaps something more meta about my decision-making procedure was off and ought to be corrected. 

I think all these things are valid and good to process, but I…

I scraped all public "Effective Altruists" Goodreads reading lists

I’ll consider it a big success of this project if some people will have read Julia Galef's The Scout Mindset next time I check.

It's not out yet, so I expect you will get your wish if you check a bit after it's released :) 

Early Alpha Version of the Probably Good Website

The website isn't working for me, screenshot below:

omernevo (7mo): Thanks for letting us know! If that's alright, I'll send you an email with some questions to figure out what the problem is...
Resources On Mental Health And Finding A Therapist

Just a personal note, in case it's helpful for others: in the past, I thought that medications for mental health issues were likely to be pretty bad, in terms of side effects, and generally associated them with people in situations of pretty extreme suffering. And so I thought it would only be worth it or appropriate to seek psychiatric help if I were really struggling, e.g. on the brink of a breakdown or full burn-out. So I avoided seeking help, even though I did have some issues that were bothering me. In my experience, a lot of other people…

Julia_Wise (7mo): Seconding this. My partner was spooked by seeing a family member on heavy-duty medications for a more serious mental health situation, so our vague impression was that antidepressants might really change who I was. I did need to try a couple meds and try different times of day, etc. to deal with side effects, but at this point I have a med and dose that makes my life better and has very minor side effects.

As a second data point, my thought process was pretty similar to Claire's - I didn't really consider medication until reading Rob's post because I didn't think I was capital D depressed, and I'm really glad now that I changed my mind about trying it for mild depression. I personally haven't had any negative side effects from Wellbutrin, although some of my friends have. 

Resources On Mental Health And Finding A Therapist

Scott's new practice, Lorien Psychiatry, also has some resources that I (at least) have found helpful. 

Julia_Wise (7mo): I also like the writeups there. I was hoping I could refer community members to the actual practice, but Scott writes in a recent post: "Stop trying to sign up for my psychiatry practice. It says in three different places there that it's only currently open to patients who are transferring from my previous practice."
Some thoughts on EA outreach to high schoolers

Also, I believe it's much easier to become a teacher for high schoolers at top high schools than a teacher for students at top universities, because most teachers at top unis are professors, or at least lecturers with PhDs, while even at fancy high schools, most teachers don't have PhDs, and I think it's generally just much less selective. So EAs might have an easier time finding positions teaching high schoolers than uni students of a given eliteness level. (Of course, there are other ways to engage people, like student groups, for which different dynamics are at play.) 

jackmalde (7mo): Very true, also teaching at top private schools doesn't even require you to have gone through a teaching qualification (at least in the UK). They're happy to hire anyone with a degree from a respected uni who has some aptitude for teaching. I have a feeling this might be quite an underrated path.
Asking for advice

Huh, this is great to know. Personally, I'm the opposite: I find it annoying when people ask to meet and don't include a calendly link or similar. I'm slightly annoyed by the time it takes to write a reply email and generate a calendar invite, and by the often greater overall back-and-forth and attention drain from having the issue linger.

Curious how anti-Calendly people feel about the "include a calendly link + ask people to send timeslots if they prefer" strategy. 

My feelings are both that it's a great app and yet sometimes I'm irritated when the other person sends me theirs.

If I introspect on the times when I feel the irritation, I notice I feel like they are shirking some work. Previously we were working together to have a meeting, but now I'm doing the work to have a meeting with the other person, where it's my job and not theirs to make it happen.

I think I expect some of the following asymmetries in responsibility to happen with a much higher frequency than with old-fashioned coordination:

  • I will book a time,
…
Khorton (1y): Don't feel great about that, for the same reasons as before - it prioritizes your comfort and schedule over mine, which is kind of rude if you're asking me for a favour. But like other people, I don't necessarily endorse these feelings, and they're not super strong. It's fine for people to keep sending me calendly links.
avacyn's Shortform

Some people are making predictions about this topic here.

On that link, someone comments:

Berkeley's incumbent mayor got the endorsement of Bernie Sanders in 2016, and Gavin Newsom for 2020. Berkeley also has a strong record of reelecting mayors. So I think his base rate for reelection should be above 80%, barring a Jerry Brown-esque run from a much larger state politician.
Is the Buy-One-Give-One model an effective form of altruism?

I just wanted to say I thought this was overall an impressively thorough and thoughtful comment. Thank you for making it!

Information security careers for GCR reduction

I’ve created a survey about barriers to entering information security careers for GCR reduction, with a focus on whether funding might be able to help make entering the space easier. If you’re considering this career path or know people that are, and especially if you foresee money being an obstacle, I’d appreciate you taking the survey/forwarding it to relevant people. 

The survey is here: Open Philanthropy a…

Some personal thoughts on EA and systemic change

[meta] Carl, I think you should consider going through other long, highly upvoted comments you've written and making them top-level posts. I'd be happy to look over options with you if that'd be helpful.

What book(s) would you want a gifted teenager to come across?

Cool project. I went to maybe-similar type of school and I think if I had encountered certain books earlier, it would have had a really good effect on me. The book categories I think I would most have benefitted from when I was that age:

  • Books about how the world very broadly works. A lot of history felt very detail-oriented and archival, but did less to give me a broad sense of how things had changed over time, what kinds of changes are possible, and what drives them. Top rec in that category: Global Economic History: A Very Short Introduction. Other recs
…
Nathan Young (2y): Worth noting that Thinking, Fast and Slow has some issues in the replication crisis (mainly around priming), e.g. this article here.
Milan_Griffes (2y): +1 to Sapiens, parts of Moral Mazes, Deep Work, and Seeing Like a State.
EmeryCooper (2y): I just want to flag up that The Better Angels of Our Nature, whilst a great book, contains quite a few graphic descriptions of torture, which even as an adult I found somewhat disturbing. I don't necessarily think teenage-me would have been affected any worse, but you might still not want to put it in a school library.
tessa (2y): I'd second Thinking, Fast and Slow. I took a general primer on human biases ("Psychology of Critical Thinking") at a local university in high school, which overall had an enormously beneficial effect on my thinking. Thinking, Fast and Slow is the most comprehensive popular book I've read which covers that territory, and wins points for describing in detail the experiments that Kahneman and Tversky used to reach their various conclusions. My understanding is that most of Kahneman and Tversky's results have held up, but not everything the book discusses has replicated well; many of the results it describes on priming are questionable. Might be worth complementing with some of Ben Goldacre's books (e.g. Bad Science or I Think You'll Find It's A Bit More Complicated Than That) for very object-level critiques of research (and especially research reporting in the press and the UK government), or Atul Gawande's The Checklist Manifesto for descriptions of how to systematically avoid human errors when doing complicated tasks.
EA is vetting-constrained

I would guess the bottleneck is elsewhere too; I think it's something like managerial capacity/trust/mentorship/vetting of grantmakers. I recently started thinking about this a bit, but am still in the very early stages.

EA is vetting-constrained

(Just saw this via Rob's post on Facebook) :)

Thanks for writing this up, I think you make some useful points here.

Based on my experience doing some EA grantmaking at Open Phil, my impression is that the bottleneck isn't in vetting precisely, though that's somewhat directionally correct. It's more like there's a distribution of projects, and we've picked some of the low-hanging fruit, and on the current margin, grantmaking in this space requires more effort per grant to feel comfortable with, either to vet (e.g. because the ca…

Importantly, I suspect it'd be bad for the world if we lowered our bar, though unfortunately I don't think I want to or easily can articulate why I think that now. 

Do you think it is bad that other pools of EA capital exist, with perhaps lower thresholds, who presumably sometimes fund things that OP has deliberately passed on?

Overall, I think generating more experienced grantmakers/mentors for new projects is a priority for the movement.

Do you have any thoughts on how to best do this, and on who is in a position to do this? For example, my own weakly held guess is that I could have substantially more impact in a "grantmaker/mentor for new projects" role than in my current role, but I have a poor sense of how I could go about getting more information on whether that guess is correct; and if it was correct, I wouldn't know if this means I should actively try to get…

In defence of epistemic modesty

I'm not sure where I picked it up, though I'm pretty sure it was somewhere in the rationalist community.

E.g. from What epistemic hygiene norms should there be?:

Explicitly separate “individual impressions” (impressions based only on evidence you've verified yourself) from “beliefs” (which include evidence from others’ impressions)

In defence of epistemic modesty

Thanks so much for the clear and eloquent post. I think a lot of the issues related to lack of expertise and expert bias are stronger than you seem to, and I think it's both rare and not inordinately difficult to adjust for common biases, such that in certain cases a less-informed individual can often beat the expert consensus (because few enough of the experts are doing this, for now). But it was useful to read this detailed and compelling explanation of your view.

The following point seems essential, and I think underemphasized:

Modesty can lead to d…
Michael_PJ (4y): Concur that the distinction between "credence by lights" and "credence all things considered" seems very helpful, possibly deserving of its own post.
Gregory_Lewis (4y): Thanks for your generous reply, Claire. I agree the 'double counting' issue remains challenging, although my thought was that, since most people, at least in the wider world, are currently pretty immodest, the downsides are not too large in what I take to be common applications where you are trying to weigh up large groups of people/experts. I agree there's a risk of degrading norms if people mistakenly switch to offering 'outside view' credences publicly. I regret I hadn't seen the 'impressions' versus 'beliefs' distinction being used before. 'Impression' works very well for 'credence by my lights' (I had toyed with using the term 'image'), but I'm not sure 'belief' translates quite so well for those who haven't seen the way the term is used in the rationalist community. I guess this might just be hard, as there doesn't seem to be a good word (or two) I can find which captures modesty ("being modest, my credence is X", "modestly, I think it's Y", maybe?).
RobBensinger (4y): The dichotomy I see the most at MIRI is 'one's inside-view model' v. 'one's belief', where the latter tries to take into account things like model uncertainty, outside-view debiasing for addressing things like the planning fallacy, and deference to epistemic peers. Nate draws this distinction a lot.
rohinmshah (4y): As one data point, I did not have this association with "impressions" vs. "beliefs", even though I do in fact distinguish between these two kinds of credences and often report both (usually with a long clunky explanation, since I don't know of good terminology for it).
Robert_Wiblin (4y): I just thought I'd note that this appears similar to the 'herding' phenomenon in political polling, which reduces aggregate accuracy.
Stefan_Schubert (4y): I agree that this distinction is important and should be used more frequently. I also think good terminology is very important; clunky terms are unlikely to be used. Something along the lines of "impressions" or "seemings" may be good for "credence by my lights" (cf. optical illusions, where the way certain matters of fact seem or appear to you differs from your beliefs about them). Another possibility is "private signal". I don't think inside vs. outside view is a good terminology. E.g., I may have a credence by my lights about X partly because I believe that X falls in a certain reference class. Such reasoning is normally called "outside-view" reasoning, yet it doesn't involve deference to epistemic peers.
Why & How to Make Progress on Diversity & Inclusion in EA

Flaws aren't the only things I want to discover when I scrutinize a paper. I also want to discover truths, if they exist, among other things.

Why & How to Make Progress on Diversity & Inclusion in EA

[random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline.

(see e.g. this and this).

Why & How to Make Progress on Diversity & Inclusion in EA

The incentive gradient I was referring to goes from trying to actually figure out the truth to using arguments as weapons to win against opponents. You can totally use proxies for the truth if you have to (like an article being written by someone you've audited in the past, or someone who's made sound predictions in the past). You can totally decide not to engage with an issue because it's not worth the time.

But if you just shrug your shoulders and cite average social science reporting on a forum you care about, you are not justified in expecting good outc…

xccf (4y): How does "this should be obvious" compare to average social science reporting on the epistemic hygiene scale? Like, this is an empirical claim we could test: give people social psych papers that have known flaws, and see whether curiosity or disagreement with the paper's conclusion predicts flaw discovery better. I don't think the result of such an experiment is obvious.
ClaireZabel (4y): [random] I find the survey numbers interesting, insofar as they suggest that EA is more left-leaning than almost any profession or discipline (see e.g. this and this).
Why & How to Make Progress on Diversity & Inclusion in EA

To be charitable to Kelly, in most parts of the internet, a link to popular reporting on social science research is a high quality argument.

I dearly hope we never become one of those parts of the internet.

And I think we should fight against every slip down that terrible incentive gradient, for example by pointing out that the bottom of that gradient is a really terribly unproductive place, and by pushing back against steps down that doomy path.

xccf (4y): Me too. However, I'm not entirely clear what incentive gradient you are referring to. But I do see an incentive gradient which goes like this: most people responding to threads like this do so in their spare time and run on intrinsic motivation. For whatever reason, on average they find it more intrinsically motivating to look for holes in social psych research if it supports a liberal conclusion. There's a small population motivated the opposite way, but since people find it less intrinsically motivating to hang out in groups where their viewpoint is a minority, those people gradually drift off. The end result is a forum where papers that point to liberal conclusions get torn apart, and papers that point the other way get a pass. As far as I can tell, essentially all online discussions of politicized topics fall prey to a failure mode akin to this, so it's very much something to be aware of.

Full disclosure: I'm not much of a paper scrutinizer. And the way I've been behaving in this thread is the same way Kelly has been. For example, I linked to Bryan Caplan's blog post covering a paper on ideological imbalance in social psychology. The original paper is 53 pages long. Did I read over the entire thing, carefully checking for flaws in the methodology? No, I didn't. I'm not even sure it would be useful for me to do that; the best scrutinizer is someone who feels motivated to disprove a paper's conclusion, and this ideological imbalance paper very much flatters my preconceptions. But the point is that Kelly got called out and I didn't.

I don't know what a good solution to this problem looks like. (Maybe LW 2.0 will find one.) But an obvious solution is to extend special charity to anyone who's an ideological minority, to try and forestall evaporative cooling.
Why & How to Make Progress on Diversity & Inclusion in EA

Kelly, I don't think the study you cite is good or compelling evidence of the conclusion you're stating. See Scott's comments on it for the reasons why.

(edited because the original link didn't work)

Kelly_Witwicki (4y): Thanks, clarified.
Effective Altruism Grants project update

Ah, k, thanks for explaining, I misinterpreted what you wrote. I agree 25 hours is in the right ballpark for that sum (though it varies a lot).

Personally, I downvoted because I guessed that the post was likely to be of interest to sufficiently few people that it felt somewhat spammy. If I imagine everyone posting with that level of selectivity I would guess the Forum would become a worse place, so it's the type of behavior I think should probably be discouraged.

I'm not very confident about that, though.

MichaelPlant (4y): As a reply, I feel like uncharitably downvoting things because you don't think it matches the interests of large numbers of forum users is the sort of behaviour that should probably be discouraged. Unless you think the poster is being insincere, I think it's unhelpful and unfriendly to downvote what they say and thus discourage them. It contributes to the feeling that EA is an unwelcoming place. If we only talk about causes people already find plausible then we will never improve the causes we work on, and it will drive away people interested in other causes. I agree that posting about conferences is maybe not very exciting, but it's not obvious this should be posted somewhere else. Facebook would be the alternative, but that's not ideal for reasons not worth discussing here. I upvoted the original post for balance.
Effective Altruism Grants project update

An Open Phil staff member made a rough guess that it takes them 13-75 hours per grant distributed. Their average grant size is quite a bit larger, so it seems reasonable to assume it would take them about 25 hours to distribute a pot the size of EA Grants.

My experience making grants at Open Phil suggests it would take us substantially more than 25 hours to evaluate the number of grant applications you received, decide which ones to fund, and disburse the money (counting grant investigator, logistics, and communications staff time). I haven't found that... (read more)

vipulnaik (4y): Also related: […]
Roxanne_Heston (4y): Right, neither do I. My 25-hour estimate was how long it would take you to make one grant of ~£500,000, not a bunch of grants adding up to that amount. I assumed that if Open Phil had been distributing these funds it would have done so by giving greater amounts to far fewer recipients.
EA Survey 2017 Series: Community Demographics & Beliefs

I think it would be useful to frontload info like 1) the number of people who took this vs. previous surveys, 2) links to previous surveys.

I think I would also mildly-to-strongly prefer having all of the survey results in one blog post (to make them easier to find), and strongly prefer having all the results for the demographic info in the demographics post. But it seems like this post doesn't include information that was requested on the survey and that seems interesting, like race/ethnicity and political views.

The proportion of atheist, agnostic or no…
Tee (4y): Thanks for bringing these to our attention, Claire. I like both of these ideas. This post will be updated to include the former, and the latter will be included in all subsequent posts for ease of navigation. We decided to go with a multi-part series because the prior survey ended up being an unwieldy 30+ page PDF, which likely resulted in far less engagement. As I said above, in all subsequent survey posts we'll link to the previous articles for ease of navigation. This is probably an oversight on our part; it's likely we will revise the article to include some or all of this information very soon. +1, will edit that. The first handful of posts will be more descriptive, but you can expect future ones to inject a bit more commentary.
Students for High-Impact Charity Interim Report

[minor] In the sentence, "While more pilot testing is necessary in order to make definitive judgements on SHIC as a whole, we feel that we have gathered enough data to guide strategic changes to this exceedingly novel project." "exceedingly novel" seems like a substantial exaggeration to me. There have been EA student groups, and LEAN, before (as you know), as well as inter-school groups for many different causes.

baxterb (4y): I see where you're coming from. When we wrote this I think we were referring to the fact that there don't seem to be any clubs or programs at the high school level that have the same goals as SHIC, and that we're feeling like we're in uncharted territory as we've been reaching out to these institutions, teachers and students, mainly because building a curriculum into the network seems like a new approach. In any case, that's not well explained in the document, and I don't think that sentence gives off the message we want it to, so I'll strongly consider editing. Thanks!
Advisory panel at CEA

Note though that ACE was originally a part of 80,000 Hours, which was a part of CEA. The organizations now feel quite separate, at least to me.

Additionally, I am not paid by ACE or CEA. Being on the ACE Board is a volunteer position, as is this.

Generally, I don't feel constrained in my ability to criticize CEA, outside a desire to generally maintain collegial relations, though it seems plausible to me that I'm in an echo chamber too similar to CEA's to help as much as I could if I was more on the outside. Generally, trying to do as much good as possible is t…

EA essay contest for <18s

I found the formatting of this post difficult to read. I would recommend making it neater and clearer.

My 5 favorite posts of 2016

I would prefer if the title of this post was something like "My 5 favorite EA posts of 2016". When I see "best" I expect a more objective and comprehensive ranking system (and think "best" is an irritatingly nonspecific and subjective word), so I think the current wording is misleading.

Kerry_Vaughan (5y): Great idea. Changed the title accordingly.
vipulnaik (5y): My thoughts precisely!
Futures of altruism special issue?

For EAs that don't know, it might be helpful to provide some information about the journal, such as the size and general characteristics of the readership, as well as information about writing for it, such as what sort of background is likely helpful and how long the papers would probably be. Also hopes and expectations for the special issue, if you have any.

Denkenberger (5y): Thanks for the feedback; I have put some more description in.
What is the expected value of creating a GiveWell top charity?

This gets very tricky very fast. In general, the difference in EV between people's first and second choice plan is likely to be small in situations with many options, if only because their first and second choice plans are likely to have many of the same qualities (depending on how different a plan has to be to be considered a different plan). Subtracting the most plausible (or something) counterfactual from almost anyone's impact makes it seem very small.

EAs write about where they give

Nice idea, Julia. Thanks for doing this!

Concerns with Intentional Insights

No shame if you lose, so much glory if you win

Concerns with Intentional Insights

I don't think incompetent and malicious are the only two options (I wouldn't bet on either as the primary driver of Gleb's behavior), and I don't think they're mutually exclusive or binary.

Also, the main job of the EA community is not to assess Gleb maximally accurately at all costs. Regardless of his motives, he seems less impactful and more destructive than the average EA, and he improves less per unit feedback than the average EA. Improving Gleb is low on tractability, low on neglectedness, and low on importance. Spending more of our resources on him unfairly privileges him and betrays the world and forsakes the good we can do in it.

Views my own, not my employer's.

Kathy (5y): That was a truly excellent argument. Thank you.
Setting Community Norms and Values: A response to the InIn Open Letter

I would recommend linking to Jeff's post at the beginning of this one.

Should you switch away from earning to give? Some considerations.

But many of those people aren't earning to give. If they were, they would probably give more. So the survey doesn't indicate you are in the top 15% in comparative advantage just because you could clear $8k.

Pablo (5y): If many of those people aren't earning to give, then either fewer EAs are earning to give than is generally assumed, or the EA survey is not a representative sample of the EA population. Alternatively, we may question the antecedent of that conditional, and either downgrade our confidence in our ability to infer whether someone is earning to give from information about how much they give, or lower the threshold for inferring that a person who fails to give at least that much is likely not earning to give.
Why Animals Matter for Effective Altruism

Have you experienced downvoting brigades? How do you distinguish them from sincere negative feedback?

thebestwecan (5y): Evidence is (i) downvoting is on certain users/topics, rather than certain arguments/rhetoric, (ii) lots of downvotes relative to a small amount of negative comments, (iii) strange timing, e.g. I quickly got two downvotes on the OP before anyone had time to read it (<2 minutes). I think it happens to me some, but I think it happens a lot to animal-focused content generally. Edit: jtbc, I mean "systematically downvoting content that contributes to the discussion because you disagree with it, you don't like the author, or other 'improper' reasons." Maybe "brigades" was the wrong word if that suggests coordination, which I'm updating towards after searching online for more uses of the term. Though there might be coordination, not really sure.
June 2016 GiveWell board meeting

To be clear, I'm saying that I think sometimes an organization's practices usefully reflect a community's values and that Linch was being overly dismissive of this possibility, not making a claim about this specific case.

June 2016 GiveWell board meeting

If the "you" here is the Effective Altruism community, then the hiring practices of a single organization shouldn't be a significant sign that the community as a whole is elitist.

I don't think that's entirely right. I think that given that the community includes relatively few organizations (of which GiveWell is one of the larger and older ones) GiveWell's practices may be but aren't always a significant (and relatively concrete) reflection of and on the community's views.

(views are my own, not my employer's)

ClaireZabel (5y): To be clear, I'm saying that I think sometimes an organization's practices usefully reflect a community's values and that Linch was being overly dismissive of this possibility, not making a claim about this specific case.
vipulnaik (5y): For the benefit of people reading this thread who are not familiar with Claire, Claire is employed by GiveWell, though much of her work relates to the Open Philanthropy Project part thereof. ETA: I funded the writing of the original post, as disclosed by Issa at the end of the post. I don't think that biases my comment.
Effective Altruists really love EA: Evidence from EA Global

In fact, the team most likely to be growing EA, the Effective Altruism Outreach team was cautioning against growth. It seems reasonably clear that EA is growing virally and organically -- exactly what you want in the early days of a project.

Why do you want a project to grow virally and organically in its early days? That seems like the opposite of what I'd guess; when a project is young you want to steer it thoughtfully and deliberately and encourage it to grow slowly, so that it doesn't get off track or hijacked, and so you have time to onboard and build capacity in the new members. Has the EAO team come to think that fast growth is good?

Kerry_Vaughan (5y): Fair point. I've deleted that section as a result. The idea in my head is that it's better to be getting growth because people really value and want to share your product than it is to only be able to get growth through direct marketing. My view on growth is that as our tools to onboard new people improve, we'll want to grow faster. The tools aren't excellent now, but I'm optimistic that we will develop some better material soon.
angelinahli (5y): Thanks for the links! I wasn't aware the EAHub wikia library existed, but it looks like what I'm thinking of should go here. I'll get to work collating links soon.
ClaireZabel (5y): And: […]
EA != minimize suffering

That's deeply kind of you to say, and the most uplifting thing I've heard in a while. Thank you very much.

EA != minimize suffering

You see the same pattern in Clockwork Orange. Why does making Alex not a sadistic murderer necessitate destroying his love of music? (Music is another of our highest values, and so destroying it is a lazy way to signal that something is very bad.) There was no actual reason that makes sense in the story or in the real world; that was just an arbitrary choice by an author to avoid the hard work of actually trying to demonstrate a connection between two things.

Now people can say "but look at Clockwork Orange!" as if that provided evidence of anything, except that people will tolerate a hell of a lot of silliness when it's in line with their preexisting beliefs and ethics.

2cdc4825yI had fun talking with you, so I googled your username. :O Thank you for all the inspirational work you do for EA! You're a real-life superhero! I feel like a little kid meeting Batman. I can't believe you took the time to talk to me!
0cdc4825yTouché. I concede, but I just want to reiterate that fiction "can and often does involve revealing important truths," so that I am not haunted by the ghost of Joseph Campbell.
EA != minimize suffering

Consider The Giver. Consider a world where everyone was high on opiates all the time. There is no suffering or beauty. Would you disturb it?

I think generalizing from these examples (and from fictional examples in general) is dangerous for a few reasons.

Fiction is not designed to be maximally truth-revealing. Its function is as art and entertainment, to move the audience, persuade them, woo them, etc. Doing this can and often does involve revealing important truths, but doesn't necessarily. Sometimes, fiction is effective because it affirms cu... (read more)

2kokotajlod5yI agree that it's dangerous to generalize from fictional evidence, BUT I think it's important not to fall into the opposite extreme, which I will now explain...

Some people, usually philosophers or scientists, invent or find a simple, neat collection of principles that seems to more or less capture/explain all of our intuitive judgments about morality. They triumphantly declare "This is what morality is!" and go on to promote it. Then, they realize that there are some edge cases where their principles endorse something intuitively abhorrent, or prohibit something intuitively good. Usually these edge cases are described via science-fiction (or perhaps normal fiction).

The danger, which I think is the opposite danger to the one you identified, is that people "bite the bullet" and say "I'm sticking with my principles. I guess what seems abhorrent isn't abhorrent after all; I guess what seems good isn't good after all." In my mind, this is almost always a mistake. In situations like this, we should revise or extend our principles to accommodate the new evidence, so to speak. Even if this makes our total set of principles more complicated.

In science, simpler theories are believed to be better. Fine. But why should that be true in ethics? Maybe if you believe that the Laws of Morality are inscribed in the heavens somewhere, then it makes sense to think they are more likely to be simple. But if you think that morality is the way it is as a result of biology and culture, then it's almost certainly not simple enough to fit on a t-shirt.

A final, separate point: Generalizing from fictional evidence is different from using fictional evidence to reject a generalization. The former makes you subject to various biases and vulnerable to propaganda, whereas the latter is precisely the opposite. Generalizations often seem plausible only because of biases and propaganda that prevent us from noticing the cases in which they don't hold. Sometimes it takes a powerful piece of fi... (read more)