All of Jess_Whittlestone's Comments + Replies

Leaning into EA Disillusionment

Thanks Peter - I continue to feel unsure whether it's worth the effort for me to do this, and am probably holding myself to an unnecessarily high standard, but it's hard to get past that. At the same time, I also haven't been able to totally give up on the idea of writing something either - I do have a recent draft I've been working on that I'd be happy to share with you. 

I thought about the criticism contest, but I think trying to enter that creates the wrong incentives for me. It makes me feel like I need to write a super well-reasoned and evidenced... (read more)

I agree very much! I have a lot of half-finished but mostly not-even-really started drafts myself, and one thing that resonated in the OP was the need for spaces where those hunches can be explored, as opposed to expecting thought-through and well-articulated criticisms. 

Leaning into EA Disillusionment

Thank you for writing this - a lot of what you say here resonates strongly with me, and captures well my experience of going from very involved in EA back in 2012-14 or so, to much more actively distancing myself from the community for the last few years. I've tried to write about my perspective on this multiple times (I have so many half written Google docs) but never felt quite able to get to the point where I had the energy/clarity to post something and actually engage with EA responses to it. I appreciate this post and expect to point people to it sometimes when trying to explain why I'm not that involved in or positive about EA anymore.

6peterhartree1mo
For what it’s worth: I would very much like to read your perspective on this one day. Might the possibility of winning $20K [https://forum.effectivealtruism.org/posts/8hvmvrgcxJJ2pYR4X/announcing-a-contest-ea-criticism-and-red-teaming] help you get over the hill? I’d be happy to comment on drafts, of course. No pressure, of course—I imagine you’re very busy with your new role. In any case I’ve appreciated the conversations we’ve had about this in the past.
"Big tent" effective altruism is very important (particularly right now)

I also interpreted this comment as quite dismissive but I think most of that comes from the fact Max explicitly said he downvoted the post, rather than from the rest of the comment (which seems fine and reasonable).

 I think I naturally interpret a downvote as meaning "I think this post/comment isn't helpful and I generally want to discourage posts/comments like it." That seems pretty harsh in this case, and at odds with the fact Max seems to think the post actually points at some important things worth taking seriously. I also naturally feel a bit con... (read more)

This is a minor point in some ways but I think explicitly stating "I downvoted this post" can say quite a lot (especially when coming from someone with a senior position in the community).

I ran the Forum for 3+ years (and, caveat, worked with Max). This is a complicated question.

Something I've seen many times: A post or comment is downvoted, and the author writes a comment asking why people downvoted (often seeming pretty confused/dispirited). 

Some people really hate anonymous downvotes. I've heard multiple suggestions that we remove anonymity from vo... (read more)

4MaxDalton3mo
Nice to see you on the Forum again! Thanks for sharing that perspective - that makes sense. Possibly I was holding this to too high a standard - I think that I held it to a higher standard partly because Luke is also an organization/community leader, and probably I shouldn't have taken that into account. Still, overall my best guess is that this post distracted from the conversation, rather than adding to it (though others clearly disagree). Roughly, I think that the data points/perspectives were important but not particularly novel, and that the conflation of different questions could lead to people coming away more confused, or to making inaccurate inferences. But I agree that this is a pretty high standard, and maybe I should just comment in circumstances like this. I also think I should have been more careful re seeming to discourage suggestions about EA. I wanted to signal "this particular set of suggestions seems muddled" not "suggestions are bad", but I definitely see how my post above could make people feel more hesitant to share suggestions, and that seems like a mistake on my part. To be clear: I would love feedback and suggestions! [https://forum.effectivealtruism.org/posts/3jrkgEshwxJGzSrJi/please-give-cea-anonymous-feedback]
Long-Term Future Fund: August 2019 grant recommendations

Firstly, I very much appreciate the grant made by the LTF Fund! On the discussion of the paper by Stephen Cave & Seán Ó hÉigeartaigh in the addenda, I just wanted to briefly say that I’d be happy to talk further about both: (a) the specific ideas/approaches in the paper mentioned, and also (b) broader questions about CFI and CSER’s work. While there are probably some fundamental differences in approach here, I also think a lot may come down to misunderstanding/lack of communication. I recognise that both CFI and CSER could ... (read more)

4Habryka3y
I would definitely also be interested in talking about this, either somewhere on the forum, or in private, maybe with a transcript or summarized takeaways from the conversation posted back to the forum.
6Ben Pace3y
Neat! I’d be very interested in talking about/debating this, perhaps in the comments of another post. In particular, the sections above that feel most cruxy to me are the ones on the centrality of conceptual progress to AI strategy/policy work: what that looks like, how to figure out what new concepts are needed, or whether this is even an important part of AI policy, are all things I’d be interested to discuss.
Long-Term Future Fund: April 2019 grant recommendations

I'd be keen to hear a bit more about the general process used for reviewing these grants. What did the overall process look like? Were participants interviewed? Were references collected? Were there general criteria used for all applications? Reasoning behind specific decisions is great, but it also risks giving the impression that the grants were made just based on the opinions of one person, and that different applications might have gone through somewhat different processes.

Here is a rough summary of the process, it's hard to explain spreadsheets in words so this might end up sounding a bit confusing:

  • We added all the applications to a big spreadsheet, with a column for each fund member and advisor (Nick Beckstead and Jonas Vollmer) in which they would be encouraged to assign a number from -5 to +5 for each application
  • Then there was a period in which everyone individually and mostly independently reviewed each grant, abstaining if they had a conflict of interest, or voting positively or negatively if they thought the gra
... (read more)
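The voting scheme described above can be sketched in a few lines. This is a hypothetical illustration only - the application names, the abstention handling, and the funding threshold are all my own assumptions, not the fund's actual procedure; the source only says members scored applications from -5 to +5 and abstained on conflicts of interest.

```python
# Sketch of the described process: each fund member scores each
# application from -5 to +5, or abstains (None) on a conflict of
# interest; an application's overall score is the mean of the
# non-abstaining votes. Threshold and data are illustrative.

def aggregate(votes):
    """Mean of non-abstaining votes; None if everyone abstained."""
    cast = [v for v in votes if v is not None]
    return sum(cast) / len(cast) if cast else None

applications = {
    "Application A": [4, 3, None, 5],   # one abstention (COI)
    "Application B": [-2, 1, 0, -1],
}

THRESHOLD = 2.0  # assumed cutoff for funding, not from the source

for name, votes in applications.items():
    score = aggregate(votes)
    decision = "fund" if score is not None and score >= THRESHOLD else "pass"
    print(f"{name}: {score} -> {decision}")
```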
Long-Term Future Fund: April 2019 grant recommendations

Thanks for your detailed response Ollie. I appreciate there are tradeoffs here, but based on what you've said I do think that more time needs to be going into these grant reviews.

I don't think it's unreasonable to suggest that it should require 2 people full time for a month to distribute nearly $1,000,000 in grant funding, especially if the aim is to find the most effective ways of doing good/influencing the long-term future. (though I recognise that this decision isn't your responsibility personally!) Maybe it is very difficult for ... (read more)

2Evan_Gaensbauer3y
The way the management of the EA Funds is structured makes sense to me given the goals set for the EA Funds. So I think the situation in which it would make sense to pay 2 people full-time for one month to evaluate EA Funds applications is one where 2 of the 4 volunteer fund managers took a month off from their other positions to evaluate the applications. Finding 2 people out of the blue to evaluate applications for one month, without continuity with how the LTF Fund has been managed, seems like it'd be too difficult to accomplish effectively in the timeframe of a few months. In general, one issue the EA Funds face that other granting bodies in EA don't is that the donations come from many different donors. This means that how much the EA Funds receive and distribute, and how it's distributed, is much more complicated than what CEA or a similar organization typically faces.

I would be in favour of this fund using ~5% of its money to pay for staff costs, including a permanent secretariat. The secretariat would probably decrease pressure on grantmakers a little, and improve grant/feedback quality a little, which makes the costs seem worth it. (I know you've already considered this and I want to encourage it!)

I imagine the secretariat would:

-Handle the admin of opening and advertising a funding round

-Respond to many questions on the Forum, Facebook, and by email, and direct more difficult questions to the correct person

-Coordina

... (read more)

I strongly agree that I would like there to be more people who have the competencies and resources necessary to assess grants like this. With the Open Philanthropy Project having access to ~10 billion dollars, the case for needing more people with that expertise is pretty clear, and my current sense is that there is a broad consensus in EA that finding more people for those roles is among the top priorities, if not the top priority.

I think giving less money to EA Funds would not clearly improve this situation from this perspective at all, since most other granting bodies ... (read more)

Long-Term Future Fund: April 2019 grant recommendations
The plan seemed good, but I had no way of assessing the applicant without investing significant amounts of time that I did not have available (which is likely why you see a skew towards people the granting team had some past interactions with in the grants above)

I'm pretty concerned about this. I appreciate that there will always be reasonable limits to how long someone can spend vetting grant applications, but I think EA funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already kno... (read more)

4Evan_Gaensbauer3y
One issue with this is that the fund managers are unpaid volunteers who have other full-time jobs, so being a fund manager isn't a "job" in the most typical sense. Of course a lot of people think it should be treated like one though. When this came up in past discussions regarding how the EA Funds could be structured better, suggestions like hiring a full-time fund manager ran into trade-offs with other priorities for the EA Funds, like not spending too much overhead on them, or having the diversity of perspectives that comes with multiple volunteer fund managers.
but I think EA funds should not be hiring fund managers who don't have sufficient time to vet applications from people they don't already know

To be clear, we did invest time into vetting applications from people we didn't know, we just obviously have limits to how much time we can invest. I expect this will be a limiting factor for any grant body.

My guess is that if you don't have any information besides the application info, and the plan requires a significant level of skill (as the vast majority of grants do), you have to invest at l... (read more)

Effective Altruism Grants project update

This may be a bit late, but: I'd like to see a bit more explanation/justification of why the particular grants were chosen, and how you decided how much to fund - especially when some of the amounts are pretty big, and there's a lot of variation among the grants. e.g. £60,000 to revamp LessWrong sounds like a really large amount to me, and I'm struggling to imagine what that's being spent on.

3RyanCarey5y
60k GBP doesn't sound like too much to me to revamp LessWrong at all.
  • Probably years of time were spent on design/coding/content-curation for LW1, right?
  • LW has dozens of features that aren't available off the shelf
  • Starting the EA forum took a couple months of time. Remaking LessWrong will involve more content/moderator work, more design, and an order of magnitude more coding.
So it could easily take 1-2 person-years.
EA Survey 2017 Series: How do People Get Into EA?

Did SlateStarCodex even exist before 2009? I'm sceptical - the post archives only go back to 2013: http://slatestarcodex.com/archives/. Maybe not a big deal but does suggest at least some of your sample were just choosing options randomly/dishonestly.

3RyanCarey5y
They could also be referring to earlier writing by the same author at other addresses.
Anonymous EA comments

If I could wave a magic wand it would be for everyone to gain the knowledge that learning and implementing new analytical techniques cost spoons, and when a person is bleeding spoons in front of you you need a different strategy.

I strongly agree with this, and I hadn't heard anyone articulate it quite this explicitly - thank you. I also like the idea of there being more focus on helping EAs with mental health problems or life struggles where the advice isn't always "use this CFAR technique."

(I think CFAR are great and a lot of their techniques... (read more)

Use "care" with care.

Thanks for writing this Roxanne, I agree that this is a risk - and I've also cringed sometimes when I've heard EAs say they "don't care" about certain things. I think it's good to highlight this as a thing we should be wary of.

It reminds me a bit of how in academia people often say, "I'm interested in x", where x is some very specific, niche subfield, implying that they're not interested in anything else - whereas what they really mean is, "x is the focus of my research." I've found myself saying this wrt my own research, and... (read more)

Should I be vegan?

If you haven't tried just avoiding eggs, it seems worth at least trying.

Yeah, that seems right!

I don't understand the "completely trivial difference" line. How do you think it compares to the quality of life lost by eating somewhat cheaper food? For me, the cheaper food is much more cost-effective, in terms of world-bettering per unit of foregone joy.

I think this is probably just a personal thing - for me I think eating somewhat cheaper food would be worse in terms of enjoyment than cutting out dairy. The reason I say it's a basically tri... (read more)

1Nekoinentr7y
People's mileage on these things clearly varies very much, leading to a lot of talking past one another.
Should I be vegan?

Regarding willpower: If you maintain a vegan diet for a few months, it will probably stop requiring willpower since you will stop thinking of animal products as an option that you have available. This has been my experience and the experience of lots of other vegans, although it's probably not universal.

Yeah, my experience previously has been that the willpower required mostly decreases over time - there was definitely a time a while ago when the thought of buying and eating eggs was kind of absurd to me. This was slightly counterbalanced by sometimes g... (read more)

Should I be vegan?

I like the idea of counting non-vegan meals, that sounds great. Maybe I'll beemind it... then I'd have an incentive to keep it low, but I don't have to be absolute about it. Diana told me that whenever she eats something non-vegan she makes a donation to an animal welfare charity - I like that idea too.

The way I see this is getting from 85% to 100% is probably the most costly part for me (most inconvenience, most social cost) and I am getting the vast majority of the benefit with very little of the cost. I do feel uncomfortable with that 15% though. I th

... (read more)
Should I be vegan?

Yeah, I think lacto-vegetarianism is probably 95% of the way in terms of impact on animal suffering anyway (or even more.) As I said above, for me the main reason for cutting out dairy too is that I think if I eat dairy I might be more likely to slip into eating eggs too down the line. But it's possible I could just protect against that by setting more solid rules in place etc.

9Brian_Tomasik7y
I agree with Paul that there's a big gulf between milk and eggs. I wish we had a short, 5-letter word for "lacto-vegetarian" and that more people advocated lacto-vegetarianism as the baseline, since lacto-vegetarianism is quite a bit easier than veganism but has almost the same animal impact. "Veganism" is a Schelling point but isn't morally special, because you could go further still by choosing the plant products that are better for wild animals [http://reducing-suffering.org/crop-cultivation-and-wild-animals/#A_specific_attempt_at_ranking], by driving less to reduce the insects killed by driving [https://en.wikipedia.org/wiki/Roadkill#Insects], by doing more animal activism, etc.
Should I be vegan?

Yeah, good point. I'm definitely a lot less concerned about eating dairy than I am eggs. The main reason for lumping them together is that I think I'd find it quite a bit easier psychologically to be "vegan" than to be someone who "doesn't eat eggs", and I think I'd be more likely to keep it up, but it's possible that's more malleable than I think.

I'm not totally convinced that not eating dairy will make my life worse in any nontrivial way, though. I enjoy eating cheese, sure, but it's not an experience that's unlike any other. I'm pretty sure that the difference in enjoyment in a life in which I eat dairy products and one in which I don't will basically be completely trivial.

4Paul_Christiano7y
I can sympathize with this perspective, but if you are actually on the fence regarding animal welfare concerns, it seems like it would be a shame if you ended up eating eggs because you didn't want to give up milk! (e.g. if you actually caved because of cheese/butter). If you haven't tried just avoiding eggs, it seems worth at least trying. If the only reason it's psychologically harder is that "vegan" is a more familiar concept, then you will also be doing significant auxiliary good by giving more currency to lacto-vegetarianism. I expect more people would adopt this than would adopt veganism (if the two concepts had equal currency), and it seems basically equally morally good. I don't understand the "completely trivial difference" line. How do you think it compares to the quality of life lost by eating somewhat cheaper food? For me, the cheaper food is much more cost-effective, in terms of world-bettering per unit of foregone joy.
Hope: How Far Humanity Has Come

Ah, thanks for pointing these things out! I didn't realise either of these things - admittedly, I didn't have as much time as I would have liked to research the historical facts for this. A lot of these points were taken from some top posts on Quora on a thread about progress over the past few centuries, and I was (perhaps naively) hoping that crowdsourced info would give me fairly accurate info. Anyway, I was thinking of writing a more detailed article about human progress at some point, so I'll definitely try to do a bit more research and take these points into account - thanks for flagging my errors/sloppiness!

Why I Don't Account for Moral Uncertainty

Yeah, I think it was a really good thing to prompt discussion of, the post just could have been framed a little better to make it clear you just wanted to prompt discussion. Please don't take this as a reason to stop posting though! I'd just take it as a reason to think a little more about your tone and whether it might appear overconfident, and try and hedge or explain your claims a bit more. It's a difficult thing to get exactly right though and I think something all of us can work on.

Why I Don't Account for Moral Uncertainty

Good point to raise Owen! I strongly agree that we don't want to put people off contributing ideas that might run against default opinion or have flaws - these kinds of ideas are definitely really useful. And I think there were points in this post that did contribute something useful - I hadn't thought before about whether a subjectivist should take into account moral uncertainty, and that strikes me as an interesting question. I didn't downvote the post for this reason - it's certainly relevant and it prompted me to think about some useful things - although I was initially very tempted to, because it did strike me as unreasonably overconfident.

Why I Don't Account for Moral Uncertainty

Even if being a subjectivist means you don't need to account for uncertainty as to which normative view is correct, shouldn't you still account for meta-ethical uncertainty i.e. that you could be wrong about subjectivism? Which would then suggest you should in turn account for moral uncertainty over normative views.

I think you're kind of trying to address this in what you wrote about moral realism, but it doesn't seem clear or convincing to me. There are a lot of premises here (there's no reason to prefer one moral realism over another, we can just cancel ... (read more)

6[anonymous]8y
Ah, I definitely could have gone into more detail. This was just meant to prompt discussion on an important topic. I'll avoid posting (things like this) in the future. I'm sorry :(
The perspectives on effective altruism we don't hear

Thanks, Ben! This is a great idea, especially for student groups.

The perspectives on effective altruism we don't hear

Thanks for being so honest, Nicholas, really useful to hear your perspective - especially as it sounds like you've been paying a fair amount of attention to what's going on in the EA movement. I can empathise with your point 4. quite a bit and I think a fair number of others do too - it's so hard to be motivated if you're unsure about whether you're actually making a difference, and it's so hard to be sure you're making a difference, especially when we start questioning everything. For me it doesn't stop me wanting to try, but it does affect my motivation sometimes, and I'd love to know of better ways to deal with uncertainty.

Economic altruism

Have you heard about how Beeminder cofounders Danny and Bethany use exactly this to split up chores between them? http://messymatters.com/autonomy

Spitballing EA career ideas

Yeah, fair - the way I read it at the beginning sounded more like the whole thing was talking about 80k than perhaps it was. Anyway, just wanted to make clear that I think 80k very much agrees with most (if not all) of what you're saying here :)

Spitballing EA career ideas

This is cool Ben, thanks for doing this! I agree with the general idea that personal fit is very important and we should be open to considering a wide range of careers.

I do think this slightly misrepresents 80k though. You say they only have a list of four top careers, but in fact what they have is four careers they consider "very promising, but highly competitive and with low chance of success", and ten more careers they consider promising. I think 80k also talk about most of the careers you list - if not all the specific sub-categories. And th... (read more)

0Benjamin_Todd8y
Just adding a few more clarifications of our views here:
  • The current list is just the careers that we've found promising among those we've investigated, but we haven't yet investigated very many. There's definitely many other good options out there.
  • We also include 'learning value' [https://80000hours.org/career-guide/framework/learning-value/] in our framework, to flag the value of doing things you don't already know much about.
  • Many of these career options are already listed on the 'other careers we'd like to investigate' list here [https://80000hours.org/career-guide/top-careers/career-profiles/]
I really like the idea of having a list of promising careers we haven't investigated yet that includes a couple of sentences of explanation, like you've done here, so I'll probably add a page like that in the next iteration of the site. Thanks for suggesting it. You're also right that if you want to coordinate with the EA community, then there's extra value from doing stuff that other EAs aren't doing. We don't include that in our guide, however, because the guide isn't aimed only at EAs. I think this is an important consideration that's often neglected though, so I'm really pleased to see it being discussed here.
1Ben_Kuhn8y
Most of what I said was not attempting to represent 80k at all, though--it was largely based on how I observed real EAs making career decisions, which somehow often got framed as "tech vs. trading" (possibly vs. academia) despite 80k's excellent advice! The sentence that you picked out did end "...and ten or so other second- or third-tier options", which was my understanding of how that list was organized--perhaps the headings were different the last time I read it or something. At any rate, three of the second four are dominated in the sub-rankings by one of the first four, so I think it's fair to call them "second-tier" based on that, even if the headings don't make it explicit. (Edited for grammar)
Should Giving What We Can change its Pledge?

It's worth noting that many people do, and that this isn't obviously indefensible. So people can genuinely care more about existing people or existing creatures :-)

Yeah, I don't mean that it's unheard of - but I do think this is a pretty rare view within the EA community.

Should Giving What We Can change its Pledge?

For example, there are many people in the world today who believe that the best cause to help other people is to donate a significant part (10% infact) of their income towards god's plan by funding the expansion of evangelical churches across the world. Would you be comfortable with them signing the GWWC pledge and associating themselves with the organisation? What about those who feel that legalising drugs is the most important cause because they like to get high? or Hindu charities who fund sanctuaries for cows because they believe cows are sacred anima

... (read more)
2Ervin8y
Technically that's possible, but in practice GWWC members don't currently tend to have those beliefs - the pledging community has a clear feel of being focused on evidence-based poverty charities. The new pledge that's being consulted about would certainly include more people, and AlasdairGives is right that there's nothing in it that'd exclude the large numbers of people who tithe to their churches. If they joined en masse (which is unlikely absent a concerted effort to sign them up) that would certainly change the feel of the community to me. It's worth noting that many people do, and that this isn't obviously indefensible. So people can genuinely care more about existing people or existing creatures :-)
Should Giving What We Can change its Pledge?

Thanks for writing this up and seeking feedback, Michelle!

I'm in favour of the change - you know this, but I'm saying it here because I'm concerned that only people with strong disagreements will respond to this post, and so it will end up looking like the community is more against the change than it in fact is.

I think ultimately having a broader pledge will better represent the views of those who take it and the community, and agree that having a clear action which becomes standard for all EAs could be very beneficial.

4Vincent_deB8y
I think it'll pull the other way - I've felt awkward about explicitly stating my disagreement, whereas it's much easier to say 'Great!' I don't find this a convincing reason, because GWWC doesn't need to represent every view in any particular community (be it 'EA' or something else altogether - and many GWWC members have identified with GWWC rather than EA as such). And there can be a clear action for EAs to take (like donating) without that going through GWWC, and conversely pledging 10% is not the best candidate for a clear next step following someone first encountering EA after reading Peter Singer's book on it.
Should Giving What We Can change its Pledge?

Making this change would basically allow other causes that may have significant philosophical and/or practical baggage to trade on that reputation while undermining the focus and work on extreme poverty. It does nothing to help the fight against extreme poverty and may harm it, while boosting those who are seeking to advance other causes.

This makes it sound like the causes are competing with each other, which I don't think is true. Changing the pledge isn't about undermining the focus on extreme poverty, it's about recognising that what we ultimately ca... (read more)

1Evan_Gaensbauer8y
I consider existing online communities, and official organizations aligned with effective altruism, sufficient to host such debates (between existential risk reduction, poverty reduction, and/or other popular cause areas). If they aren't doing so already, I believe an investment of effort would make them so. Thus, I don't see that as an argument in favor of Giving What We Can changing its pledge.
6Dale8y
Well, they are competing for time and money, both of which are scarce.
6AlasdairGives8y
"it's about recognising that what we ultimately care about is saving lives, no matter where or when they are" This is not why I joined GWWC. I joined because I am concerned about causes that demonstrably and effectively help human people today - not causes that may conceivably help people in the future if we accept unfalsifiable/unprovable premises, or causes that provably help animals (because I reject the philosophical premises of that cause). I fully support cause X - effectively fighting poverty in the developing world. I find causes Y & Z interesting but highly problematic, and don't want to be a part of an organisation that lends them undue credibility and support beyond discussion and debate. I signed up because I believe in cause X - if the organisation changes to be about causes XYZ I would probably leave to find somewhere that only supports the cause I actually support, or just declare my donations independently or something. So this new pledge would change the whole relationship of the pledge. Currently, GWWC members make a pledge to give 10% of their income to a very narrow range of charities based on very strict criteria. Under the new pledge, all you need is a philosophical argument about why the cause you support is one that does "the most good" - all the rigour and testing based on actually comparable measures is gone. There are loads of other causes, not much discussed around here, which would qualify under the new pledge. For example, there are many people in the world today who believe that the best way to help other people is to donate a significant part (10% in fact) of their income towards god's plan by funding the expansion of evangelical churches across the world. Would you be comfortable with them signing the GWWC pledge and associating themselves with the organisation? What about those who feel that legalising drugs is the most important cause because they like to get high? Or Hindu charities who fund sanctuaries for cows because they b
Should Giving What We Can change its Pledge?

I don't think it's accurate to say that if the pledge were changed, GWWC would become a community of "singularitarians, rationalists and the like." It would be a community of people who want to donate 10% of their income to most effectively improve the lives of others, which could include singularitarians and rationalists, but certainly wouldn't be defined by it. Saying you wouldn't want to take the pledge for this reason seems a bit like saying you don't want to be part of the EA community because it contains those people.

Also, note that the cur... (read more)

Saying you wouldn't want to take the pledge for this reason seems a bit like saying you don't want to be part of the EA community because it contains those people.

I see why you might say that, and understand your position, but I hope you can see how it could be a little uncharitable to those of us who feel crowded out of what was originally an organisation that made a compelling case about our obligation to help people in the developing world (with things like the calculator showing that many potential GWWC members were in the richest 1-5% of the world)... (read more)

How a lazy eater went vegan

Thanks for posting this Topher. When I was vegan, my diet was very similar to the one you described, and all in all I didn't find it that difficult. You'll notice the "was" in that sentence though - the thing that got me was eating out or eating socially with friends - I found it very difficult to maintain a vegan diet then, and so I found myself slipping. I'd be interested in how you deal with this - do you stick to a vegan diet even when eating out or going to friends' houses, and if so, how difficult do you find it?

My solution for a while was ... (read more)

7lincolnq8y
Currently, I beemind (using a Do Less goal) "non-vegan meals per week". This has provided the mild positive pressure for me to choose to be vegan for most of my meals but allow myself to eat a few meals a week with friends without paying a social penalty.
1Greg_Colbourn8y
For eating out, it's nearly always possible to get a vegan version of a non-vegan dish, even when there isn't anything vegan listed on the menu (e.g. pizza without cheese). However, it does perhaps take a bit of effort/practice to get over the "I'm being difficult" feeling - keep in mind that veganism is a positive thing, not something to feel guilty about. Failing that, chips and salad is a fallback option :-) As for eating at friends' houses, I guess it's similar: you have to feel comfortable with requesting vegan food (or otherwise limiting your options). I've never been that into food, so these things don't bother me that much.
Career choice: Evaluate opportunities, not just fields

Great post Ben, this seems like a really good point to make clear. I think there's a general point here that it's much easier, and often better, to choose between specific options than general categories of options.

Generally when I think about career choice I think it's useful to begin by narrowing down to a few different fields that seem best for impact and fit, and then within those fields seek out concrete opportunities - and ultimately the decision will come down to how good the opportunities are, not a comparison between the fields themselves. But yo... (read more)

Effective altruism as the most exciting cause in the world

Great post, and definitely agree we should focus on this more.

Another thing I personally find exciting about effective altruism is that the question "How do I do the most good?" (with my career, money etc.) is a really motivating, intellectually challenging question to spend my time thinking about. So for those who enjoy spending their time thinking about interesting questions, effective altruism offers an environment in which to discuss one of the most important and stimulating questions out there - that's pretty exciting to me. I would imagine at least some others feel similarly.

An epistemology for effective altruism?

When I hover over the 3 upvotes in the corner by the title, it says "100% positive" - which suggests people haven't downvoted it, it's just that not many people have upvoted it? But maybe I'm reading that wrong.

I thought it was a good and useful post, I don't see any reason why people would downvote it - but would also be interested to hear why if there were people who did.

0[anonymous]8y
Same. Doesn't show any downvotes for me either. Maybe it's a bug?
Learning From Less Wrong: Special Threads, and Making This Forum More Useful

There are already has open threads

Think you've got an extra word in here :)

1Evan_Gaensbauer8y
Noted, and fixed. Thanks.
Cooperation in a movement supporting diverse causes

A nice middle ground between "not talking about our reasons for supporting different causes at all" and "having people try to persuade others that their cause is the most important one" could be to simply encourage more truth-seeking, collaborative discussion about causes.

So rather than having people lay out their case for different causes (which risks causing people to get defensive and further entrenching peoples' sense of affiliation to a certain cause, and a divide between different "groups" in the movement) it would be n... (read more)

Cooperation in a movement supporting diverse causes

My reading of Michelle's point was not that we should be writing about and defending causes that we wouldn't normally think of as EA (although this could also be beneficial!) - I think she meant, within the space of the causes EAs generally talk about, it would be good if people wrote about and defended causes different to the ones they normally do. So, for example, if a person is known for talking about and defending animal causes, they could spend some time also writing about and defending xrisk or poverty. This would then lessen the impression that many people are "fixed" to one cause, but wouldn't have the problem you mention. I might be reading this wrong though.

2Michelle_Hutchinson8y
I meant Jess' reading, sorry I wasn't clear. I was thinking people would write about / defend causes they thought were very effective, though they weren't the ones they usually focused on (and perhaps weren't the one they thought very most effective). I think the knee-jerk would mostly be a problem if people wrote about causes they didn't actually think were particularly effective, which does seem like it would be problematic.
Supportive Scepticism

Yeah, agree that this is a simple but useful idea!

One concern I would have with this in some situations is that it might cause you to anchor on your initial option too much - you might miss some good alternatives because you're just looking for things that most easily come to mind as comparable to your first option. But I don't know how often this would actually be a problem.

Help spread the movement!

On the first page of the link, there's a box at the bottom - it's not clear what, if anything, should go in there?

1Thos_Thorogood8y
Thanks, mistake expunged!
Supportive Scepticism

Nice quote, and very relevant - thanks for sharing! A general worry is that EA is often framed as inherently critical - as being sceptical of typical ways of doing good, as debunking and criticising ineffective attempts at altruism etc. - and this will mean we naturally end up using a lot of negative words.

I think there's some evidence that being critical outside of a group can make people within the group feel closer to each other - which makes sense, because it strengthens the feeling of "us" versus "them." But doing this with EA seem... (read more)

Supportive Scepticism

Great points Erica, thanks! I've been using very similar ways of thinking recently, actually, and it's helped a lot.

One thing I've found, though, is that it's easy to reflectively know that all of these points are true, but still not believe them on an emotional level, and so still find it difficult to make decisions. I think the main thing that's helped me here is just time and persistence - I'm gradually coming to believe these things on a more gut level the more times I do just make a decision, even though I'm not certain, and it turns out ok. I think ... (read more)

Supportive Scepticism

Thanks Dette :)

I suspect sometimes we feel it's tougher, stronger or somehow virtuous not to need support

Yeah, agree. I think the solution to this is just for more people to stand up and admit they need support, and for us to reward those people for doing so, so that it becomes more socially acceptable. This can be hard to do though, of course. But it's easy to forget that everyone is trying to project their most confident image, and that we may not always be as confident as we try to project!

Open Thread

There seem to be two questions here:

(1) Does believing in or identifying as EA require having a certain amount of hubris and arrogance?

(2) Is EA more likely to attract arrogant people than more modest people?

I think the answer to (1) is clearly no - you can believe that you should try to work out what the best way to use resources is, without thinking you are necessarily better than other people at doing it - it's just that other people aren't thinking about it. My impression is a lot of EAs are like this - they don't think they're in a better position to... (read more)

Open Thread

I think it originated with GiveWell - they used something like this framework for assessing cause areas, which 80k then based their framework on. It's possible I'm misremembering this though.

2RyanCarey8y
Yeah I concur that GiveWell started it.
Introduce Yourself

Hi, I'm Jess. I'm currently living in Oxford and doing a PhD in Behavioural Science - I'm looking at ways of making people more willing to consider evidence that conflicts with their existing views, and more likely to change their minds about important topics. I want to figure out how people can be more open-minded and truth-seeking, basically :)

I got involved in effective altruism after I finished my degree at Oxford, and came across 80,000 Hours. I'd always wanted to make a difference in the world, but was feeling a bit disillusioned about how to do it a... (read more)

To Inspire People to Give, Be Public About Your Giving

Nice post, Peter!

Aside from seeming boastful, I think the other risk of talking publicly about giving is that it can seem critical, or alienate people. I've definitely found some people respond defensively to me talking about giving - if I say I donate x%, they might look for reasons why I'm being unreasonable, or why my situation is very different from theirs. I think this is because they feel threatened - talking about giving can make some people feel like you are judging them for not giving, which provokes a defensive reaction.

Of course, in a... (read more)

1ImmaSix8y
To what extent would people be turned off if I told them that I give an amount that seems unreasonable from their point of view? Or that I sometimes choose to deny myself something because I think I can do much more good for people far away. Setting priorities that are not optimal for your own happy and comfortable lifestyle seems to be socially undesirable, even if the people near to you don't suffer from it. E.g. I tell them I give $x per month, which they would not expect from any sensible person with a modest income and would definitely not see themselves doing. Would it be better if I did not mention any number?
2Peter Wildeford8y
Definitely. I think it takes a good amount of social awareness to decide when and where to announce oneself. Perhaps a better title for this post is "To Inspire People to Give, Don't Be Overly Anonymous About Your Giving"...