All of Gleb_T's Comments + Replies

Rational Politics Project

Gleb Tsipursky has also repeatedly said he will leave the EA movement.

This is simply false. See what I actually said here

Intentional Insights and the EA Movement – Q & A

Gleb Tsipursky has also repeatedly said he will leave the EA movement.

This is simply false. See what I actually said here

Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

Let me first clarify that I see the goal of doing the most good as my end goal, and YMMV - no judgment on anyone who cares more about truth than doing good. This is just my value set.

Within that value set, using "insufficient" means to get to EA ends is just as bad as using "excessive" means. In this case, being "too honest" is just as bad as "not being honest enough." The correct course of action is to calibrate one's level of honesty to maximize positive long-term impact for doing the most good.

Now...

Building Cooperative Epistemology (Response to "EA has a Lying Problem", among other things)

Sarah's post highlights some of the essential tensions at the heart of Effective Altruism.

Do we care about "doing the most good that we can" or "being as transparent and honest as we can"? These are two different value sets. They will sometimes overlap, and in other cases will not.

And please don't say that "we do the most good that we can by being as transparent and honest as we can" or that "being as transparent and honest as we can" is best in the long term. Just don't. You're simply lying to yourself and to ever...

JBeshir (+2, 5y): I at least would say that I care about doing the most good that I can, but am also mindful of the fact that I run on corrupted hardware, which makes ends-justify-means arguments unreliable, per EY's classic argument (http://lesswrong.com/lw/uv/ends_dont_justify_means_among_humans/):

""The end does not justify the means" is just consequentialist reasoning at one meta-level up. If a human starts thinking on the object level that the end justifies the means, this has awful consequences given our untrustworthy brains; therefore a human shouldn't think this way. But it is all still ultimately consequentialism. It's just reflective consequentialism, for beings who know that their moment-by-moment decisions are made by untrusted hardware."

This doesn't mean I think there's never a circumstance where you need to breach a deontological rule; I agree with EY when they say "I think the universe is sufficiently unkind that we can justly be forced to consider situations of this sort." This is the reason that, under Sarah's definition of absolutely binding promises, I would simply never make such a promise - I might say that I would try my best and that to the best of my knowledge there was nothing that would prevent me from doing a thing, or something like that - but I think the universe can be amazingly inconvenient and I don't want to be a pretender at principles I would not actually in extremis live up to.

The theory I tend to operate under I think of as "biased naive consequentialism", where I do naive consequentialism - estimating out as far as I can see easily - then introduce heavy bias against things which are likely to have untracked bad consequences, e.g. lying, theft. (I am kind of amused by how all the adjectives in the description are negative ones.) But under a sufficiently massive difference, sure, I'd lie to an axe murderer. This means there is a "price", somewhere.
This is probably most similar to the con
Rational Politics Project

We have a number of collaborative venues, such as a Facebook group, blog, email lists, etc. for people who get involved.

Rational Politics Project

Yup, we're focusing on a core of people who are upset about lies and deceptions in the US election and the Brexit campaign, and aiming to provide them with means to address these deceptions in an effective manner. That's the goal!

kbog (0, 5y): I mean a core as in a fixed point of interest. E.g. a forum, a blog, a website, a college club, etc. Something to seed the initiative that can stand on its own without having thousands of active members. You can't gather interested people without having something valuable to attract them.
Rational Politics Project

Broad social movement. We're aiming to focus on social media organizing at first, and then spread to local grassroots organizing later. There will be a lot of marketing and PR associated with it as well.

kbog (+1, 5y): I don't know if social movements ever start from concerted efforts like this. For instance, EA started because one or two organizations and philosophers got a lot of interest from a few people. Other social movements start spontaneously when people are triggered into protest and action by major events.

It seems good to have an identifiable 'core' to any kind of movement, like the idea I had - "a formal or semi-formal structure to aggregate and compare evidence from both sides." If you leverage swarm intelligence, prediction markets, argument mapping or more basic online mechanisms then you can start to make something impressive that stands on its own. Though such a system would be more difficult to make successful if you tried to make it relevant for the broad population rather than just EAs. It's just one example.
Rational Politics Project

Well, ok, are you really going to make this semantic argument with me? Trump is widely accepted by the Republican party as its leader. I'll be happy to agree on using the term "Republican" instead of "conservative" to address your concerns.

Setting Community Norms and Values: A response to the InIn Open Letter

You are mistaken; we have never claimed that we will distance InIn publicly from the EA movement.

We have previously talked about us not focusing on EA in our broad audience writings, and instead talking about effective giving - which is what we've been doing. At the same time, we were quite active on the EA Forum, and engaging in a lot of behind-the-scenes, and also public, collaborations to promote effective marketing within the EA sphere.

Now, we are distancing from the EA movement as a whole.

Setting Community Norms and Values: A response to the InIn Open Letter

FYI, we decided to distance InIn publicly from the EA movement for the foreseeable future.

We will only reference effective giving and individual orgs that are interested in being promoted, as evidenced by their willingness to provide InIn with stats on how many people we are sending to their websites, and similar forms of collaboration (yes, I'm comfortable using the term collaboration for this form of activity). Since GWWC/CEA seem not interested, we will not mention them in our future content.

Our work of course will continue to be motivated by EA con...

Czynski (+2, 5y): The only concrete change specified here is something you've previously claimed to already do. This is yet one more instance of you not actually changing your behavior when sanctioned.
Setting Community Norms and Values: A response to the InIn Open Letter

FYI, we removed references to GWWC and CEA from our documents

William_MacAskill (+5, 5y): Thanks, Gleb, it's appreciated.
Setting Community Norms and Values: A response to the InIn Open Letter

Interesting to see how many downvotes this got. Disappointing that people choose to downvote instead of engaging with the substance of my comments. I would have hoped for better from a rationally-oriented community.

Oh well, I guess it is what it is. I'm taking a break from all this based on my therapist's recommendation. Good luck!

I didn't downvote it, but I suspect others who did were - like me - frustrated by the accusation of not engaging with you on the substantive points that are summarised in Jeff's post. This post followed a discussion with literally hundreds of comments and dozens of people in this community discussing them with you.

I could explain why I think the term astroturfing does apply to your actions, even though they were not exactly the same as Holden's activities, but the pattern of discussion I've experienced and witnessed with you gives me very low credence that the discussion will lead to any change in our relative positions.

I hope the break is good for your health and wish you well.

Concerns with Intentional Insights

This makes sense for spreading the message among EAs, which is why we have the Effective Altruist Accomplishments Facebook group. I'll have to think further about the most effective ways of spreading this message more broadly, as I'm not in a good mental space to think about it right now.

Concerns with Intentional Insights

I am unwilling to take "active members of the EA group" as representative of the EA community, since your actual claim was that I made the experience of the EA community significantly worse, and that includes all members, not simply activists. On average, only 1% of any internet community contribute, but the rest are still community members. Instead, I am fine taking the bet that Benito - who is clearly far from friendly to InIn - describes.

I am even fine with going with your lower estimate of 14 out of 20.

I am fine including friends.

I am fine w...

Chriswaterguy (+3, 5y): I read "active" to mean actually involved in things, whether socially, online, funding, or campaigning. The word "activist" has a stronger connotation in spite of the same root.
Concerns with Intentional Insights

If the organizations concerned give permission, I am happy to share documentary evidence in my email of them reviewing the script and giving access to their high-quality logo images. I am also happy to share evidence of me running the final video by them and giving them an opportunity to comment on the wording of the description below the video, which some did to help optimize the description to suit their preferences. I would need permission from the orgs before sharing such email evidence, of course.

CarlShulman (+4, 5y): I am confident this is true. And at least some of the orgs have been contacted (see Neela's comment) and have the opportunity to disclaim if they wish. [ETA: and have said this was true in their own case, see Neela's second comment.]
Jeff_Kaufman (+9, 5y): These are entirely compatible. I had and have multiple concerns, and described the one I was most worried about as my "primary concern". There's no contradiction here. I think you're confused about what bikeshedding means: http://bikeshed.org/
In this case it's more of a motte-and-motte, with the document authors agreeing to focus on motte-A because we didn't have consensus that motte-B should be defended. (I also appreciate Gregory's response to your comment.)

I hope Jeff will forgive me for answering this comment on his behalf, and Gleb will forgive me for ceasing to pretend he is asking in good faith rather than engaging in risible mudslinging in a misguided attempt at a damage limitation exercise (I particularly like the catty "Are you revealing your true beliefs earlier or now?" - setting an example for aspiring rationalists on how to garb their passive-aggressiveness with the appropriate verbiage).

Jeff notes here and in what you link that there are two broad families of concerns: 1) your product is awful, and 2) your gr...

Concerns with Intentional Insights

I'm fine taking a random sample of 20 people.

Regarding positive connections, the claim made by Oliver is what we're trying to measure - that I made the experience of being a member of the EA community "significantly worse" for "something like 80%" of the people there. I had not made any claims about my positive connections.

Habryka (+8, 5y): After some private conversation with Carl Shulman, who thinks that I am miscalibrated on this, and whose reasoning I trust quite a bit, I have updated away from me winning a bet with the words "significantly worse", and also think it's probably unlikely I would win a bet with 8/10 instead of 7/10. I have however taken on a bet with Carl with the exact wording I supplied below, i.e. with the words "net negative" and 7/10. Though given Carl's track record of winning bets, I feel a sense of doom about the outcome of that bet, and on some level expect to lose it as well.

At this point, my epistemic status on this is definitely more confused, and I assign significant probability to me overestimating the degree to which people will report that InIn or Gleb had a negative impact on their experience (though I am even more confused about whether I am just updating about people's reports, or about the actual effects on the EA community, both of which seem like plausible candidates to me).
Concerns with Intentional Insights

I will think about this further, as I am not in a good space mentally to give this the consideration it deserves.

Concerns with Intentional Insights

One of the things I'm trying to do, as I noted above, is a meta-move to change the culture of humility about good deeds. I generally have an attitude of trying to be the change that I want to see in the world and leading by example. It's a long-term strategy that has short-term costs, clearly :-)

I understand the long-term goal. I'm claiming that this strategy is actually instrumentally bad for that long-term goal, as it is too widely read as negative (hence reinforcing cultural norms towards humility). More effective would be to embody something which is superior to current cultural norms but will still be seen as positive.

Concerns with Intentional Insights

I'll be happy to take that bet. So if I understand correctly, we'd choose a random 10 people from the EA FB group - ones who are not FB friends with you or me, to avoid potential personal factors coming into play - and then ask them if their experience of the EA community has been "significantly worsened" by InIn. If 8 or more say yes, you win. I suggest $1K to a charity of the choice of the winning party? We can let a third party send the messages to prevent any framing effects.

Habryka (+4, 5y): Since the majority of the FB group is inactive, I propose that we limit ourselves to the 50 or 100 most recently active members on the FB group, which will give a more representative sample of people who are actually engaging with the community (and since I don't want to get into debates of what precisely an EA is). Given that I am friends with a large chunk of the core EA community, I don't think it's sensible to exclude my circle of friends, or your circle of friends for that matter.

Splitting this into two questions seems like a better idea. Here is a concrete proposal:

1. Do you identify as a member of the EA community? [Yes] [No]
2. Do you feel like the engagement of Gleb Tsipursky or Intentional Insights with the EA community has had a net negative impact on your experience as a member of the EA community? [Yes] [No]

I am happy to take a bet that, chosen from the top 50 most recent posters on the FB group (at this current point in time), 7 out of 10 people who said yes to the first question will say yes to the second. Or, since I would prefer a larger sample size, 14 out of 20 people. (Since I think this is obviously a system of high noise, I only assign about 60% probability to winning this bet.) I sadly don't have $1000 left right now, but would be happy about a $50 bet.
Ben Pace (+4, 5y): Actually, I'd suggest just taking a random sample from the FB group. My guess is that your positive connections should be taken into account in this bet, Gleb - if you've personally had a significant positive impact on many people's lives in the movement (and helped them be better effective altruists) then that's something this is trying to measure. Also, 10 seems like a small sample; 20 seems better.
CarlShulman (+7, 5y): Indeed. However, I will note that my understanding (based on experience, analogy to law, and some web searching) is that my view is standard, while yours is not.
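As an editorial aside on the arithmetic of the bets discussed above (7-of-10 vs 14-of-20, with a stated ~60% chance of winning): if each sampled member's answer is treated as an independent yes/no with some probability p, the chance of winning is a binomial tail sum. The sketch below is purely illustrative - the p value used is hypothetical, not anyone's actual estimate from the thread.

```python
# Illustrative only: the per-person "yes" probability below is a
# hypothetical value, not an estimate made by anyone in the thread.
from math import comb

def prob_at_least(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance that at least k of n
    independently sampled members answer "yes"."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# If the true "yes" rate were, say, 75%, the chance of winning a
# 14-of-20 bet (threshold 70%) is the tail sum above:
print(prob_at_least(20, 14, 0.75))
```

Both proposed bets use the same 70% threshold, but the larger 20-person sample concentrates the outcome more tightly around the true rate, which is presumably why the larger sample size was preferred.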
Concerns with Intentional Insights

True, I don't have a very good perception of social status instincts. I focus more on the quality of someone's contributions and expertise rather than their status. I despise status games.

Also, there's a basic inference gap here: some people perceive InIn and me as being excessively self-promotional, while I am trying to break the typical and very unhelpful humility characteristic of do-gooders. See more about this in my piece here.

Kathy (+5, 5y): I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don't think the world is ready for it yet. I think promoting discussions about accomplishments among effective altruists is a great idea. I think if we do that enough, then effective altruists will eventually manage to present that to friends and family members effectively. This is a slow process, but I really think word of mouth is the best promotional method for spreading this cultural change outside of EA, at least for now.

I totally agree with you that the world should not shut altruists down for talking about accomplishments; however, we have to make a distinction between what we think people should do and what they are actually going to do. Also, we cannot simply tell people "You shouldn't shut down altruists for talking about accomplishments," because it takes around 11 repetitions for them to even remember that. One cannot just post a single article and expect everyone to update. Even the most popular authors in our network don't get that level of attention. At best, only a significant minority reads all of what is written by a given author. Only some, not all, of those readers remember all the points. Fewer choose to apply them. Only some of the people applying a thing succeed in making a habit.

Additionally, we currently have no idea yet how to present this idea to the outside world in a way that is persuasive. That part requires a bunch of testing. So, we could repeat the idea 11 times, and succeed at absolutely no change whatsoever. Or we could repeat it 11 times and be ridiculed, succeeding only at causing people to remember that we did something which, to them, made us look ridiculous. Then, there's the fact that the friends of the people who receive our message won't necessarily receive the message, too. Friends of our audience members will not understand this cultural element.
That makes it very hard for the people in our audience to practi
Qiaochu_Yuan (-1, 5y): I don't believe you.

FWIW, I read quite a bit of the self-promotional stuff as being status-gamey. I expect I'm not all that unusual in this.

That it gets read this way is a challenge here, and indeed a challenge to the general problem of trying to dial back humility re. good deeds. I think some humility about good deeds is instrumentally pretty important for sending the right signals and encouraging others to be attracted to the idea (not of course to the point of keeping them all private).

Gleb, there is a social norm that things one says in private email will not be publicized without consent. In the document, quotes attributed to you from private messages are only included where you have been asked for consent, it has been given, and you have had opportunities to review prior to publication.

The same expectation does not apply to you vetoing Michelle's statements about what she said (not what you said).

No "exchange" has been disclosed. Michelle has disclosed her own words and that she said them to you. Are you claiming people cannot report their own speech without the permission of their audience?

Concerns with Intentional Insights

Note – I will make separate responses as my original comment was too long for the system to handle. This is part three of my comments.

Now that we got through the specifics, let me share my concerns with this document.

1) This document is a wonderful testimony to bikeshedding, motte-and-bailey, and confirmation bias.

It’s an example of bikeshedding because the much larger underlying concerns are quite different from the relatively trivial things brought up in this document: see link

Consider the disclosures. Heck, even one of the authors of this document who...

Jeff_Kaufman (+2, 5y): I would like to strongly encourage you to keep posting in this thread, ~~and I would like to encourage others to upvote your posts here to show that your continued participation in this discussion is valued~~. Having this dialog out in the open helps keep everyone on the same page. EDIT: Rob has convinced me (http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8o9) that my recommendation that people upvote Gleb's responses was not a good idea. Instead, also per Rob's suggestion, I've added links to Gleb's three response comments at the end of the top-level post.
Jeff_Kaufman (+8, 5y): I have more or less two kinds of concerns:

* Gleb/InIn acting unethically, overstating impact, manufacturing the illusion of support
* InIn content turning people off of EA and EA ideas by presenting them badly

While I think the second category is more serious, the first category is much easier to document and communicate. And, crucially, the concerns in the first category are bad enough that we can just focus there. When I originally started writing this document I included quite a bit about my concerns in the second category, as you can see in this early draft (https://docs.google.com/document/d/1KxPSpc5GFefUIH8Fh4hD6isk1zENbDRJe76JuwxvQTA/edit). Carl and Gregory convinced me (http://www.jefftk.com/inin-types-of-concerns.png) that we should instead focus just on the first category. (Also, the section of conversation you cite (https://www.facebook.com/jefftk/posts/805642967912?comment_id=805689434792) doesn't show that I didn't care about the first category, just that I thought the second category was even more serious.)
Habryka (+7, 5y): I don't have much interest in engaging much further in this discussion, since I think most things are covered by other people, and I've already spent far more time than I think is warranted on this issue. I mostly wanted to quickly reply to this section of your comment, given that it directly addresses me:

"I find it hard to fathom how Oliver can say what he said, as all three comments and the upvotes happened before Oliver's comment. This is a clear case of confirmation bias - twisting the evidence to make it agree with one's pre-formed conclusion: see link. To me Oliver right now is fundamentally discredited as either someone with integrity or as someone who has a good grasp of the mood and dynamics of EAs overall, despite being a central figure in the EA movement and a CEA staff member."

I've responded to Carl Shulman's comment below regarding my thoughts on the hyperbole used in the linked comment, which I do think muddled the message, and for which I do apologize. I do also think that your strict dismissal here of my observation is worrying, and I think misses the point that I was trying to make with my comment. I do agree with Gregory's top comment on this post, in that I think your engagement with Effective Altruism has had a large negative impact on the community, and I do also think that you worsened the experience of being a member of the EA community for at least 70% of its members, and more likely something like 80%. If you disagree, I am happy to send Facebook messages to a random sample of 10-20 people who were recently active on the EA Facebook group, and ask them whether they felt that the work of InIn had a negative impact on their experience as an EA, and bet with you on the outcome. I think your judgement of me as someone "fundamentally discredited", "without integrity" or as someone out of touch with the EA community would be misguided, and that the way you wrote it feels like a fairly unjustified social attack to me. I am happy to have
Ben Pace (+6, 5y): Unfortunately, you and InIn have lost all credibility. There may be nuance to be had, there may be a few errors in the document, there may even be additional deeper reasons for why Carl Shulman, Jeff Kaufman, and the other excellent members of our community have spent so much of their time trying to explain their discomfort with you; however, when the core community has wasted this much time on you, and has shouted this strongly about their discomfort, I simply will not engage further. I'll not be reading any comment or post by yourself in future, or continuing any conversation with you. This is where the line is drawn in the sand.

Regarding Gleb's point #1 I would like to agree in particular that harsh hyperbole like "Gleb made the experience of almost all EAs significantly worse" is objectionable, and Oliver should not have used it.

Also it's worth signal-boosting and reiterating to all commenters on this thread that public criticism on the internet, particularly with many critics and one or a few people being criticized, is very stressful, and people should be mindful about that and empathize with Gleb's difficult situation. I will also add that my belief is that Gleb is ...

Regarding point #2, Gleb writes above:

2) This document engages in unethical disclosures of my private messages with others. When I corresponded with Michelle, I did so from a position as a member of GWWC and the head of another EA organization. Neither was I asked nor did I implicitly permit my personal email exchange to be disclosed publicly. In other words, it was done without my permission in an explicit attempt to damage InIn.

Here is the entirety of section 1.2, which does not cite or quote any statement from Gleb's email to Michelle, but rather ci...

Concerns with Intentional Insights

Note – I will make separate responses as my original comment was too long for the system to handle. This is part two of my comments.

Some of you will be tempted to just downvote this comment because I wrote it. I want you to think about whether that’s the best thing to do for the sake of transparency. If this post gets significant downvotes and is invisible, I’ll be happy to post it as a separate EA Forum post. If that’s what you want, please go ahead and downvote.

I disagree with other aspects of the post.

1) For instance, the points about affiliation, of wh...

Concerns with Intentional Insights

Note – I will make separate responses as my original comment was too long for the system to handle. This is part one of my comments.

Some of you will be tempted to just downvote this comment because I wrote it. I want you to think about whether that’s the best thing to do for the sake of transparency. If this post gets significant downvotes and is invisible, I’ll be happy to post it as a separate EA Forum post. If that’s what you want, please go ahead and downvote.

I’m very proud of and happy with the work that Intentional Insights does to promote rational ...

I have downvoted this comment because I think, as a community, we should strongly disapprove of this sort of threat:

"If this post gets significant downvotes and is invisible, I’ll be happy to post it as a separate EA Forum post. If that’s what you want, please go ahead and downvote."

The criticisms have been raised in an exceptionally transparent manner: Jeff made a public post on Facebook, and Gleb was tagged in to participate. Within that thread the plans to make this document were explained and even linked to: anybody (Gleb included) could r...

Students for High Impact Charity: Review and $10K Grant

Glad to hear of your support, SHIC is an important and worthwhile project!

Is not giving to X-risk or far future orgs for reasons of risk aversion selfish?

It seems to me that risk aversion and selfishness are orthogonal to each other - i.e., they are different axes. Based on the case study of Alex, it seems that Alex does not truly - with their System 1 - believe that a far-future cause is 10X better than a current cause. Their System 1 assigns a lower expected utility to donating to a far-future cause than to poverty relief, and the "risk aversion" is a post-factum rationalization of a subconscious, System 1 mental calculus.

I'd suggest for Alex to sit down and see if they have any emotional doubts about...

How to Measure and Optimize EA Marketing

Thanks for the tip! We haven't looked into these, we'll have to check them out.

How to Measure and Optimize EA Marketing

Good question about RCTs! We're actually gathering funding to conduct a study on various forms of messaging using Mechanical Turk.

Michael_S (+2, 5y): You might want to also try using Google Consumer Surveys. If you restrict it to a single question (you can put the message in the question), they're incredibly cheap.
Accomplishments Open Thread - August 2016

I think it's a really great piece, and look forward to seeing it on the forum!

Promoting EA in Russia: Barriers and opportunities

Oh, I didn't mean "going to the people" as an activity, but a cultural tradition of valuing the masses. Namely, get to that part of the intelligentsia that values such activities, and show that EA is actually a great way to achieve their goal of valuing human beings in the most effective way possible (and later perhaps expand to other sentient beings).

Ah, didn't know about Yuliy's disengagement. Thanks for updating me about that.

Promoting EA in Russia: Barriers and opportunities

Speaking from my perspective as someone who has researched Soviet civic engagement, I'm curious whether it would be good to tie EA to existing Russian cultural ideas. For example, the idea of "going to the people" might be useful. This sort of cultural translation is what is being tried right now in translating EA to Muslim norms of giving to charity. Also, have you worked with the sizable LessWrong community in Moscow? They might be particularly amenable to EA. I can put you in touch with the group leader there if you're not already; email me at gleb@intentionalinsights.org

berekuk (+1, 5y): I don't think "going to the people" would be a wise idea at this moment. EA at its core is much more fit for intelligentsia than for peasants (going by Narodniks (https://en.wikipedia.org/wiki/Narodniks) terminology), and we need this core to stay strong, in my opinion. Also, I am the leader of the Moscow LessWrong community, actually. I'm assuming you meant Yuliy? If so, he still maintains the lesswrong.ru website and a public page on vk.com, but he disengaged from the community and LW meetups a few years ago.
Introducing Envision: A new EA-Aligned Organization

Ok, thanks for clarifying. Sounds like there will be a significant focus on collaboration. Also consider collaborating with SHIC if you aren't yet!

Introducing Envision: A new EA-Aligned Organization

Thanks for sharing about the project! I'm curious how you plan to engage with existing EA chapters in colleges.

lucarade (0, 5y): Hi Gleb! The specifics aren't worked out yet, but we're working with EA Build and will coordinate with individual EA chapters at the universities where we found chapters. The general idea is that members of EA chapters who are interested in technology and the future will help with the setting up and growing of Envision chapters, and we will direct Envision members who seem interested in EA towards the EA chapter. There may be some events co-hosted; this is probably context-specific.
Promoting Effective Giving at Conferences via Speed Giving Games

Yup, scared straight is a famous example, but not a charity. Neither are the social interventions at the link. I'd love to see some charities that had scholarly studies proving them either ineffective or net negative.

RandomEA (0, 5y): I suppose it could be done with interventions instead of charities.
Promoting Effective Giving at Conferences via Speed Giving Games

I'm not sure I know of many studies of charities that show they have negative effects. Do you have any citations of such studies?

adamaero (0, 3y): One doesn't need studies to determine which charities have negative effects. (That's not true for the reverse, obviously.) Play Pump is the archetype. There are plenty of others, especially in Haiti. Gleb_T, go on GuideStar. If you're truly interested in finding the charities with negative effects, there are transparent charities that do more harm than good. Additionally, some have enormous administrative/advertising fees, a vice in itself. I was reading a Form 990 for a charity in Florida with over 85% put to advertising!
plinck (+1, 5y): Maybe something like this (http://programs.clearerthinking.org/can_you_guess_which_charities_work.html)? "Scared Straight" is the example I always hear.
Accomplishments Open Thread - August 2016

Interesting stuff about Effective Environmentalism. Can you share some relevant links for people who might want to learn more?

Accomplishments Open Thread - August 2016

I personally donated to the fundraiser and encourage other folks to do so as well, it's a great cause.

Accomplishments Open Thread - August 2016

Excellent to hear about both the outreach work, and the fundraiser too. We tend to focus too little energy on doing outreach by comparison to moving money, so it's great to see you and the EA Munich group doing so much great outreach!
