A recent Facebook post by Jeff Kaufman raised concerns about the behavior of Intentional Insights (InIn), an EA-aligned organization headed by Gleb Tsipursky. In the discussion that followed, a number of further concerns were raised.

This post summarizes the concerns found with InIn. It also notes some accusations that turned out to be mistaken or unfounded, and some facts that arose which reflect well on InIn.

This post was contributed to by Jeff Kaufman, Gregory Lewis, Oliver Habryka, Carl Shulman, and Claire Zabel. They disclose relevant conflicts of interest below.


1 Exaggerated claims of affiliation or endorsement
1.1 Kerry Vaughan of CEA
1.2 Giving What We Can (GWWC)
1.3 Animal Charity Evaluators (ACE)
2 Astroturfing
2.1 The Intentional Insights blog
2.2 The Effective Altruism forum
2.3 LessWrong
2.4 Facebook
2.4.1 Soliciting upvotes and denying it
2.4.2 Not disclosing paid support
2.5 Amazon
3 Misleading figures
4 Dubious practices
4.1.1 Paid contractors' expected 'volunteering'
4.1.2 Further details regarding contractor 'volunteering'
4.2 "Best-selling author"
5 Inflated social media impact
5.1 Facebook
5.2 The Life You Can Save donations
5.3 Twitter
5.4 Pinterest
5.5 Presentations of media article traffic and reach
5.5.1 TIME article
5.5.2 Huffington Post
6 Mistaken/Unfair accusations
6.1 Supposed linearity of Twitter follower increase
6.2 Objections to Intentional Insights staff 'liking' Intentional Insights content
6.3 'Paid likes' from clickfarms
7 Positives
7.1 Jon Behar
7.2 Additional donations
7.3 Placement of articles in TIME and the Huffington Post
8 Policy responses from InIn
8.1 Post-criticism conflict-of-interest policy
8.2 Post-criticism Facebook boosting
9 Disclosures
10 Response comments from Gleb Tsipursky

1. Exaggerated claims of affiliation or endorsement

Intentional Insights claims 'active collaborations' with a number of Effective Altruist groups in its Theory of Change document, which appeared on its "About" page (as of August 21, 2016).

In a number of cases InIn makes use of the name of an effective altruist organization without asking for that organization's consent, based on minor interactions such as the organization answering questions about web traffic. From the 'Effective Altruism impact of Intentional Insights' document (August 19, 2016):

As detailed below, some of these groups, after learning of such claims and uses of their names, asked InIn to stop. Yet even in some of these cases InIn had not altered the mentions in its promotional materials months later. Nor does Tsipursky appear to have adopted a practice of checking with organizations before using their names in InIn promotional materials.

1.1. Kerry Vaughan of CEA

Tsipursky previously posted notes from a Skype conversation with Kerry Vaughan without his consent, and suggested he had endorsed Intentional Insights where he had not:

Tsipursky later apologized, edited the post, and said he had updated. Yet he later engaged in similar behavior (see sections 1.2 and 1.3 below).

1.2. Giving What We Can (GWWC)

Gleb has taken the Giving What We Can pledge, and contributed an article to the Giving What We Can blog on December 23, 2015. He has also mentioned and linked to GWWC in his articles elsewhere.

Michelle Hutchinson, Executive Director of Giving What We Can, wrote to Tsipursky in May 2016 asking him to cease "claiming to be supported by Giving What We Can." However, the use of Giving What We Can's name as an 'active collaboration' was not removed from Intentional Insights' website, and remained in both of the above InIn documents as of October 15, 2016.

1.3. Animal Charity Evaluators (ACE)

In the InIn impact document Tsipursky quotes Leah Edgerton of ACE:

Erika Alonso of ACE subsequently made the following statement:

2. Astroturfing

Astroturfing is creating the misleading impression of unaffiliated ("grassroots") support. In GiveWell's first year its co-founders engaged in astroturfing, and this was taken very seriously by its board: among other responses, the board demoted one of the co-founders and fined each of them $5,000. Tsipursky expressly claimed not to engage in astroturfing:

However, astroturfing is widespread across the Intentional Insights social media presence (documented in the sections below). Tsipursky did qualify his statement with "we are not asking people to do these sorts of activities in their paid time", but lack of payment isn't enough to prevent misleading people about the nature of the support. In any case, the distinction between contractors' paid and unpaid time is blurry (see section 4.1.1).

2.1. The Intentional Insights blog

Paid contractors for Intentional Insights leave complimentary remarks on the Intentional Insights blog, and the Intentional Insights account replies with gratitude, as if the comments were from strangers. At no stage do they disclose the financial relationship that exists between them. In the screenshot below (source), Candice, John, Beatrice, Jojo, and Shyam are all Intentional Insights contractors.

The most recent examples of this happened in late August 2016, after the initial post and discussion with Tsipursky on Jeff's Facebook wall, and during the drafting of this document.

2.2. The Effective Altruism forum

Tsipursky has done the same thing on the Effective Altruism forum. Here is one instance (note that "Nyor" also goes by "Jojo"):

Here is another example (note that "Anthonyemuobo" is a professional handle used by one of Tsipursky's acknowledged contractors, "Sargin"):

2.3. LessWrong

In February 2016, Tsipursky posted a link to writing by his wife, an InIn co-founder, without noting the connection:

This was a minor lapse, and one Gleb claimed to have learned and updated from. Yet similar behavior continued:

In March 2016, Intentional Insights' contractors created accounts and started posting non-specific praise on Tsipursky's LessWrong posts:

These are all people Tsipursky pays, but none of them acknowledged it in their comments or their posts in the welcome thread. Additionally, Tsipursky did not acknowledge this relationship when he thanked them for their remarks.

LessWrong user gjm pointed out that this was misleading, and Tsipursky acknowledged this was a problem and commented on Sargin's welcome post:

While Tsipursky knew both Beatrice Sargin and Alex Wenceslao had posted similar comments, since he had replied to them, he waited for these to be discovered and pointed out before acting:

This happened a third time, with JohnC2015:

2.4. Facebook

2.4.1. Soliciting upvotes and denying it

Tsipursky claimed "when I make a post on the EA Forum and LW I will let people who are involved with InIn know about it, for their consideration, and explicitly don't ask them to upvote":

In the comment Tsipursky denies soliciting upvotes, and demands that accusations that he did be substantiated or withdrawn. Six hours later someone responded with a screenshot of a post Tsipursky had made to the Intentional Insights Insiders group showing Tsipursky soliciting upvotes:

Tsipursky's response, a couple hours later in the same thread:

Tsipursky either genuinely believed posts like the above do not ask for upvotes, or he believed statements that are misleading on a common-sense interpretation are acceptable provided they are arguably 'true' on some tendentious reading. Neither is reassuring. [He subsequently conceded this was 'less than fully forthcoming'.]

2.4.2. Not disclosing paid support

Intentional Insights proposed producing EA T-Shirts, and received multiple criticisms. Tsipursky claimed he had run the design by multiple people. Again, Tsipursky did not disclose that at least five of them were people he pays:

2.5. Amazon

Tsipursky's contractor posted a 5-star review for his self-help book on Amazon without disclosing the affiliation:

Tsipursky emailed copies of his self-help book to Intentional Insights volunteers, including contractors, who responded by posting 5-star reviews on Amazon:

He later followed up with:

This is true but incomplete: the 8th review is by Asraful Islam, a volunteer affiliated to Intentional Insights.

Another Intentional Insights affiliate, unpaid at that time but now a paid virtual assistant, Elle Acquino, posted another 5-star review, not in the top 10. In that review, however, the connection to Tsipursky and his nonprofit institute was disclosed.

3. Misleading figures

In December 2015 and January 2016, Tsipursky repeatedly claimed that his articles were shared thousands of times as evidence of the effectiveness of his approach. In fact, he had been reporting Facebook 'likes' and all views on Stumbleupon as shares, greatly exaggerating the extent of social media engagement.

The initial point reflected a common issue with the interpretation of social media activity counters on websites. After this was explained to him, Tsipursky claimed to have updated on the correction. However, a June 2016 document on Intentional Insights' Effective Altruism impact again reported views as shares, exaggerating sharing many times over.

4. Dubious practices

4.1.1. Paid contractors' expected 'volunteering'

Tsipursky only takes on contractors who spend at least two hours "volunteering" for Intentional Insights for each paid hour:

In a follow-up discussion, Tsipursky suggested that contractors could temporarily reduce their volunteer hours in special circumstances, but he would not affirm that contractors would be allowed to simply say no to "volunteering":

Depending on the nature of the volunteer work, this requirement seems potentially unethical, effectively requiring that contractors do three times as much work for a fixed amount of money. We also suggest this relationship undermines the distinction Tsipursky offers between 'paid' and 'volunteer time' and the defence that the promotion his contractors undertake on his behalf is innocuous as it occurs in their 'volunteer time'.

4.1.2. Further details regarding contractor 'volunteering'

Subsequent to the preparation of the above section, Tsipursky provided additional information about how he came into contact with contractors, their donations, prior unpaid volunteering, wages, and other matters, as evidence of genuine support. This information does provide such evidence, but it also supports concerns regarding the linkage of paid and unpaid work and contractors' financial interests.

Tsipursky states the following regarding initial meetings and hiring:

Tsipursky stated the following regarding the length of unpaid volunteering prior to the first paid work:

He also cites donations by contractors, implemented by reducing their paid hours or hourly rate, as evidence of genuine support:

I have pointed out many times that there is plenty of evidence showing that those folks who do contracting are passionate enthusiasts for InIn. Let's take the example of John Chavez, who the document brought up. He chose to respond to a fundraising email to our supporter listserve in June 2016 – long before Jeff Kaufman's original post – by donating $50 per month to InIn out of his $300 monthly salary:

This is bigger than a typical GWWC member, at over 15% of his annual income. Let me repeat – he voluntarily, out of his own volition in response to a fundraising that went out to all of our supporters, chose to make this donation. Just to be clear, we send out fundraising letters regularly, so it’s not like this was some special occasion. It was just that – as he said in the letter – it happened to fall on the 1-year anniversary of him joining InIn and he felt inspired and moved by the mission and work of the organization to give.

Before you go saying John is unique, here is another screenshot of a donation from another contractor who in October 2015, in response to a fundraising email, made a $10/month donation:

Again, voluntarily, out of her own volition, she chose to make this donation.

Tsipursky also indicates that paid and unpaid hours by contractors constitute only a minority of work hours at InIn, with most hours contributed by volunteers without financial compensation:

Regarding wages and requirement/expectations of unpaid volunteering, Tsipursky wrote the following:

The Upwork (formerly oDesk) freelancer marketplace on which contractors are hired has a minimum wage of $3.00 per hour. Combined with the expected unpaid volunteering, the typical effective wage would be $1.00 per combined hour, one third of the platform minimum.

John is given as an example of a higher-paid contractor at $7.50 per hour. However, this is combined with 3 hours of unpaid volunteering for each paid hour, rather than 2, for a combined wage of $1.875 per hour, prior to his donation of 1/3 of that wage.

In effect, the expectation of volunteering systematically circumvents the Upwork minimum wage for contractors. However, the Upwork minimum wage is a corporate policy, not a national or local labor law, and contractors in low-income countries may be earning substantially more than local minimum wages or average incomes. For example, according to Wikipedia the hourly minimum wage in US dollars at nominal exchange rates is $0.54 in Nigeria. In the Philippines minimum wages vary by location and sector, but Wikipedia lists a range of roughly $0.60-$1.20 per hour for non-agricultural workers, with the upper end applying in the capital, Manila. So the wage per combined (paid + volunteer) hour of work would not appear to conflict with legal minimum wages in contractors' jurisdictions. Furthermore, in a number of these jurisdictions the minimum wage is closer to the median wage, and unemployment is high.
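The effective-wage arithmetic above can be sketched as a quick check (a minimal illustration using only the figures quoted in this section; the helper function name is ours):

```python
def effective_hourly_wage(paid_rate, volunteer_hours_per_paid_hour):
    """Wage per combined (paid + volunteer) hour of work."""
    return paid_rate / (1 + volunteer_hours_per_paid_hour)

# Upwork's $3.00/hour platform minimum with the expected 2 unpaid
# volunteer hours per paid hour:
typical = effective_hourly_wage(3.00, 2)  # $1.00 per combined hour

# John's reported $7.50/hour rate with 3 unpaid volunteer hours per
# paid hour:
john = effective_hourly_wage(7.50, 3)  # $1.875 per combined hour
```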

Regarding the link between paid and unpaid hours, Tsipursky describes it as an informal understanding:

In aggregate the additional statements provide evidence of pre-existing support for InIn from new contractors. However, they also confirm a linkage of paid and unpaid labor, and contractor financial interests in promotional activity occurring during 'volunteer' hours.

4.2. "Best-selling author"

Tsipursky includes being a 'best-selling author' in his standard bio. For example, on his Patreon:


And on his Amazon author page:

Normally, a reader would take "best-selling author" to mean hitting a major best-seller list like the New York Times, which indicates that very many people have decided to buy the book, and is a hard signal to fake. In Tsipursky's case, "best-selling author" means that his book was very briefly the top seller in a sub-sub-category of Amazon. Further, he reports offering his book for free and encouraging friends and contractors to download and review it. In its first two weeks the book sold 50 copies at $3 each. Cumulatively it has sold 500 copies at $3 each, and been downloaded 3500+ times free. In contrast, NYT bestseller status requires thousands of sales over the first week. Amazon bestseller status is calculated hourly by category: in small categories three purchases in an hour can win the #1 bestselling author label.

Many of those giving the book 5 star reviews are social contacts of Tsipursky, some of them paid or volunteer Intentional Insights staff, but do not disclose this association (see section 2.5).

As of August 22, 2016 the book is ranked as follows:

In light of this, calling oneself a 'bestselling author' on this sort of performance is potentially misleading.

We note that the practice of claiming bestselling author status on the basis of bestseller lists that involve very small actual sales may be widespread. This does not, however, prevent it from being misleading or controversial. For example, when Brent Underwood attained Amazon best-seller status in under an hour for a few dollars, with a book that was simply a picture of his foot, media coverage generally suggested that this highlighted a problematic practice.

5. Inflated social media impact

5.1. Facebook

Tsipursky has cited social media engagement as evidence of impact. However, in many cases it appears that this engagement is illusory. In the case of Facebook, it appears to have resulted from paid Facebook post boosting, which led to hundreds of likes on posts from clickfarms, in a process described by Veritasium: clickfarm accounts like enormous numbers of things they have not been directly paid to like in order to manipulate Facebook’s algorithms. Facebook boosting systematically attracts these clickfarm accounts, a risk which is exacerbated by boosting to regions where clickfarms are located (although clickfarms also have fake accounts purporting to be from all around the world).

In the case of InIn posts, InIn paid for that boosting. In February 2016, Tsipursky argued that this was resulting in genuine engagement and reach:

For a number of InIn blog posts with large numbers of likes (for example 318 for this recent one) these likes appear to be primarily the result of clickfarms. Accounts liking this post like vast numbers of disparate things. Here are some random selections from the middle of the list of that post:

There is further circumstantial evidence: the likes often come from accounts in low-income countries with substantial clickfarm operations. Tsipursky defended this as coincidental overlap caused by Intentional Insights' targeting of low-income countries; however, countries with similar demographics but without large clickfarm operations are not well represented.

In arguing for the impact of his writing, Tsipursky cited a post on the TLYCS blog that got 500 likes in its first day, while typical posts got 100-200 likes:

However, this also appears to be a case of Facebook ad boosting eliciting engagement from clickfarms, this time paid for by a former TLYCS employee (subsequently asked to stop by TLYCS) rather than by InIn, according to this statement from TLYCS' Jon Behar:

The profiles contributing the likes show no other engagement with TLYCS, or with EA ideas:

After Jeff Kaufman raised concerns about the pattern of Facebook likes in February 2016, Tsipursky does not seem to have looked into the issue further prior to the August 2016 discussion, when outside observers provided indisputable evidence and explained the role of boosting in generating clickfarm likes. While the boosting-clickfarm link is counterintuitive, the lack of any other engagement by the clickfarm accounts was apparent both before and after the concerns raised in February. Failure to examine the ineffectiveness of these social media channels, even after concerns were raised, raises questions about InIn's practices as an outreach and content marketing organization.

5.2. The Life You Can Save donations

In his "Effective Altruism impact of Intentional Insights" document (archived copy), Tsipursky claims that content he has published with The Life You Can Save is able to "regularly reach an audience of over 5,000, at least 12% of whom make a donation", suggesting over 600 donations per article, based on a reference letter from a former TLYCS employee. However, these figures were incorrect: TLYCS estimates that the total number of visitors who landed on Tsipursky's blog posts at the TLYCS blog was ~3,000 (rather than tens of thousands), with donations directly from those pages likely totalling 2-3 (rather than hundreds).

While the reference letter Tsipursky cites could easily give that false impression, it is implausible in light of other information available to him about the impact of his pieces. For example, Tsipursky also cites an article in a major news outlet as producing two donations to GiveDirectly totalling $500:

Since two donations is far less than ~600, this "12% of 5,000 views" number was clearly not sanity checked before being used to argue the case for Intentional Insights to EAs and in a fundraising document aimed at EAs. It's possible that Tsipursky simply took a surprisingly good estimate from a partner organization at face value, but one might expect an expert in marketing to investigate why this channel was performing so much better than his other channels.

5.3. Twitter

Tsipursky implied that his 10k Twitter followers represent organic interest:

The InIn account follows approximately as many accounts as follow it: 11.7k to 11.4k. Oliver observed that many of these accounts have "100% follow-back" in their descriptions. It seems they are offering an exchange: InIn follows these accounts, and they follow InIn back in return, or vice versa. This is not an indication of actual interest from fans, and these accounts have almost no organic engagement with InIn, such as retweets:

5.4. Pinterest

InIn follows over 20,000 people on Pinterest, far more people than follow it. As on the InIn Facebook page and Twitter, follower engagement is extremely low, and dominated by persons affiliated with InIn, suggesting the vast majority of followers are not genuine.

Examining the profiles of followers, there appears to be a very high rate of clickfarm/advertising accounts. Here are 10 randomly selected InIn Pinterest follower accounts. 10 out of 10 appear to be spam/advertising/clickfarm accounts:

5.5. Presentations of media article traffic and reach

5.5.1. TIME article

In the InIn EA impact document we see this:

The document does not make clear that the article did not appear in the print magazine, so print readers would not have been exposed to it. Online, we are left to anchor on a figure of 65 million views, without any reference to the actual views of the article (which were far lower).

Somewhat later in the document we see this:

As another example, here are numbers in a spreadsheet we set up recently to track clicks to EA nonprofit websites from the Time piece we published.

However, while the article made the case for GiveWell-recommended charities and EA charity evaluators, only 132 clicks reached those organizations through the article, 70 of which did not immediately bounce, according to InIn's traffic figures. Specifically, in the original InIn spreadsheet the 'signed up to newsletter or converted in other ways' column had a value of 13 for ACE, and 1 for 'clicked on donate button'.

The corrected spreadsheet shows a value of 2 rather than 13 for 'signed up to newsletter.'

Thus InIn knew that the product of traffic and click-through was very low, suggesting some combination of low traffic for a piece on Time's website and low click-through rates. However this negative information was removed from the main text of the document while the 65 million figure (for all articles on the TIME website, including dubious traffic) was made prominent.

5.5.2. Huffington Post

The InIn EA impact document also included this discussion of a Huffington Post article:

However, he provided no evidence of reaching new audiences via the placement in the Huffington Post. Instead, he provided an example of an already-supportive Facebook friend, who apparently encountered the article via Tsipursky's Facebook page, not the Huffington Post.

6. Mistaken/Unfair accusations

6.1. Supposed linearity of Twitter follower increase

It was suggested that Tsipursky's Twitter page shows surprisingly linear increases in followers over time (e.g. +8 followers a day for 10 days in a row), which could be indicative of click-farming. This piece of evidence is likely mistaken: the tool used (sharecounter) probably linearly interpolates over days on which it does not record a user's Twitter followers, so the apparent linearity is an artifact.

6.2. Objections to Intentional Insights staff 'liking' Intentional Insights content

In the course of the original discussion of Jeff's post on Facebook, numerous people took exception to staff or volunteers 'liking' or sharing InIn content. This criticism is misguided: the practice is common both for nonprofits generally and within the EA community, where many EAs affiliated with a given group 'like' or share content without disclosing their affiliation. Although issues around appropriate disclosure can be subtle, on reflection the authors of this document do not consider acts like this on social media significant enough to warrant disclosure of interests.

6.3. 'Paid likes' from clickfarms

In the February 2016 discussion it was suggested that Tsipursky might be directly paying for likes from clickfarms. However, as discussed in section 5.1, while the likes in question appear to have resulted from paid Facebook boosting, and to be from clickfarms, they were not directly paid for. Instead, the boosting attracted clickfarm likes through an accidental process explained well in the linked Veritasium video.

7. Positives

In the course of research into and discussion around InIn, some facts that reflect well on InIn were discovered. These are listed below. We don't think this comprises all evidence favourable to InIn: the impact document, Tsipursky's post on the EA forum, and the Intentional Insights website offer further evidence. (We have not looked at these closely enough to have a view on them.)

7.1. Jon Behar

One TLYCS employee who has worked with Tsipursky on Giving Games says Tsipursky has made helpful introductions:

Behar is also quoted in the InIn EA impact doc as saying:

7.2. Additional donations

TLYCS has information indicating that Tsipursky's posts combined drove about two or three donations, and that the Huffington Post article resulted in two donations to GiveDirectly totalling $500. Tracking donations is hard, so this is likely an underestimate.

7.3. Placement of articles in TIME and the Huffington Post

Tsipursky's articles in TIME and the Huffington Post got lots of exposure for EA ideas. Additionally, being able to get articles placed there is impressive.

8. Policy responses from InIn

During discussions with Tsipursky regarding drafts of this document he mentioned some InIn policy changes made in response to the criticisms. This section does not reflect any other changes InIn may have made, primarily because we haven't been able to put in the time to follow up on each practice and see whether it has continued. We also note that Tsipursky provided additional information regarding Amazon sales, contractor names, and payment practices upon request for this document.

8.1. Post-criticism conflict-of-interest policy

Following the discussion under Jeff Kaufman's post in August 2016, InIn created a conflicts of interest policy document:

8.2. Post-criticism Facebook boosting

Tsipursky now states:

Regarding InIn social media policy, we are making sure to avoid boosting any more posts to clickfarm countries. We're generally not boosting posts right now to anyone but fans of the page who live in the US and other rich countries. We found we couldn't ban identifiable clickfarm accounts from the FB page, unfortunately, so we're being really cautious about boosting posts.

9. Disclosures

Many people contributed to this document, some of them anonymously. Below are disclosures from people who contributed substantially and want to be clear about any potential conflicts of interest. None of the individuals below contributed on behalf of an employer or organization, and their contributions should not be taken to imply any stance on the part of any organization with which they are affiliated.

  • Jeff Kaufman has donated to the Centre for Effective Altruism (CEA), 80,000 Hours, and Giving What We Can. He has volunteered for Animal Charity Evaluators in a very minor capacity. His wife, Julia Wise, works for CEA and serves on the board of GiveWell.

  • Gregory Lewis has previously worked as a volunteer for Giving What We Can and 80,000 Hours. He has donated to Giving What We Can and the Global Priorities Project.

  • Oliver Habryka currently works for CEA, and has been active in EA community organizing in a variety of roles.

  • Carl Shulman currently works for the Future of Humanity Institute, and consults for the Open Philanthropy Project. He previously worked for the Machine Intelligence Research Institute (MIRI). He has previously done some consulting and volunteering for the Center for Effective Altruism, especially 80,000 Hours. His wife is executive director of the Center for Applied Rationality and a board member of MIRI.

  • Claire Zabel works at the Open Philanthropy Project, and serves on the board of Animal Charity Evaluators. She has donated to a variety of EA organizations and has close ties with other people in the EA community.

10. Response comments from Gleb Tsipursky

Tsipursky has responded in the comments below: part one, part two, part three.



My fellow contributors and I aimed in this document to have as little of an 'editorial line' as possible: we were not all in complete agreement on what this should be, so thought it better to discuss the appropriate interpretation of the data we provide in the comments. I offer mine below: in addition to the disclaimers and disclosures above, I stress I am speaking for myself, and not on behalf of any other contributor.

I believe InIn and Tsipursky are toxic to the EA community. I strongly recommend that EAs do not spend time or money on InIn going forward, nor any future projects Tsipursky may initiate. Insofar as there may be ways for EA organisations to insulate themselves from InIn, I urge them to avail themselves of these opportunities.

A key factor in this extremely adverse judgement is my extremely adverse view of InIn's product. InIn's material is woeful: a mess of misguided messaging (superdonor, the t-shirts, 'effective giving' versus 'effective altruism', etc. etc.), crowbarred in aspirational pop-psychology 'insights', tacky design and graphics, and oleaginous self-promotion seeping through wherever it can (see, for example, the free sample of Gleb's erstwhile 'amazon bes... (read more)

I suspect the reason InIn's quality is low is because, given their reputation disadvantage, they cannot attract and motivate the best writers and volunteers. I strongly relate to your concerns about the damage that could be done if InIn does not improve. I have severely limited my own involvement with InIn because of the same things you describe. My largest time contribution by far has been in giving InIn feedback about reputation problems and general quality. A while back, I felt demoralized with the problems, myself, and decided to focus more on other things instead.

That Gleb is getting so much attention for these problems right now has potential to be constructive. Gleb can't improve InIn until he really understands the problem that's going on. I think this is why Intentional Insights has been resistant to change. I hope I provided enough insight in my comment about social status instincts for it to be possible for us all to overcome the inferential distance. I'm glad to see that so many people have come together to give Gleb feedback on this. It's not just me trying to get through to him by myself anymore.

I think it's possible for InIn to improve up to standards with enough feedback and a lot of work on Gleb's part. I mean, that is a lot of work for Gleb, but given what I've seen of his interest in self-improvement and his level of dedication to InIn, I believe Gleb is willing to go through all of that and do whatever it takes. Really understanding what has gone wrong with Intentional Insights is hard, and it will probably take him months.

After he understands the problems better, he will need a new plan for the organization. All of that is a lot of work. It will take a lot of time. I think Gleb is probably willing to do it. This is a man who has a tattoo of Intentional Insights on his forearm. Because I believe Gleb would probably do just about anything to make it work, I would like to suggest an intervention. In other words, perhaps we should ask him
Gregory Lewis replied:
Hello Kathy, I have read your replies on various comment threads on this post. If you'll forgive the summary, your view is that Tsipursky's behaviour may arise from some non-malicious shortcomings he has, and that, with some help, these can be mitigated, thus leading InIn to behave better and do more good. In medicalese, I'm uncertain of the diagnosis, strongly doubt the efficacy of the proposed management plan, and I anticipate a bleak prognosis. As I recommend generally, I think your time and laudable energy are better spent elsewhere.

A lot of the subsequent discussion has looked at whether Tsipursky's behaviour is malicious or not. I'd guess in large part it is not: deep incompetence combined with being self-serving and biased towards wanting one's org to succeed probably explain most of it - regrettably, Tsipursky's responses to this post (e.g. trumped-up accusations against Jeff and Michelle, pre-emptive threats if his replies are downvoted, veiled hints at 'wouldn't it be bad if someone in my position started railing against EA', etc.) seem to fit well with malice. Yet this is fairly irrelevant. Tsipursky is multiply incompetent: at creating good content, at generating interest in his org (i.e. almost all of its social media reach is illusory), at understanding the appropriate ambit for promotional efforts, at not making misleading statements, and at changing bad behaviour. I am confident that any EA I know in a similar position would not have performed as badly. I highly doubt this can all be traced back to a single easy-to-fix flaw.

Furthermore, I understand multiple people approached Tsipursky multiple times about these issues; the post documents problems occurring over a number of months. The outside view is not favourable to yet further efforts. In any case, InIn's trajectory in the EA community is probably fairly set at this point. As I write this, InIn is banned from the FB group, CEA has officially disavowed it, InIn seems to have lost donors and prospective
I'm not completely sure what's going on with Gleb, but I feel a great deal of concern for people with Asperger's, and I think it made me overly sympathetic in this case. Thank you for this.

One thing to consider is that too much charity for Gleb is actively harmful for people with ASDs in the community.

If I am at a party of a trusted friend and know they've only invited people they trust, and someone hurts my feelings, I'm likely to ascribe it to a misunderstanding and talk it out with them. If I'm at a party where lots of people have been jerks to me before, and someone hurts my feelings, I'm likely to assume this person is a jerk too and withdraw.

By saying "I'm updating" and then making the same mistakes again, Gleb is lessening the value of those words. He is teaching people that it's not worth correcting others, because they won't change. This is most harmful to the people who most need direct feedback and the longest lead time to incorporate it.

Wow. More excellent arguments. More updates on my side. You're on fire. I almost never meet people who can change my mind this much. I would like to add you as a friend.
[This was originally a comment calling for Gleb to leave the EA community with various supporting arguments, but I've decided I don't endorse online discussions as a mechanism for asking people to leave EA. See this comment [http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8nl] of mine for more.]
He wrote [http://effective-altruism.com/ea/yg/an_ea_at_a_cfar_rationality_workshop_thoughts_and/] that he is a 'monthly donor' to CFAR. On the other hand, a cynic might note that he has used his interactions with CFAR to promote himself and his organization, e.g. his linked favorable review of CFAR comes with a few plugs for Intentional Insights, and CFAR (or rather the erroneous acronym-unpacking 'Center for Advanced Rationality') appeared as a collaboration in InIn promotional documents. My understanding is that the impression that he was aligned with CFAR (and EA) had also made some CFAR donors more open to InIn fundraising pitches. He has also taken the Giving What We Can pledge, but I don't know what that means in practice. He has said he and his wife fund most of InIn's budget (which would presumably be more than 10% of his income) and claims that it is highly effective, so he might take that to satisfy his pledge. [Disclosure: my wife is the executive director of CFAR, but I am speaking only for myself.]

Note: I am socially peripheral to EA-the-community and philosophically distant from EA-the-intellectual-movement; salt according to taste.

While I understand the motivation behind it, and applaud this sort of approach in general, I think this post and much of the public discussion I've seen around Gleb are charitable and systematic in excess of reasonable caution.

My first introduction to Gleb was Jeff's August post, read before there were any comments up, and it seemed very clear that he was acting in bad faith and trying to use community norms of particular communication styles, owning up to mistakes, openness to feedback, etc. to disarm those engaging honestly and enable the con to go on longer. I don't think I'm an especially untrusting person (quite the opposite, really), but even if that's the case, nearly every subsequently revealed detail and interaction has confirmed this. Gleb responds to criticism he can't successfully evade by addressing it in only the most literal and superficial manner, and continues on as before. It is to the point that if I were Gleb, and had somehow honestly stumbled this many times and fallen into this pattern over and over, I would feel I had to withdraw on... (read more)

I take your point as "aren't we being too nice to this guy?" but I actually really like the approach taken here, which seems extremely fair-minded and diligent. My suspicion is this sort of stuff is long-term really valuable because it establishes good norms for something that will likely recur in future. I'd be much more inclined to act with honesty if I believed people would do an extremely thorough public investigation into everything I'd said, rather than just calling me names and walking away.

I don't understand what you're claiming here. Are you saying you'd be honest in a community if you thought it would investigate you a lot to determine your honesty, but dishonest otherwise? Why not just be honest in all communities, and leave the ones you don't like?
I think he means that it is human behaviour to do that, not that he does it himself.
I literally still don't understand. I can understand the motivation to be an asshole in communities you think won't treat you fairly, but why be a lying asshole? I think the OP wrote "honesty" and meant something else.
Ben Pace · 6y
I think the common point of intervention for people telling mis-truths is not holding themselves back when they don't really have enough evidence. A person might be about to write up a quick reply, and in most communities, know that they're not going to be held accountable for any mischaracterisations of others' opinions, or for referring inaccurately to studies and data. In those communities, the comments are awful. In communities where you know that, if you do this over a sustained period, Carl Shulman, Jeff Kaufman, Oliver Habryka, Gregory Lewis and more are gonna write tens of thousands of words documenting your errors, you'll be more likely to note when you haven't quite substantiated the comment you're about to hit 'send' on.
There's an important difference between repeatedly making errors, jumping to conclusions, or being attached to a preconceived notion (all of which I've personally done in front of Carl plenty of times), and the sort of behavior described in the OP, which seems more like intentional misrepresentation for the sake of climbing a social status gradient.

I'd like to agree partially with MichaelPlant and Paul_Crowley, in so far as I'm glad that I'm part of a community that responds to problems in such a charitable and diligent manner. However, I feel they missed the most important point of shlevy's comment. Without arguing for a less fair-minded and thoughtful response, we can still ask the following: Gleb started InIn back in 2014; why did it take us two years to get to the point where we were able to call him out on his bad behaviour? This could've been called out much earlier.

I think the answer looks like this:

Firstly, Gleb has learned the in-group signals of communicating in good-faith (for example, at every criticism, he says he has "updated", and he says 'thank you' for criticism). This alone is not a problem - it would merely take a few people to realise this, call it out, and then he could be asked to leave the community.

There's a second part however, which is that once a person has learned (from experience) that Gleb is acting in bad faith, the next time that person comes to the discussion, everybody else sees the standard signals of good-faith communication, and as such the person may be hesitant to treat Gleb a... (read more)

I just want to highlight that I feel like part of this post is based on a false premise; you mention InIn was started in 2014. While that may be true, all of the incidents in EA (and Less Wrong) circles cited above date to November 2015 or later. Gleb's very first submission to the EA forum is from October 2015. By saying 'it took two years' and then talking about 'months rather than years', you give the impression that Gleb could have been excluded sometime back in 2015 but wasn't, which I think is pretty misleading (though presumably unintentionally so).

The truth is that it took a little over 9 months from Gleb's first post to Jeff's major public criticism. Nine months is a decent amount of time, and not trivial. But let's not overstate the problem.

"There's a second part however, which is that once a person has learned (from experience) that Gleb is acting in bad faith, the next time that person comes to the discussion, everybody else sees the standard signals of good-faith communication, and as such the person may be hesitant to treat Gleb as they would treat someone else who was clearly acting in bad faith. This is because they would be seen as unnecessarily harsh by people without the background experiences - as was seen multiple times in the original Facebook thread, when people (who did not have the past experience with Gleb) were confused by the harshness of the criticism, and criticised the tone of the conversation."

I do strongly agree with this. I had some very frustrating conversations around that thread.

Pretty much agree with you and shlevy here, except that wasting hundreds of collective hours carefully checking that Gleb is acting in bad faith seems more like a waste to me. If the EA community were primarily a community that functioned in person, it would be easier and more natural to deal with bad actors like Gleb; people could privately (in small conversations, then bigger ones, none of which involve Gleb) discuss and come to a consensus about his badness, that consensus could spread in other private smallish then bigger conversations none of which involve Gleb, and people could either ignore Gleb until he goes away, or just not invite him to stuff, or explicitly kick him out in some way.

But in a community that primarily functions online, where by default conversations are public and involve everyone, including Gleb, the above dynamic is a lot harder to sustain, and instead the default approach to ostracism is public ostracism, which people interested in charitable conversational norms understandably want to avoid. But just not having ostracism at all isn't a workable alternative; sometimes bad actors creep into your community and you need an immune system capable of rejecting them. In many online communities this takes the form of a process for banning people; I don't know how workable this would be for the EA community, since my impression is that it's spread out across several platforms.

Seems worth establishing the fact that bad actors exist, will try to join our community, and engage in this pattern of almost plausibly deniable shamelessly bad behavior. I think EAs often have a mental block around admitting that in most of the world, lying is a cheap and effective strategy for personal gain; I think we make wrong judgments because we're missing this key fact about how the world works. I think we should generalize from this incident, and having a clear record is helpful for doing so.

Ben Pace · 6y
Yes! But... you said your opening line as though it disagreed somehow? I said:
I may be misinterpreting you here; you wrote that, and while I think this behavior is in some sense admirable, I think it is on net not delightful, and the huge waste of time it represents is bad except to the extent that it leads to better community norms around policing bad actors.
Ben Pace · 6y
Yup, we are in agreement. (I was just noting how sweet it was that we do this much more kindly than most other communities. It's totally not optimal though.)
Yes, insofar as communities do that, but typically in emotive and highly biased ways. EA at least has more constructive norms for how these things are discussed. It's not perfect, and it's not fast, but here I see people taking pains to be as fair-minded as they can be. (We achieve that to different degrees, but the effort is expected.)

My System 1 doesn't like this. Giving this power to a group of people and suggesting that we accept their guidance... that feels cultish, and not very compatible with a community of critical thinkers.

Scientific departments have ethics boards. Good online communities (e.g. Hacker News) have moderators. Society as a whole has a justice system, and other groups that check on the decisions made by the courts. Suggesting that it feels cult-y to outsource some of our community norm-enforcement (so as to save the community as a whole significant time input, and make the process more efficient and effective) is... I'm just confused every time someone calls something totally normal 'cult-y'.

I deliberately said "My System 1 doesn't like this." and "that feels cultish" – on an intuitive level, I feel uncomfortable, and I'm trying to work out why. I do see value in having effective gatekeepers. I'm not even sure what it means to be "banned" from a movement consisting of multiple organisations and many individuals. It may be that if the process is clearly defined, and we know who is making the decision, on whose behalf, I'd be more comfortable with it.
Ben Pace · 6y
Thanks for clarifying! Just in case you're interested: I think the word 'cultish' is massively overloaded (with negative connotations) and misused. I'd also point out that saying that a statement is one's gut feeling isn't equivalent to saying one doesn't endorse the feeling, and so I felt pretty defensive when you suggested my idea was cultish and not compatible with our community. I wrote this because I thought you might prefer to know the impacts of your comments rather than not hearing negative feedback. My apologies in advance if that was a false assumption.
Thanks – helpful feedback (and from Owen also). In hindsight I would probably have kept the word "cultish" while being much more explicit about not completely endorsing the feeling.
Owen Cotton-Barratt · 6y
Something went wrong with the communication channel if you ended up feeling defensive. However, despite generally agreeing with you about problems with the word "cultish", I actually think this is a reasonable use-case. It has a lot of connotations, and it was being reported that the description was triggering some of those connotations in the reader. That's useful information, and it may be worth some effort to avoid the idea being perceived that way if it is pursued (your stack of examples makes it pretty clear that this is avoidable).

I think being too nice is a failure mode worth worrying about, and your points are well taken. On the other hand, it seems plausible to me that it does a more effective job of convincing the reader that Gleb is bad news precisely by demonstrating that this is the picture you get when all reasonable charity is extended.

Shlevy, I think I might actually agree with everything you said here with the exception of the characterization of Intentional Insights as a "con". I can see the behavior on the outside very clearly. On the outside, Gleb has said a list full of incorrect things. On the inside, the picture is not so clear. What's going on inside his head? If this is a con, what in the world does he want? He can't seem to make money off of this. Con artists have a tendency to do very, very quick things, with a very, very low amount of effort, hoping to gain some disproportionate reward. Gleb is doing the opposite. He has invested an enormous amount of time (not to mention a permanent Intentional Insights tattoo!) and (as far as I know) has been concerned about finances the whole time. He's not making a disproportionate amount of money off of this... and spreading rationality doesn't even look like one of those things which a con artist could quickly do for a disproportionate reward... so I am confused.

If I thought Intentional Insights was a con, I'd be right with you trying to make that more obvious to everyone... but I launched my con detector and that test was negative. Maybe you use a different con detector. Maybe, to you, it is irrelevant whether Gleb is intentionally malicious or merely incompetent. Perhaps you would use the word "con" either way, just as people use the word "troll" either way. For the same reasons that we should face the fact that there's a major problem with the inaccuracies Intentional Insights outputs, I think we ought to label the problem we're seeing with Intentional Insights as accurately as possible.

Whether Gleb is incompetent or malicious is really important to me. If Gleb is doing this because of a learning disorder, I would really like to see more mercy. According to Wikipedia's page on psychological trauma, there are a lot of things about this post which Gleb may be experiencing as traumatic events. For instance: humiliation, rejection, and majo... (read more)

I don't think incompetent and malicious are the only two options (I wouldn't bet on either as the primary driver of Gleb's behavior), and I don't think they're mutually exclusive or binary.

Also, the main job of the EA community is not to assess Gleb maximally accurately at all costs. Regardless of his motives, he seems less impactful and more destructive than the average EA, and he improves less per unit feedback than the average EA. Improving Gleb is low on tractability, low on neglectedness, and low on importance. Spending more of our resources on him unfairly privileges him and betrays the world and forsakes the good we can do in it.

Views my own, not my employer's.

That was a truly excellent argument. Thank you.
Thanks Kathy!
Witch hunting and attacks do nothing for anyone.

Which is fine. People can look at clear and concise summaries like the one above and come to their own conclusions. They don't need to be told what to believe, and they don't need to be led into groupthink.
Attacking people who are bad protects other people in the community from having their time wasted or being hurt in other ways by bad people. Try putting yourself in the shoes of the sort of people who engage in witch hunts because they're genuinely afraid of witches, who, if they existed, would be capable of and willing to do great harm. To be clear, it's admirable to want to avoid witch hunts against people who aren't witches and won't actually harm anyone. But sometimes there really are witches, and hunting them is less bad than not.

This approach doesn't scale. Suppose the EA community eventually identifies 100 people at least as bad as Gleb in it, and so generates 100 separate posts like this (costing, what, 10k hours collectively?) that others have to read and come to their own conclusions about before they know who the bad actors in the EA community are. That's a lot to ask of every person who wants to join the EA community, not to mention everyone who's already in it, and the alternative is that newcomers don't know who not to trust. The simplest approach that scales (both with the size of the community and with the size of the pool of bad actors in it) is to kick out the worst actors so nobody has to spend any additional time and/or effort wondering / figuring out how bad they are.
Yes, but Gleb isn't actively hurting anyone. You need an ironclad rationale before deciding to just build a wall in front of people who you think are unhelpful. Even if you could really have 100 people starting their own organizations related to EA... it's not relevant. Just because it won't scale doesn't mean it's not the right approach with 1 person. We might think that the time and investment now is worthwhile, whereas if there were enough questionable characters that we didn't have the time to do this with all of them, then (and only then) we'd be compelled to scale back.
The problem is that Gleb is manufacturing false affiliations in the eyes of outsiders, and outsiders who only briefly glance at lengthy, polite documents like this one are unlikely to realize that's what's happening.
Gleb did lots of things and the post describes them, so it's about more than just manufacturing false affiliations. The issue is not that the post is too long or contains too many details; that's a silly thing to complain about. The issue is whether the post should be adversarial and whether it should manufacture a dominant point of view. The answer to that is No.

In the original Facebook thread I was highly critical of Intentional Insights. I have not read all the followup here yet, but I would like to note that after that thread the next "thing" I saw from Intentional Insights was this post about EA marketing. I thought that was a highly competent and interesting contribution to the EA community. All of the ongoing concerns about II may stand - but there are clearly a few people associated with the org who have valuable contributions to make to the future of the community.

The most embarrassing aspect of the exclusionary, witch hunt, no-due-diligence point of view which some people are advocating in the comments here is that it probably would have merited the early and permanent exclusion of the Singularity Institute/MIRI from the EA community. Holden wrote a blog post on LessWrong saying that he didn't like their organization and didn't think they were worth funding. Some assorted complaints have been floating around the web for a long time complaining about them associating with neoreactionaries and about LessWrong being cultists, as well as complaints about the way they communicate and write. There's been a few odd 'incidents' (if you can call them that) over the years between MIRI, LessWrong, and the rationalist sphere.

It would be easy to jumble all of that together into some kind of meta-post documenting concerns, and there is certainly no shortage of people who are willing and able to write long impassioned posts expressing their feelings, saying that they want nothing to do with SIAI/MIRI, and recommending others adhere to that. We could have done that, lots of people would have come out of the woodwork to add their own complaints, the conversation would have reached critical mass, and boom - all of a sudden, half the steam behind AI safety goes down the tubes.

It's easy to find online communities today where people are mind-numbingly dismissive of anything AI-related due to a poorly-argued, critical-mass groupthink against everything LessWrong. Good thing that we're not one of them.

I agree that it's important that EA stay open to weird things and not exclude people solely for being low status. I see several key distinctions between early SI/early MIRI and Intentional Insights:

  • SI was cause-focused; II is a fundraising org. Causes can be argued on their merits. For fundraising, "people dislike you for no reason" is in and of itself evidence you are bad at fundraising and should stop.
  • I think this is an important general lesson. Right now "fundraising org" seems to be the default thing for people to start, but it's actually one of the hardest things to do right and has the worst consequences if it goes poorly. With the exception of local groups, I'd like to see the community norms shift to discourage inexperienced people from starting fundraising groups.
  • AFAIK, SI wasn't trying to use the credibility of the EA movement to bolster itself. Gleb is, both explicitly (by repeatedly and persistently listing endorsements he did not receive) and implicitly. As long as he is doing that, the proportionate response is criticizing him / distancing him from EA enough to cancel out the benefits.
  • The effective altruism name wasn't worth as much when MIRI was getting started. There was no point in faking an endorsement because no one had heard of us. Now that EA has some cachet with people outside the movement there exists the possibility of trying to exploit that cachet, and it makes sense for us to raise the bar on who gets to claim endorsement.
Chronological nitpick: SingInst (which later split into MIRI and CFAR) is significantly older than the EA name and the EA movement, whose birth and growth are attributable in significant part to SingInst and CFAR projects.
My experience (as someone connected to both the rationalist and Oxford/Giving What We Can clusters as EA came into being) is that its birth came out of Giving What We Can, and the communities you mentioned contributed to growth (by aligning with EA) but not so much to birth.
You can equally draw a list of distinctions which point in the other direction: distinctions that would have made it more worthwhile to exclude MIRI than to exclude InIn. I've listed some already.

I don't think this comparison holds water. Briefly, I think SI/MIRI would have mostly attracted criticism for being weird in various ways. As far as I can tell, Gleb is not acting weird; he is acting normal in the sense that he's making normal moves in a game (called Promote-Your-Organization-At-All-Costs) that other people in the community don't want him playing, especially not in a way that implicates other EA orgs by association.

Whatever you think of that object-level point, an independent meta-level point: it's also possible that the EA movement excluding SI/MIRI at some point would have been a reasonable move in expectation. Any policy for deciding who to kick out necessarily runs the risk of both false positives and false negatives, and pointing out that a particular policy would have caused some false positive or false negative in the past is not a strong argument against it in isolation.

They've attracted criticism for more substantial reasons; many academics didn't and still don't take them seriously because they have an unusual point of view. And other people believe that they are horrible people, somewhere in between neoreactionary racists and a Silicon Valley conspiracy to take people's money. It's easy to pick up on something being a little off-putting and then get carried down the spiral of looking for and finding other problems.

The original and underlying reason people have been pissed about InIn this entire time is that they are aesthetically displeased by their content. "It comes across as spammy and promotional". An obvious typical mind fallacy. If you can fall for that then you can fall for "Eliezer's writing style is winding and confusing."

Highly implausible. AI safety is a large issue. MIRI has done great work and has itself benefited tremendously from its involvement. Besides that, there have been many benefits to EA from aligning with rationalists more generally.

Yes, but people are taking this case to be a true positive that proves the rule, which is no better.
Some of the criticisms I've read of MIRI are so nasty that I hesitate to rehash them all here for fear of changing the subject and sidetracking the conversation. I'll just say this: MIRI has been accused of much worse stuff than this post is accusing Gleb of right now. Compared to that weird MIRI stuff, Gleb looks like a normal guy who is fumbling his way through marketing a startup. The weird stuff MIRI / Eliezer did is really bizarre. For just one example, there are places in The Sequences where Eliezer presented his particular beliefs as The Correct Beliefs. In the context of a marketing piece, that would be bad (albeit in a mundane way that we see often), but in the context of a document on how to think rationally, that's more like... egregious blasphemy. It's a good thing the guy counter-balanced whatever that behavior was with articles like "Screening Off Authority" and "Guardians of the Truth".

Do some searches for web marketing advice sometime, and you'll see that Gleb might have actually been following some kind of instructions in some of the cases listed above. Not the best instructions, mind you... but somebody's serious attempt to persuade you that some pretty weird stuff is the right thing to do. This is not exactly a science... it's not even psychology. We're talking about marketing. For instance, paying Facebook to promote things can result in problems... yet this is recommended by a really big company, Facebook. :/

There are a few complaints against him that stand out as a WTF... (Then again, if you're really scouring for problems, you're probably going to find the sorts of super embarrassing mistakes people only make when they're really exhausted or whatever. I don't know what to make of every single one of these examples yet.) Anyway, MIRI / Eliezer can't claim stuff like "I was following some marketing instructions I read on the Internet somewhere.", which, IMO, would explain a lot of this stuff that Gleb did - which is not to say I think co... (read more)

The most embarrassing aspect of the exclusionary, witch hunt, no-due-diligence point of view which some people are advocating in the comments here

I see insight in what Qiaochu wrote here:

If the EA community were primarily a community that functioned in person, it would be easier and more natural to deal with bad actors like Gleb; people could privately (in small conversations, then bigger ones, none of which involve Gleb) discuss and come to a consensus about his badness, that consensus could spread in other private smallish then bigger conversations none of which involve Gleb, and people could either ignore Gleb until he goes away, or just not invite him to stuff, or explicitly kick him out in some way.

But in a community that primarily functions online, where by default conversations are public and involve everyone, including Gleb, the above dynamic is a lot harder to sustain, and instead the default approach to ostracism is public ostracism, which people interested in charitable conversational norms understandably want to avoid. But just not having ostracism at all isn't a workable alternative; sometimes bad actors creep into your community and you need an immune system c

... (read more)

[ETA: a number of these comments are addressed to possible versions of this that John is not advocating, see his comment replying to mine.]

My attitude on this is rather negative, for several reasons:

  • The movement is diverse and there is no one to speak for all of it with authority, which is normal for intellectual and social movements
  • Individual fora have their moderation policies, individual organizations can choose who to affiliate with or how to authorize use of their trademarks, individuals can decide who to work with or donate to
  • There was no agreed-on course of action among the contributors to this document, let alone the wider EA community
  • Public discussion (including criticism) allows individual actors to make their own decisions
  • There are EAs collaborating with InIn on projects like secular Giving Games who report reaping significant benefits from that interaction, such as Jon Behar in the OP document; I don't think others are in a position to ask that they cut off such interactions if they find them valuable
  • I think the time costs of careful discussion and communication are important ones to pay for procedural justice and trust: I would be very uncomfortable with (and n
... (read more)
But controversial decisions will still need to be made - about who to ban from the forum, say. As EA gets bigger, I see advantages to setting up some sort of due process (if only so the process can be improved over time) vs doing things in an ad hoc way. Well, perhaps an official body would choose some kind of compromise action, such as what you did (making knowledge about Gleb's behavior public without doing anything else). I don't see why this is a compelling argument for an ad hoc approach.

Without official means for dealing with bad actors, the only way to deal with them is by being a vigilante. The person who chooses to act as a vigilante will be the one who is the angriest about the actions of the original bad actor, and their response may not be proportionate. Anyone who sees someone else being a vigilante may respond with vigilante action of their own if they feel the first vigilante action was an overreach. The scenario I'm most concerned about is a spiral of vigilante action based on differing interpretations of events. A respected official body could prevent the commons from being burned in this way.

I don't (currently) think it would be a good idea for an official body to make this kind of request. Actually, I think an official committee would be a good idea even if it technically had no authority at all. Just formalizing a role for respected EAs whose job it is to look into these things seems to me like it could go a long way.

OK, let's make it transparent then :) The question here is formal vs ad hoc, not transparent vs opaque. If I see a long post on the EA forum that explains why someone I know is bad for the movement, I need to read the entire post to determine whether it was constructed in a careful & transparent way. If the person is a good friend, I might be tempted to skip reading the post and just make a negative judgement about its authors. If the post is written by people whose job is to do things carefully and transparently (people who... (read more)
This is a very good point. One reason I got involved in the OP was to offset some of this selection effect. On the other hand, I was also reluctant to involve EA institutions to avoid dragging them into it (I was not expecting Will MacAskill's post or the announcement by the EA Facebook group moderators, and mainly aiming at a summary of the findings for individuals). A respected institution may have an easier time in an individual case, but it may also lose some of its luster by getting involved in disputes. Regarding your other points, I agree many of the things I worry about above (transparency, nonbinding recommendations, avoiding boycotts and overreach) can potentially be separated from official vs private/ad hoc. However a more official body could have more power to do the things I mention, so I don't think the issues are orthogonal.
True, but I suspect the worst case scenario for an official body is still less bad than the worst case scenario for vigilantism. Let's say we set up an Effective Altruism Association to be the governing body for effective altruism. Let's say it becomes apparent over time that the board of the Effective Altruism Association is abusing its powers. And let's say members of the board ignore pressure to step down, and there's nothing in the Association's charter that would allow us to fix this problem. Well at that point, someone can set up a rival League of Effective Altruists, and people can vote with their feet & start attending League-sponsored events instead of Association-sponsored events. This sounds to me like an outcome that would be bad, but not catastrophic in the way spiraling vigilantism has been for communities demographically similar to ours devoted to programming, atheism, video games, science fiction, etc. If anything, I am more worried about the case where the Association's board is unable to do anything about vigilantism, or itself becomes the target of a hostile takeover by vigilantes. I suspect a big cause of disagreement here is that in America at least, we've lost cultural memories about how best to organize ourselves. From the essay Bowling Alone: America's Declining Social Capital [http://xroads.virginia.edu/~HYPER/DETOC/assoc/bowling.html] (15K citations on Google Scholar [https://scholar.google.com/scholar?cites=12164601463352883151&as_sdt=2005&sciodt=0,5&hl=en]). You can read the essay for info on big drops in participation for churches, unions, PTAs, and civic/fraternal organizations.
I don't think formal procedures are likely to be followed and I don't think it's generally sensible to go to all the trouble of building an explicit policy to kick people out of EA. It's a terrible idea that contributes to the construction of a flawed social movement which obsessively cares about weird drama that, to those on the outside, looks silly. Outside view sanity check: which other social movements have a formal process for excluding people? None of them. Except maybe Scientology. I'm not against online discussions on a structural level. I think they're fine. I'm against the policy of banding together, starting faction warfare, and demanding that other people refrain from associating with somebody.
The impression I get from Jeff's post is that the people involved took great pains to be as reasonable as possible. They don't even issue recommendations for what to do in the body of the post--they just present observations. This after ~2000 edits over the course of more than two months [http://www.jefftk.com/p/details-behind-the-inin-document]. This makes me think they'd have been willing to go to the trouble of following a formal procedure. Especially if the procedure was streamlined enough that it took less time than what they actually did. My recommendations are about how to formally resolve divisive disputes in general. If divisive disputes constitute existential threats to the movement, it might make sense to have a formal policy for resolving them, in the same way buildings have fire extinguishers despite the low rate of fires. Also, I took into account that my policy might be used rarely or never, and kept its maintenance cost as low as possible. Drama seems pretty universal--I don't think it can be wished away. There are a lot of other analogies a person could make: Organizations fire people. States imprison people. Online communities ban people. Everyone needs to deal with bad actors. If nothing else, it'd be nice to know when it's acceptable to ban a user from the EA forum, Facebook group, etc. I'm not especially impressed with the reference class of social movements when it comes to doing good, and I'm not sure we should do a particular thing just because it's what other social movements do. I keep seeing other communities implode due to divisive internet drama, and I'd rather this not happen to mine. I would at least like my community to find a new way to implode. I'd rather be an interesting case study [https://www.jefftk.com/p/scientific-charity-movement] for future generations than an uninteresting one. So what's the right way to take action, if you and your friends think someone is a bad actor who's harming your movement?
I mean for the community as a whole, to say, "oh, look, our thought leaders decided to reject someone - ok, let's all shut them out." There's the normal kind of drama which is discussed and moved past, and the weird kind of drama like Roko's Basilisk which only becomes notable through obsessive overattention and collective self-consciousness. You can choose which one you want to have. Those groups can make their own decisions. EA has no central authority. I moderate a group like that and there is no chance I'd ban someone just because of the sort of thing which is going on here, and certainly not merely because the high chancellor of the effective altruists told me to. We're not following their lead on how to change the world. We're following their lead on how to treat other members of the community. That's something which is universal to social movements. Is this serious? EA is way more important than yet another obscure annal in Internet history. Tell it to them. Talk about it to other people. Run my organizations the way I see fit.
I think the second kind of drama is more likely in the absence of a governing body. See the vigilante action paragraph in this comment of mine [http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8r1]. If the limiting factor for a movement like Effective Altruism is being able to coordinate people via the Internet, then coordinating people via the Internet ought to be a problem of EA interest. I see your objections to my proposal as being fundamentally aesthetic. You don't like the idea of central authority, but not because of some particular reason why it would lead to bad consequences--it just doesn't appeal to you intuitively. Does that sound accurate?
The second kind of drama was literally caused by the actions of a governing body. Specifically, one that was so self-absorbed in its own constellation of ideas that it forgot about everything that outsiders considered normal. So you're trying to say that the worst case scenario of setting up an official EA panel is not as bad as the worst case scenario of vigilantism. That's a very limited argument. Merely comparing the worst case scenarios is a very limited approach: firstly because by definition these are events at the extreme tail ends of our expectations, which implies that we are particularly incapable of understanding and predicting them, secondly because we also need to take probabilities into account, and thirdly because we need to take average, median, best case, etc. expectations into account. Furthermore, it's not clear to me that the level of witch hunting and vigilantism currently present in programming, atheist, etc. communities is actually worse than having a veritable political rift between EA organizations. Moreover, you're jumping from Roko's Basilisk type weird drama and controversy to vigilantism, when the two are fairly different things. And finally, you're shifting the subject of discussion from a panel that excommunicates people to some kind of big organization that runs all the events. Besides that, the fact that there has been essentially no vigilantism in EA except for a small number of people in this thread suggests that you're jumping far too quickly to enormous solutions for vague problems. That's way too simplistic. Communities don't hit a ceiling and then fail when they run into a universal limiting factor. Their actions and evolution are complicated and chaotic and always affected by many things. And hardly any social movements are led by people who look at other social movements and then pattern their own behavior based on others'. I prefer the term 'common sense'. It rings lots and lots of alarm bells.
If selection of leadership is an explicit process, we can be careful to select people we trust to represent the EA movement to the world at large. If the process isn't explicit, forum moderators may be selected in an incidental way, e.g. on the basis of being popular bloggers. Governance in general seems like it's mainly about mitigation of worst case scenarios. Anyway, the evidence I presented doesn't just apply to the tail ends of the distribution. This is an empirical question. I don't get the impression that competition between organizations is usually very destructive. It might be interesting for someone to research e.g. the history of the NBA and the ABA (competing professional basketball leagues in the 1970s) or the history of AYSO and USYSA (competing youth soccer leagues in the US that still both exist--contrast with youth baseball, where I don't believe Little League has any serious rivals). I haven't heard much about destructive competition between rival organizations of this type. Even rival businesses are often remarkably civil towards one another. I suspect the reason competition between organizations is rarely destructive is because organizations are fighting over mindshare, and acting like a jerk is a good way to lose mindshare. When Google released its Dropbox competitor Google Drive, the CEO of Dropbox could have started saying nasty things about Google's CEO in order to try & discredit Drive. Instead, he cracked a joke [https://twitter.com/drewhouston/status/194837482490179584]. The second response makes me much more favorably inclined toward Dropbox's product. Vigilantes don't typically think like this. They're not people who were chosen by others to represent an organization. They're people who self-select on the basis of anger. They want revenge. And they often do things that end up discrediting their cause. The biggest example I can think of re: organizations competing in a nasty way is rival political parties, and I think there are incen
Doesn't seem like that to me. And just because "governance in general" does something doesn't mean we should. Yeah, and it's unclear. I don't see why it is relevant anyway. I never claimed that creating an EA panel would lead to a political divide between organizations. We're not paranoid about growth and we're not being deliberately elitist. People won't change their recruiting efforts just because a few people got officially kicked out. When the rubber hits the road on spreading EA, people just busy themselves with their activities, rather than optimizing some complicated function. Yeah, EA, which is not a typical social movement. I've not heard of others doing this. Hardly any. Saying that you want to experiment with EA, risking the stability of a(n unusually important) social movement just because it might benefit random people with unknown intentions who may or may not study our history, is taking it a little far. Well most of them are relatively ineffective and most of them don't study histories of social movements. As for the ones that do, they don't look up obscure things such as this. When people spend significant time looking at the history of social movements, they look at large, notable, well documented cases. They will not look at a few people's online actions. There is no shortage of stories of people doing online things at this low level of notability and size.
That's fair.
That's what we did for a year+. The problem didn't go away.
Not much of a problem except the time you wasted going after it. Few people in the outside world knew about InIn; fewer still could have associated it with effective altruism. Even the people on Reddit who dug into his past and harassed him on his fake accounts thought he was just a self-promoting fraud and appeared to pick up nothing about altruism or charity. I'm done arguing about this, but if you still want an ex post facto solution just to ward off imagined future Glebs, take a moment to go to people in the actual outside world, i.e. people who have experience with social movements outside of this circlejerk, and ask them "hey, I'm a member of a social movement based on charity and altruism. We had someone who associated with our community and did some shady things. So we'd like to create an official review board where Trusted Community Moderators can investigate the actions of people who take part in our community, and then decide whether or not to officially excommunicate them. Could you be so kind as to tell us if this is the awful idea that it sounds like? Thanks."
So here's your proposal for dealing with bad actors in a different comment: You've found ways to characterize other proposals negatively without explaining how they would concretely lead to bad consequences. I'll note that I can do the same for this proposal--talking to them directly is "rude" and "confrontational", while talking about it to other people is "gossip" if not "backstabbing". Dealing with bad actors is necessarily going to involve some kind of hostile action, and it's easy to characterize almost any hostile action negatively. I think the way to approach this topic is to figure out the best way of doing things, then find the framing that will allow us to spend as few weirdness points [http://effective-altruism.com/ea/bg/you_have_a_set_amount_of_weirdness_points_spend/] as possible. I doubt this will be hard, as I don't think this is very weird. I lived in a large student co-op with just a 3-digit number of people, and we had formal meetings with motions and elections and yes, formal expulsions. The Society for Creative Anachronism is about dressing up and pretending you're living in medieval times. Here's their organizational handbook [http://www.sca.org/docs/pdf/govdocs.pdf] with bylaws. Check out section X, subsection C, subsection 3 where "Expulsion from the SCA" is discussed:
Sure I did. I said it would create unnecessary bureaucracy taking up people's time and it would make judgements and arguments that would start big new controversies where its opinions wouldn't be universally followed. Also, it would look ridiculous to anyone on the outside. Is it not apparent that other things besides 'weirdness points' should be factored into decisionmaking? You found an organization that excludes people from itself. So what? The question here is about a broad social movement trying to kick people out. If all the roleplayers of the world decided to make a Roleplaying Committee whose job was to ban people from participating in roleplaying, you'd have a point.
That's fair. Here are my responses:

* Specialization of labor has a track record of saving people time that goes back millennia. The fact that we have police, whose job it is to deal with crime, means I have to spend a lot less time worrying about crime personally. If we got rid of the police, I predict the amount of crime-related drama would rise. See Steven Pinker on why he's no longer an anarchist [https://fistfulofscience.wordpress.com/2010/08/23/why-steven-pinker-gave-up-on-anarchism/].
* A respected neutral panel whose job is resolving controversies has a better chance of its opinions being universally followed than people whose participation in a discussion is selected on the basis of anger--especially if the panel is able to get better at mediation over time, through education and experience.
* With regard to ridiculousness, I don't think what I'm suggesting is very different than the way lots of groups govern themselves. Right now you're thinking of effective altruism as part of the "movement" reference class, but I suspect in many cases a movement or hobby will have one or more "associations" which form de facto governing bodies. Scouting is a movement. The World Organization of the Scout Movement is an umbrella organization of national Scouting organizations, governed by the World Scout Committee. Chess is a hobby. FIDE is an international organization that governs competitive chess and consists of 185 member federations. One can imagine the creation of an umbrella organization for all the existing EA organizations that served a role similar to these.

I'm feeling frustrated, because it seems like you keep interpreting my statements in a very uncharitable way. In this case, what I meant to communicate was that we should factor in everything besides weirdness points, then factor in weirdness points. Please be assured that I want to do whatever the best thing is, I consi
Sounds great, but it's only valuable when people can actually specialize. You can't specialize in determining whether somebody's a true EA or not. Being on a committee that does this won't make you wiser or fairer about it. It's a job that's equally doable by people already in the community with their existing skills and their existing job titles. It's trivially true that the majority opinion is most likely to be followed. Sure it is. You're suggesting that FIDE start deciding who's not allowed to play chess. I don't think the order in which you factor things will make a difference in how the options are eventually ranked, assuming you're being rational. In any case, there are large differences. For one thing, the SCA does not care about how it is perceived by outsiders. The SCA is often rewarded for being weird. The SCA is also not necessarily rational. Then you're suggesting something far larger and far more comprehensive than anything that I've heard about, which I have no interest in discussing.
I actually think being on a committee helps some on its own, because you know you'll be held accountable for how you do your job. But I expect most of the advantages of a committee to be in (a) identifying people who are wise and fair to serve on it (and yes, I do think some people are wiser and fairer than others) (b) having those people spend a lot of time thinking about the relevant considerations (c) overcoming bystander effects and ensuring that there exists some neutral third party to help adjudicate conflicts. If there's no skill to this sort of thing, why not make decisions by flipping coins? Well naturally, the committee would be staffed by people who are already in the community, and it would probably not be their full-time job. Do you really think chess federations will let you continue to play at their events if you cheat or if you're rude/aggressive?
Looking at the links [https://www.reddit.com/r/UKskeptic/comments/3cqqnu/skepticthemed_book_free_through_july_13_and_thank/] you shared [https://www.reddit.com/r/agnosticism/comments/3cql5u/agnosticthemed_book_free_through_july_13/csy1mcw/] it looks like these accounts weren't so much 'fake' but just new accounts from Gleb that were used for broadcasting/spamming Gleb's book on Reddit. That attracted criticism for the aggressive self-promotion (both by sending to so many reddits, and the self-promotional spin in the message). The commenters call out angela_theresa [https://www.google.com/webhp?sourceid=chrome-instant&ion=1&espv=2&ie=UTF-8#q=site%3Areddit.com%20angel_theresa] for creating a Reddit account just to promote the book. She references an Amazon review, and there is an Amazon review from the same time period by an Angela Hodge (not an InIn contractor). My judgment is that this is a case of genuine appreciation of the book, perhaps encouraged by Gleb's requests for various actions to advance the book. In one of the reviews she mentions that she knows Gleb personally, but says she got a lot out of the book. At least one other account [https://www.reddit.com/user/FahadCe/] was created to promote the book, but I haven't been able to determine whether it was an InIn affiliate. Gleb says he
Ok, my goal was not to launch accusations, I just wanted to point out that even when people were saying this (they thought they were fake accounts) and looking into his personal info they didn't say anything about altruism or charity, so the themes behind the content weren't apparent, meaning that there was little or no damage to EA. Because most of the content on the site and book isn't about charity or altruism, it's not clear how well this prompts people to actually donate and stuff, but it can't be very harmful.
Right, I just wanted to diminish uncertainty about the topic and reduce speculation, since it had not been previously mentioned.
Ben Pace:
Kbog, I think your general mistake on this thread as a whole is assuming a binary between "either we act charitably to people or we ostracise people whenever members of the community feel like outgrouping them". Thus your straw-man characterisation, which was exactly what I disavowed at the bottom of my long comment here [http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8n2]. Examples of why your dichotomy is false: we could have very explicit and contained rules, such as "If you do X, Y or Z then you're out", and this would be different from the generic approach of "if anyone tries to outgroup them then support that effort". Or if we feel that it is too hard to put into a clear list, perhaps we could outsource our decision-making to a small group of trusted 'community moderators' who were asked to make decisions about this sort of thing. In any case, these are just two I came up with; the landscape is more nuanced than you're accounting for.
To be more clear, I'm against both (a) witch hunts and (b) formal procedures of evicting people. The fact that one of these things can happen without the other does not eliminate the fact that both of them are still stupid on their own. As a counterexample to the dichotomy, sure. As something to be implemented... haha no. The more rules you make up, the more argument there will be over what does or doesn't fall under those rules, what to do with bad actions outside the rules, etc. Maybe you shouldn't outsource my decision about who is kosher to "trusted community moderators". Why are people not smart enough to figure it out on their own? And is this supposed to save time, the hundreds of hours that people are bemoaning here? A formal group with formal procedures processing random complaints and documenting them every week takes up at least as much time.
The system of everyone keeping track of everything works ok in small communities, but we're so far above Dunbar's number that I don't think it's viable anymore for us. As you point out, a more formal process wouldn't have time for "processing random complaints and documenting them every week", so they'd need a process for screening out everything but the most serious problems.
Everyone doesn't have to keep track of everything. Everyone just needs to do what they can with their contacts and resources. Political parties are vastly larger than Dunbar's Number and they (usually) don't have formal committees designed to purge them of unwanted people. Same goes for just about every social movement that I can think of. Except for churches excommunicating people, of course. This is the only time that there's been a problem like this where people started calling for a formal process. You have no idea if it actually represents a frequent phenomenon. Make bureaucracy more efficient by adding more bureaucracy...
The Democrats have the Democratic National Committee, and the Republicans have the Republican National Committee.
Do they kick people out of the party? More specifically, do they kick people out of 'conservatism' and 'liberalism'?
In the US, and elsewhere, they use incentives to keep people in line, such as withholding endorsements or party funds, which can lead to people losing their seat, thus effectively kicking them out of the party. See whips [https://en.m.wikipedia.org/wiki/Whip_(politics)] for what this looks like in practice. Also, in parliamentary systems, you can often kick people out of the party directly, or at the very least take away their power and position.
Yes, if you're in charge of an organization or resources, you can allocate them and withhold them how you wish. Nothing I said is against that. In parties and parliaments you can remove people from power. You can't remove people from associating with your movement. The question here is whether a social movement and philosophy can have a bunch of representatives whose job it is to tell other people's organizations and other people's communities to exclude certain people.
Your party leadership can publicly denounce a person and disinvite them from your party's convention. That amounts to about the same thing. Quoting myself [http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8r1]:
Good question - not really sure, I just meant to directly answer that one question. That being said, social movements have, to varying degrees of success, managed to distance themselves from fringe subsets and problematic actors. How, exactly, one goes about doing this is unknown to me, but I'm sure that it's something that we could (and should) learn from leaders of other movements. Off the top of my head, the example that is most similar to our situation is the expulsion of Ralph Nader from the various movements and groups he was a part of after the Bush election.
The issue in this case is not that he's in the EA community, but that he's trying to act as the EA community's representative to people outside the community who are not well placed to make that judgment themselves.
That's an important distinction, and acting against that (trying to act as the EA community's representative) doesn't automatically mean banning from the movement.

Here are some details on how this post came together: jefftk.com/p/details-behind-the-inin-document

Thank you - this represents a very conscientious follow-up to serious concerns and a very complicated discussion. I appreciate the presentation of considered evidence and the opportunity given for a) members of the community to pool their concerns and b) InIn to give their response.

Gleb, Intentional Insights board meeting, 9/21/16 at 22:05:

"We certainly are an EA meta-charity. We promote effective giving, broadly. We will just do less activities that will try to influence the EA movement itself. This would include things like writing articles for the EA forum about how to do more effective marketing. We will still do some of that, but to a lesser extent because people are right now triggered about Intentional Insights. There's a personalization of hostility associated with Intentional Insights, so we want to decrease some of our..."

See 53:10-57:30 for discussion of social media.

A questioner asks about the concerns raised about InIn's social media presence. Tsipursky gives the raw numbers for social media including Facebook, Twitter, and Pinterest. He admits to the presence of clickfarms in Facebook likes (although not the massive scale), but denies problems for Twitter and Pinterest while presenting them as good news about social media impact.

He conveys this by saying that the precise mechanism in Facebook is not known to apply to the other channels, failing to mention the evidence regarding them. There is even an exchange with Agnes Vishnevkin about how great it is to have so many Pinterest followers, since there are more women on Pinterest.

This meeting took place Sept 21st, but Tsipursky had been informed about the Twitter and Pinterest problems (lack of engagement, InIn following thousands of people, etc) discussed in the doc in August. He only addressed the Facebook problem mentioned by the questioner, while sweeping problems with the other channels under the rug and strongly implying they were fine.

23:50-25:40 A questioner asks about the controversy with InIn and the EA movement. It is said that a few existing and potential donors/pledges withdrew from supporting InIn after the controversy. Also, Tsipursky and Vishnevkin say that 2 or 3 people at EA Global had considered 4-figure donations to InIn, and these may have fallen through in light of the subsequent revelations and discussion.

Gleb's problems seem due to important differences in social status instincts. For example, Eliezer once wrote that he doesn't experience the "status smackdown emotions" that other people experience, but he didn't realize it until a lot of people complained that his Harry Potter character comes across as insufferably arrogant to them. Readers wanted to smack down his Harry Potter character but this possibility did not occur to Eliezer at the time. So, Eliezer could not have written a Harry Potter character that people did not want to smack down.


I see a lot of examples of people investing a lot of energy giving Gleb feedback to no result. What do you think should be done differently that would lead to a different result?

I don't want to shame anyone for things they can't control, but if Gleb does not have the abilities that are necessary for outreach and fundraising, it is correct for him to not do outreach and fundraising. This is in some sense discrimination based on ability, but calling it "behaving like an ableist" seems like a really bad framing to me. First, it frames it as an issue of identity rather than individual actions. It would be more helpful to say "expecting Gleb to X unfairly discriminates on ability" than "expecting X is behaving like an ableist".

Second, ableist is a vague word that includes both "judging moral worth based on ability", "discrimination based on lack of abilities that have nothing to do with the question at hand" and "different abilities lead to different outcomes". If Gleb doesn't have the abilities to succeed in his chosen field that is very sad. I mourn for the things I would like to do but lack the ability for. But that does not change the outcome of his actions.

You have a great point that I agree with: if a person is incompetent at a particular task, they should not be doing that particular task (or should learn first rather than making a mess). IMO, Gleb should not write his own promotional materials and should not be the decision maker regarding methods of promotion (or he should invest the time to learn to do it well first). However, in my view, what Gleb does at Intentional Insights is not merely promotion. That is just the most visible thing that Gleb does. What Gleb actually does at InIn includes a lot of uncommon and valuable abilities like: Gleb has a really intense level of dedication to the cause of spreading rationality. Gleb is brave enough to stick his neck out and take a risk while most people are terrified just to speak in front of an audience (Though I believe someone else ought to write his speeches. Delegating speech writing is common anyway.). He is also taking large risks financially in order to make InIn happen, and not everyone can do that. Gleb cares a lot about helping the world and being kind to others and is very dedicated to that. He is educated and knowledgeable as a professor and as a rationalist, though I realize this doesn't show very well in the articles written by some of his writers. In his own articles, the quality is much higher. So, I believe his main quality problem is not that he doesn't understand quality but that his awkward promotion behaviors are repelling the good writers and/or attracting poor ones so that he is left trying to make the best of it. I've actually seen this repelling effect happening first hand. I believe that if he proved that Intentional Insights can do promotion well, good writers would want the benefit of being promoted by InIn. Most importantly, Gleb actually wants the truth while some "rationalists" are motivated by other things (ego, status, loving to argue, wanting to hang out with smart people, etc.), so cannot actually practice rationality, nor
I think you're doing the thing shlevy described about being way too charitable to Gleb here. Outside view, the simplest hypothesis that explains essentially everything observed in the original post is that Gleb is an aggressive self-promoter who takes advantage of EA conversational norms to milk the EA community for money and attention. It might be useful to reflect a little on what being manipulated feels like from the inside. An analogous dynamic in a relationship might be Alice trying very hard to understand why Bob sometimes behaves in ways that makes her uncomfortable, hypothesizing that maybe it's because Bob had a difficult childhood and finds it hard to get close to people... all the while ignoring that outside view, the simplest hypothesis that explains all of Bob's behavior is that he is manipulating her into giving him sex and affection. It's in some sense admirable for Alice to try to be charitable about Bob's behavior, but at some point 1) Alice is incentivizing terrible behavior on Bob's part and 2) the personal cost to Alice of putting up with Bob's shit is terrible and she shouldn't have to pay it.

I think Kathy's perspective is probably overly optimistic, and yours is probably overly pessimistic, Qiaochu. There are a lot of grey-area options in between being a scrupulously honest and responsive-to-criticism altruist who just has a poor model of status dynamics, and being an "aggressive self-promoter" who just wants "money and attention". If I were forced to guess, I'd guess what's probably going on is some thought process like:

  1. "I'm convinced that EA outreach has massive potential upside if done well enough, and minimal downside even if done poorly."

  2. "I think I have a lot of good outreach skills and know-how, and while I'm not perfect, I'm sufficiently good at 'updating' and accepting criticism that I'm likely to improve a lot over time."

  3. "Therefore InIn's long-run value is huge no matter how many small hiccups there are at the moment."

  4. "The upside is so large and the need so great that some amount of dishonesty is justified for the greater good. Or, if not dishonesty: emphasizing the good over the bad; not always being fully forthcoming; etc. Not being too stringent about which exact means you use, as long as you ar

... (read more)
I agree with nearly all of this and I'm glad to see that you described these things so clearly! The behavior I keep observing in people with social status instinct differences actually matches the four thought patterns you described pretty well (written out below). My more specific explanation is that Gleb models minds differently when status is involved, so does not guess the same consequences that we do, and because he fails to see the consequences, he cannot total up the potential damage. So, he ends up underestimating the risk and makes different decisions from people who estimate the risk as being much higher. I explained why I chose this explanation over the others with Occam's razor (some of the others are in my written-out response to your numbered thoughts), described what I think would solve this problem in a testable prediction, and linked to the comment where my pessimism is located. I hope my solution idea, my supports for my beliefs, and my pessimism link explain my view better, because I think there is hope for the many people in our social network who have issues similar to what we're seeing with Gleb. This could be valuable, so I really would like to test it. :) Occam's razor: It's possible that each of your four points has a completely different cause from the others (I offered a few, Qiaochu offered a few). However, my explanation that Gleb underestimates reputation issues due to social status instinct differences makes fewer assumptions than that, because it explains all four at once. (Explained in "My take on each of your 4 points" below.) It's possible that Qiaochu_Yuan is correct that Gleb is an aggressive self-promoter, with an intent to take advantage of EA conversational norms, with a goal of milking the EA community for money and attention, and that Gleb intends to be manipulative. Other information I have about Gleb does not match this. He sacrifices a lot of money and financial security for InIn, so if he were motivated by greed, that w
I am actually in favor of a shape up or ship out policy with stuff like this. I replied to Gregory_Lewis with: "I strongly relate to your concerns about the damage that could be done if InIn does not improve. I have severely limited my own involvement with InIn because of the same things you describe. My largest time contribution by far has been in giving InIn feedback about reputation problems and general quality. A while back, I felt demoralized with the problems, myself, and decided to focus more on other things instead. That Gleb is getting so much attention for these problems right now has potential to be constructive." ... "Perhaps I didn't get the memo, but I don't think we've tried organizing in order to demand specific constructive actions first before talking about shutting down Intentional Insights and/or driving Gleb out of the EA movement." (Perhaps you didn't read all of my comments because this thread has too many to read but that one is located here: http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8o8 [http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8o8]) One of the main reasons I have hope is because I've given this specific class of problem, social status instinct differences, a lot of thought. I have seen people improve. I think I am able to explain enough to Gleb to help get him on the right track. I have decided to give it a shot. We'll see if it works.
True, I don't have a very good perception of social status instincts. I focus more on the quality of someone's contributions and expertise rather than their status. I despise status games. Also, there's a basic inference gap between me and people who perceive InIn and me as being excessively self-promotional. I am trying to break the typical and very unhelpful humility characteristic of do-gooders. See more about this in my piece here [https://www.givingwhatwecan.org/post/2015/12/why-you-should-be-public-good-deeds/].

FWIW, I read quite a bit of the self-promotional stuff as being status-gamey. I expect I'm not all that unusual in this.

That it gets read this way is a challenge here, and indeed a challenge to the general problem of trying to dial back humility re. good deeds. I think some humility about good deeds is instrumentally pretty important for sending the right signals and encouraging others to be attracted to the idea (not of course to the point of keeping them all private).

I observe that people seem to evaluate a very large number of things in terms of status. It's actually ridiculously hard to write something that contains absolutely no status message about anybody whatsoever. If you don't believe me, try writing something that's interesting or useful but does not contain a single line or other element that can be interpreted in terms of status. Ironically, I think it's the people who are worst at conveying status messages who are most often accused of playing status games. Not to say that you're accusing anyone! I can see that you are not! :) The people who are very good at making status messages simply receive status. Part of what popular people do is to be smooth enough that most people don't think about the fact that they're even presenting status messages. To be unskilled with status messages is awkward, which attracts attention to the fact that status messages are present. So, from what I have observed, it seems like the people who are best at actually playing status games are rarely called out for it (even though their skill level suggests that they may, in fact, practice that on purpose!), while the people who are terrible at it can't seem to avoid making status messages altogether, nor manage to consistently craft smooth status messages that don't stick out like a sore thumb. It makes things a bit confusing for someone who doesn't do status things the stereotypical way. Do you "stop" playing status games so people do not complain? How do you get around the major limitations on expression you'd impose on yourself by being unable to say anything that anyone might possibly interpret as a status message? Do you just swallow the irony, dive in, and intentionally practice playing status games smoothly so that nobody complains to you about status games anymore? Perhaps you agree about Gleb's intentions, or have no opinion on this, but I just wanted to say that if Gleb appears to be playing status games, he probably i
Owen Cotton-Barratt:
I agree that Gleb appears to be bad at status games. I don't have a view about whether he is deliberately engaging in them (I'd kind of expect him to be better if he conceived of himself as engaging in them, but I observe that he has generated status among some group of supporters of InIn). I think he should take a break from EA promotion and try to learn how to do better in this domain, in a way that doesn't take up large slices of time and attention from the EA community. It seems possible that he could come to be a productive member of the community, although I'm a bit pessimistic on the basis of the amount of feedback he has received without apparently fixing the important issues. 'Learning to do better' means not necessarily getting very good at status games, but getting good enough to recognise what might be construed as engaging in them, and avoiding that. I also think it's crucial that he moves from a position of trying to avoid saying strictly-false things to trying to avoid saying things that could lead people to take away false impressions. (Views my own, not my employer's.)
One of the things I'm trying to do, as I noted above, is a meta-move to change the culture of humility about good deeds. I generally have an attitude of trying to be the change that I want to see in the world and leading by example. It's a long-term strategy that has short-term costs, clearly :-)

I understand the long-term goal. I'm claiming that this strategy is actually instrumentally bad for that long-term goal, as it is too widely read as negative (hence reinforcing cultural norms towards humility). More effective would be to embody something which is superior to current cultural norms but will still be seen as positive.

I will think about this further, as I am not in a good space mentally to give this the consideration it deserves.
I think liberating altruists to talk about their accomplishments has potential to be really high value, but I don't think the world is ready for it yet. I think promoting discussions about accomplishments among effective altruists is a great idea. I think if we do that enough, then effective altruists will eventually manage to present that to friends and family members effectively. This is a slow process, but I really think word of mouth is the best promotional method for spreading this cultural change outside of EA, at least for now. I totally agree with you that the world should not shut altruists down for talking about accomplishments; however, we have to make a distinction between what we think people should do and what they are actually going to do. Also, we cannot simply tell people "You shouldn't shut down altruists for talking about accomplishments", because it takes around 11 repetitions for them to even remember that. One cannot just post a single article and expect everyone to update. Even the most popular authors in our network don't get that level of attention. At best, only a significant minority reads all of what is written by a given author. Only some, not all, of those readers remember all the points. Fewer choose to apply them. Only some of the people applying a thing succeed in making a habit. Additionally, we currently have no idea how to present this idea to the outside world in a way that is persuasive. That part requires a bunch of testing. So, we could repeat the idea 11 times, and succeed at absolutely no change whatsoever. Or we could repeat it 11 times and be ridiculed, succeeding only at causing people to remember that we did something which, to them, made us look ridiculous. Then, there's the fact that the friends of the people who receive our message won't necessarily receive the message, too. Friends of our audience members will not understand this cultural element. That makes it very hard for the people in our audience to practi
Both of these statements sound right! Most of my theater friends from university (who tended to have very good social instincts) recommend that, to understand why social conventions like this exist, people like us read the "Status" chapter of Keith Johnstone's Impro, which contains this quote: Emphasis mine. Of course, a large fraction of EA folks and rationalists I've met claim to not be bothered by others bragging about their accomplishments, so I think you're right that promoting these sorts of discussions about accomplishments among other EAs can be a good idea.
This makes sense for spreading the message among EAs, which is why we have the Effective Altruist Accomplishments Facebook group [https://www.facebook.com/groups/EAaccomplishments/]. I'll have to think further about the most effective ways of spreading this message more broadly, as I'm not in a good mental space to think about it right now.
I don't believe you.

EDIT: Comment here was about a video by InIn, where I incorrectly speculated that they might've misused trademarks to signal affiliation with several other EA orgs. At least one of those orgs has confirmed that they did review the video prior to publication, so in fact there was not an issue. I apologize; it was wrong to speculate about that when it wasn't true, and without adequately investigating first.

Jeff Kaufman:
The video description does say "All the organizations involved in the video reviewed the script and provided a high-resolution copy of their logo. Their collaboration in the production of this video does not imply their specific support for any other organizations involved in the video."
You're right, I missed that. I'll edit the parent post to fix the error. (Given the history, I'm curious to find out what "reviewed the script and provided a high-resolution copy of their logo" means, and in particular whether they saw the entire script, and therefore knew they were being featured next to InIn, or whether they only reviewed the portion that was about themselves.)

Thanks for this. I volunteer for The Life You Can Save and I am checking in on this for the organization. I will get back to you shortly.

An update from The Life You Can Save: we saw and approved this particular video for publication. We did not check with other non-profits as we assumed that was not our responsibility.

Hope that helps.

Jim, in light of the statement in the video description I think you should edit this post further to reduce snark based on a questionable hypothesis (and put the edits on top). I think this is also a good example of the value of a careful and cautious approach to these things. Also, while the pronunciation of "GiveWell" in the video is not the one usually used by GiveWell staff, pronouncing the words separately actually makes it easier to understand.
If the organizations concerned give permission, I am happy to share documentary evidence in my email of them reviewing the script and giving access to their high-quality logo images. I am also happy to share evidence of me running the final video by them and giving them an opportunity to comment on the wording of the description below the video, which some did to help optimize the description to suit their preferences. I would need permission from the orgs before sharing such email evidence, of course.
I am confident this is true. And at least some of the orgs have been contacted (see Neela's comment) and have the opportunity to disclaim if they wish. [ETA: and have said this was true in their own case, see Neela's second comment.]

I'm half wondering how much of the upset was influenced by a general suspicion of, or aversion to, advertising and persuasion in general.

From one perspective, it's almost as if Gleb used to be one of the "advertising/persuasion is icky" people, and decided to bite the bullet and just do this thing, even if it seemed whacked out and icky...

At first I thought maybe part of the problem was Gleb didn't have any vision of how it could be done better. Now, I think it might actually be part of a systemic problem I keep noticing. Our social network generally ... (read more)

Huh, this is a good point. Having a clear sense of what to do with advertising (both within the community and without) would be really helpful.

In 5.3. Twitter:

The question asked of Gleb is "How many of those are payed [sic] and how many organic?"

I double checked and some Internet sources define the term "organic" as "unpaid". Following other accounts that will, in turn, follow your account is not the same thing as giving people money to follow you. I understand that this question was intended to inquire about how many Twitter followers actually genuinely want to follow the Intentional Insights account. This is a perfectly valid question.

What I'm saying is that the 5.... (read more)

My stance is currently that Gleb most likely has a learning disorder (perhaps he is on the spectrum) and is also ignorant about marketing, resulting in a low skill level with promotion. Some people here are claiming things that make it seem like they believe Gleb intends to do something bad, like a con. It's also possible Gleb was following marketing instructions to the letter which were written by people who are less scrupulous than most EAs (perhaps because he thought it was necessary to follow such instructions to be effective). I wouldn't be surprised ... (read more)

I don't care if it is intentionally a con or not. Given that cons exist, the EA community needs an immune system that will reject them. The immune system has to respond to behavior, not intentions, because behavior is all we can see, and because good intentions are not protection from the effects of behavior.

I no longer believe things Gleb says. In the Facebook thread he made numerous statements that turned out to be fundamentally misleading. Maybe he wasn't intentionally lying; I don't know, I'm not psychic. But the immune system needs to reject people when the things they say turn out to be consistently misleading and a certain number of attempts to correct fail.

I don't think everyone needs to draw the line in the same place, I approve of people helping others after some people have given up on them as a category, even if I think it's not going to work in this case. But before you invest, I encourage you to write out what would make you give up. It can't be "he admits he's a scam artist", because scam artists won't do that, and because that may not be the problem. What amount of work, lack of improvement from him, and negative effects from his work and interactions would convince you helping was no longer worth your time?

These are some really strong arguments, Elizabeth. This has a good chance to change my mind. I don't know whether I agree or disagree with you yet because I prefer to sleep on it when I might update about something important (certain processing tasks happen during sleep). I do know that you have made me think. It looks like the crux of a disagreement, if we have one, would be between one or both of the first two arguments vs. the third argument: 1.) EA needs a set of rules which cannot be gamed by con artists. 2.) EA needs a set of rules which prevent us from being seen as affiliated with con artists. vs. 3.) Let's not ban people and organizations who have good intentions. A possible compromise between people on different sides would be: Previously, there had been no rule about this. (Correct me if I'm wrong about this!) Therefore, we cannot say InIn had broken any rule. Let's make a rule to limit dishonesty and misleading mistakes to a certain number in a certain time period / number of promotional pieces / volunteers / whatever. If InIn breaks the new rule after it is made, then we'll both agree they should be banned. If you think they should be banned right now, whether there was an existing rule or not, please tell me why. (Specifying a time period or whatever would prevent discrimination against the oldest, most prolific, or largest organizations simply because they made a greater total number of mistakes due to having a greater volume of output.) The ratio between mistakes and output seems really important to me. Thirty mistakes in ninety articles is really egregious because that's a third. Three mistakes in three hundred articles is only 1%, which is about as close to perfection as one can expect humans to get. Comparing 1/3 vs. 1/100 is comparing apples to oranges. I'm not sure what the best limit is, but I hope you can see why I think this is an important factor. Maybe this was obvious to everyone who may read this comment. If so, I apolog
I have a bunch of different unorganized thoughts on this. One, absolute number is obviously the incorrect thing to use. Ratio is an improvement, but I feel it loses a lot of information. "Better wrong than vague" is a valuable community norm, and how people respond to criticism and new information is more important than whether they were initially correct. It also matters how public and formal the statement was: an article published in a mainstream publication is different than spitballing on tumblr. I'm unsure what you mean by "ban". There is no governing body or defined EA group. There are people clustering around particular things. I think banning him from the FB group should be based on the expected quality of his contribution to the FB group, incorporating information from his writing elsewhere. Whether people give him money should depend on their judgement about how well the money will be used. Whether he attends or speaks at EAG should be based on his expected contribution. None of these are independent, but they can have different answers. I don't think any hard and fast rule would work, even if there was a body to choose and enforce it, because anything can be gamed. What I want is for people to feel free to make mistakes, and other people to feel free to express concerns, and for proportionate responses to occur if the concerns aren't addressed. I think immune system is exactly the right metaphor. If a foreign particle enters your body, a lot of different immune molecules inspect it. Most will pass it by. Maybe one or two notice a concern. They attach to it and alert other immune molecules that they should maybe be concerned. This may go nowhere, or it may cause a cascading reaction targeting the specific foreign particle. If a lot of foreign particles show up you may get an organ-wide reaction (runny nose) or a whole-body one (fever). The system coordinates without a coordinator. Every time an individual talked to Gleb privately (which I'm told happened a lot), th
Jeff Kaufman:
There isn't currently one, but Will is proposing setting up a panel: Setting Community Norms and Values: A response to the InIn Open Letter [http://effective-altruism.com/ea/132/setting_community_norms_and_values_a_response_to/] . The panel wouldn't have any direct power, but it would "assess potential egregious violations of those principles, and make recommendations to the community on the basis of that assessment."
I'm glad we agree that the absolute number of mistakes is obviously an incorrect thing to use. :) I like your addition of "better wrong than vague" (though I am not sure exactly how you would go about implementing it as part of an assessment beyond "if they're always vague, be suspicious", which doesn't seem actionable). Considering how people respond to criticism is important for at least two reasons. If you can communicate with the person, and they can change, this is far less frustrating and far less risky. A person you cannot figure out how to communicate with, or who does not know how to change the particular flaw, will not be able to reduce frustration or risk fast enough. People are going to lose their patience or total up the cost-benefit ratio and decide that it's too likely to be a net negative. This is totally understandable and totally reasonable. I think the reason we don't seem to have the exact same thoughts on that is because of my main goal in life, understanding how people work. This has included tasks like challenging myself to figure out how to communicate with people when that is very hard, and challenging myself to figure out how to change things about myself even when that is very hard. By practicing on challenging communication tasks, and learning more about how human minds may work through my self-experiments, I have improved both my ability to communicate and also my ability to understand the nature of conflicts between people and other people-related problems. I think a lot of people reading these comments do feel bad for Gleb or do acknowledge that some potential will be lost if EA rejects InIn, despite the high risk that their reputation problems may result in a net negative impact. Perhaps the real crux of our apparent disagreement is something more like differing levels of determination / ability to communicate about problems and persuade people like Gleb to make all the specific necessary changes. The way some appear to be seeing

Just a thought on the big picture: EAs have tended to be more comfortable with EAs doing things that many would consider unethical (like being a lawyer or banker) as long as those people use their money or influence for the greater good. But here it appears that EAs want to hold other EAs to higher ethical standards than society does. I understand that this is not a great analogy because an EA organization (especially an outreach one) gets more scrutiny. Still, I think that marketing to a broad audience almost implies a certain amount of exaggeration in order to be competitive. And even though that makes many EAs (myself included) uncomfortable, might it be for the greater good?

  • My sense is that honest and accurate evaluation of opportunities to do good, and high standards that enable that, has been a core value of EA
  • I disagree that exaggeration is more effective in broad outreach, e.g. GiveWell's reputation for honesty and care was central to letting it reach its current large scale (and its astroturfing scandal hurt badly because of that)
  • Accurate communication tends to work better for things that actually are better, and thus has good incentive properties as a standard
  • In any case, the focus in the document is mostly on InIn's interactions with the EA community rather than the general public, and it was precipitated by InIn's self-promotion and fundraising directed at the EA community
  • Thinking people are sometimes mistaken about how they assess different impacts of a job (e.g. most jobs result in increased carbon emissions, pay for the employee, consumer surplus) is not the same as lower ethical standards
Fair enough - just thought I would ask.

Note – I will make separate responses as my original comment was too long for the system to handle. This is part two of my comments.

Some of you will be tempted to just downvote this comment because I wrote it. I want you to think about whether that’s the best thing to do for the sake of transparency. If this post gets significant downvotes and is invisible, I’ll be happy to post it as a separate EA Forum post. If that’s what you want, please go ahead and downvote.

I disagree with other aspects of the post.

1) For instance, the points about affiliation, of wh... (read more)

Also, worrying about the acceptability of policies towards contractors and volunteers acting of their own free choice, for a movement that is all about the consequentialist big picture, is a red herring.

Jeff Kaufman:
  • He claims to have 1000+ hours per week (25 people full-time-equivalent) of volunteers and contractors working on InIn projects, with very little to show for it in terms of output.
  • When you read comments by contractors/volunteers, even longtime ones, they don't show anywhere near the understanding of InIn material you would expect from people spending this much time reading InIn writing. Examples: John [https://disqus.com/by/disqus_NBiOyKqZYa/], Beatrice [https://disqus.com/by/beatricesargin/], Cha [https://disqus.com/by/chaarenas/].
  • InIn appears to have developed a culture where whenever Gleb posts something it's expected that members will show up to comment with vacuous praise.
  • I'm not convinced that the contractors are acting on their own, as opposed to because Gleb is paying them or because they hope to be paid in the future, even for things that are nominally unpaid.
Doesn't change the point of my post. Whether they're paid or not is beside the point.
Jeff Kaufman:
Let's take a pair of examples: 1) Person A respects person B deeply, reads everything B writes, upvotes B's posts, comments to say how insightful they find B's writing, etc. 2) Person C is an employee of person D, who is paid to read everything D writes, upvote D's posts, comment to say how insightful they find D's writing, etc. Person B's actions are fine, person A's actions are fine but maybe annoying, person C's actions are kind of scummy, and person D's actions are very scummy. Sometimes people do things because they want them to happen, sometimes they do things because someone else is paying them to, sometimes it's in between: it's a continuum. Situation (1) is the sort of thing you expect at the unpaid end, (2) at the paid end.
I don't think those are good actions. I was just talking about whether he was treating them appropriately. The post implied that people were not being paid enough. I'm using the same reasoning as in GWWC's position on fair trade.
Jeff Kaufman:
It sounds like I misunderstood your objection. Are you saying that if InIn had an explicit rule like "we pay 1/3 of the Upwork minimum wage, but we cast this as a 2:1 volunteering:working policy in order to get around their requirements" you would be fine with it? The idea being that minimum wages are harmful because they keep people from making mutually beneficial exchanges? So, first, I think EA organizations should pay at least the legal minimum wage as part of a general policy of working within the law. Here we're talking about an Upwork policy, though, which is weaker than a law, and it's more debatable whether to violate it. But if it were just that, I agree this piece of things would be much more minor. The problem is that Gleb is insisting that this is not what's going on, and that all the unpaid work is fully voluntary. And further, that actions they take in their allegedly fully-voluntary time shouldn't be attributable at all to Gleb/InIn.

Note – I will make separate responses as my original comment was too long for the system to handle. This is part three of my comments.

Now that we got through the specifics, let me share my concerns with this document.

1) This document is a wonderful testimony to bikeshedding, motte-and-bailey, and confirmation bias.

It’s an example of bikeshedding because the much larger underlying concerns are quite different from the relatively trivial things brought up in this document: see link

Consider the disclosures. Heck, even one of the authors of this document who... (read more)

Regarding point #2, Gleb writes above:

2) This document engages in unethical disclosures of my private messages with others. When I corresponded with Michelle, I did so from a position as a member of GWWC and the head of another EA organization. Neither was I asked nor did I implicitly permit my personal email exchange to be disclosed publicly. In other words, it was done without my permission in an explicit attempt to damage InIn.

Here is the entirety of section 1.2, which does not cite or quote any statement from Gleb's email to Michelle, but rather cites Michelle regarding her own statements:

Gleb has taken the Giving What We Can pledge, and contributed an article on the Giving What We Can blog on December 23, 2015. He also mentioned and linked to GWWC in his articles elsewhere. Michelle Hutchinson, Executive Director of Giving What We Can, wrote to Tsipursky in May 2016 asking him to cease “claiming to be supported by Giving What We Can.” However, the use of Giving What We Can’s name as an ‘active collaboration’ was not removed from Intentional Insights’ website, and remained in both of the above InIn documents as of August 19, 2016.

I had emailed GWWC after seeing it ment... (read more)

Additionally, Gleb has done himself exactly what he's accusing Michelle of doing! In a comment in the megathread from August he included a screenshot (archived copy) of an email I had sent him.


Regarding Gleb's point #1 I would like to agree in particular that harsh hyperbole like "Gleb made the experience of almost all EAs significantly worse" is objectionable, and Oliver should not have used it.

Also, it's worth signal-boosting and reiterating to all commenters on this thread that public criticism on the internet, particularly with many critics and one or a few people being criticized, is very stressful, and people should be mindful of that and empathize with Gleb's difficult situation. I will also add that my belief is that Gleb is genuinely interested in doing good, and that one can keep this in mind even while addressing recurring problems. And further, people should address individual and organization-specific issues separately from the general issue of popularization.

Regarding the point or lack thereof of the document, I agree that this exercise has been costly in several ways. I have been personally frustrated at spending so much time on it at the expense of valuable work, and dislike getting involved in such a controversy. I don't think the document will instantly solve all problems with InIn and its relationship to EA. However, it documents a patt... (read more)

"Regarding Gleb's point #1 I would like to agree in particular that harsh hyperbole like "Gleb made the experience of almost all EAs significantly worse" is objectionable, and Oliver should not have used it."

I agree, and am aware that I tend towards hyperbole in discourse in general. I apologize for that tendency, and am working on finding a communication style that successfully communicates all the aspects of a message that I want to convey, without distorting the accuracy of its denotative message. I am sorry for both the potentially false implications of using such hyperbole, as well as the negative contribution to the conversational climate.

Replacing the fairly vague and somewhat hyperbolic "almost all" with a more precise "about 70-90%" seems like a strict improvement, and I think captures my view on this more correctly. I do think that something in the 70%-90% range is accurate, and it mostly leaves the core of the argument intact (though I do still think that the kind of hyperbole I am prone to use creates an unnecessarily adverse conversational style, which generally isn't very productive).

Jeff Kaufman:
I have more or less two kinds of concerns:

* Gleb/InIn acting unethically, overstating impact, manufacturing the illusion of support
* InIn content turning people off of EA and EA ideas by presenting them badly

While I think the second category is more serious, the first category is much easier to document and communicate. And, crucially, the concerns in the first category are bad enough that we can just focus there. When I originally started writing this document I included quite a bit about my concerns in the second category, as you can see in this early draft [https://docs.google.com/document/d/1KxPSpc5GFefUIH8Fh4hD6isk1zENbDRJe76JuwxvQTA/edit]. Carl and Gregory convinced me [http://www.jefftk.com/inin-types-of-concerns.png] that we should instead focus just on the first category. (Also, the section of conversation [https://www.facebook.com/jefftk/posts/805642967912?comment_id=805689434792] you cite doesn't show that I didn't care about the first category, just that I thought the second category was even more serious.)
I don't have much interest in engaging much further in this discussion, since I think most things are covered by other people, and I've already spent far more time than I think is warranted on this issue. I mostly wanted to quickly reply to this section of your comment, given that it directly addresses me:

"I find it hard to fathom how Oliver can say what he said, as all three comments and the upvotes happened before Oliver’s comment. This is a clear case of confirmation bias – twisting the evidence to make it agree with one’s pre-formed conclusion: see link. To me Oliver right now is fundamentally discredited as either someone with integrity or as someone who has a good grasp of the mood and dynamics of EAs overall, despite being a central figure in the EA movement and a CEA staff member."

I've responded to Carl Shulman's comment below regarding my thoughts on the hyperbole used in the linked comment, which I do think muddled the message, and for which I do apologize. I do also think that your strict dismissal here of my observation is worrying, and I think misses the point that I was trying to make with my comment.

I do agree with Gregory's top comment on this post, in that I think your engagement with Effective Altruism has had a large negative impact on the community, and I do also think that you worsened the experience of being a member of the EA community for at least 70% of its members, and more likely something like 80%. If you disagree, I am happy to send Facebook messages to a random sample of 10-20 people who were recently active on the EA Facebook group, ask them whether they felt that the work of InIn had a negative impact on their experience as an EA, and bet with you on the outcome.

I think your judgement of me as someone "fundamentally discredited", "without integrity" or as someone out of touch with the EA community would be misguided, and the way you wrote it feels like a fairly unjustified social attack to me. I am happy to have
I see the less hyperbolic claim (worsening rather than significant worsening of experience as an EA, 70% rather than almost all) and still doubt it. Online fora where InIn can post are only a subset of experience as an EA; InIn posts are still a small minority of content on those forums; readers who find InIn content unwelcome can and do scroll past it; and some like it, or parts thereof. I expect a large portion of people don't know or care either way about InIn's effect on their EA experience. I would still be interested to see the results of such a mini-poll on attitudes toward InIn content from a random sample of some kind (posters/commenters vs group members is a significant distinction for that).
I'll be happy to take that bet. So if I understand correctly, we'd choose a random 10 people on the EA FB group - ones who are not FB friends with you or me, to avoid potential personal factors coming into play - and then ask them if their experience of the EA community has been "significantly worsened" by InIn. If 8 or more say yes, you win. I suggest $1K to a charity of the winning party's choice? We can let a third party send the messages to prevent any framing effects.
Since the majority of the FB group is inactive, I propose that we limit ourselves to the 50 or 100 most recently active members of the FB group, which will give a more representative sample of people who are actually engaging with the community (and since I don't want to get into debates about what precisely an EA is). Given that I am friends with a large chunk of the core EA community, I don't think it's sensible to exclude my circle of friends, or your circle of friends for that matter.

Splitting this into two questions seems like a better idea. Here is a concrete proposal:

1. Do you identify as a member of the EA community? [Yes] [No]
2. Do you feel like the engagement of Gleb Tsipursky or Intentional Insights with the EA community has had a net negative impact on your experience as a member of the EA community? [Yes] [No]

I am happy to take a bet that, sampling from the top 50 most recent posters on the FB group (at this current point in time), 7 out of 10 people who said yes to the first question will say yes to the second. Or, since I would prefer a larger sample size, 14 out of 20 people. (Since this is obviously a high-noise setup, I only assign about 60% probability to winning this bet.) I sadly don't have $1000 left right now, but would be happy with a $50 bet.
[Posting to note I have agreed to bet against Oliver on his proposed terms above.]
Why do people keep betting against Carl Shulman???

No shame if you lose, so much glory if you win

I wasn't super-confident, and so far it looks neck-and-neck (albeit on a smaller and noisier dataset than we had hoped for, 10 instead of 20).
Jeff Kaufman:
Any outcome yet?
And the results are in! The bet was resolved with 6 yes votes, and 4 no votes, which means a victory for Carl Shulman. I will be sending Carl $10, as per our initial agreement.

I should note that this provided the maximum possible evidence for Oliver's hypothesis given that outcome, and that as a result I update in his direction (although less so because of the small sample).

We had 8/10 responses, and just sent out messages to another batch to get the last two responses. Should be resolved soon.
What will you do about people who don't reply to your messages?
(I haven't run this by Carl yet, but this is my current plan for how to interpret the incoming data.) Since our response rates were somewhat lower than expected (mostly because we chose an account that was friends with only one person from our sample, so messages probably ended up in people's secondary inbox), we decided to only send messages until we get 10 responses to (1), since we don't want to spam a ton of people with a somewhat shady-looking question (I think two people expressed concern about conducting a poll like this). Since our stopping criterion is 10 people, we will also stop if we get more than 7 yes responses, or more than 3 no responses, before we reach 10 people.
I agree to this.
Jeff Kaufman:
I'm interpreting this as "go until you get 20 'yes' responses to (1) and then compare their responses to (2)".
I am unwilling to take "active members of the EA group" as representative of the EA community, since your actual claim was that I made the experience of the EA community significantly worse, and that includes all members, not simply activists. On average, only 1% [https://en.wikipedia.org/wiki/1%25_rule_%28Internet_culture%29] of any internet community contribute, but the rest are still community members. Instead, I am fine taking the bet that Benito describes - who is clearly far from friendly [http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8nt] to InIn. I am even fine with going with your lower estimate of 14 out of 20. I am fine including friends. I am fine with the two questions, although I would insist the second question be "significantly worse" not simply "negative impact," since that is the claim we are testing, and the same for "significant preference for Gleb or InIn to not have engaged." Words matter. I am fine with having a pledge of $1K to be contributed as either of us has the money to do so in the future. I presume you will eventually have $1K.
I read "active" to mean actually involved in things, whether socially, online, funding, or campaigning. The word "activist" has a stronger connotation in spite of the same root.
Fair enough
Ben Pace:
Actually, I'd suggest just taking a random sample from the FB group. My guess is that your positive connections should be taken into account in this bet Gleb - if you've personally had a significant positive impact on many people's lives in the movement (and helped them be better effective altruists) then that's something this is trying to measure. Also, 10 seems like a small sample, 20 seems better.
I'm fine taking a random sample of 20 people. Regarding positive connections, the claim made by Oliver is what we're trying to measure - that I made "significantly worse" the experience of being a member of the EA community for "something like 80%" of the people there. I had not made any claims about my positive connections.
After some private conversation with Carl Shulman, who thinks that I am miscalibrated on this, and whose reasoning I trust quite a bit, I have updated away from me winning a bet with the words "significantly worse", and also think it's probably unlikely I would win a bet with 8/10 instead of 7/10. I have, however, taken on a bet with Carl with the exact wording I supplied below, i.e. with the words "net negative" and 7/10. Though given Carl's track record of winning bets, I feel a sense of doom about the outcome of that bet, and on some level expect to lose it as well.

At this point, my epistemic status on this is definitely more confused, and I assign significant probability to my overestimating the degree to which people will report that InIn or Gleb had a negative impact on their experience (though I am even more confused about whether I am just updating about people's reports, or about the actual effects on the EA community; both seem like plausible candidates to me).
Ben Pace:
FYI my initial reaction was that people in the community would feel very averse to being so boldly critical, and want to be charitable to InIn (as they've been doing for years).
Ben Pace:
Unfortunately, you and InIn have lost all credibility. There may be nuance to be had, there may be a few errors in the document, there may even be additional deeper reasons for why Carl Shulman, Jeff Kaufman, and the other excellent members of our community have spent so much of their time trying to explain their discomfort with you; however, when the core community has wasted this much time on you, and has shouted this strongly about their discomfort, I simply will not engage further. I'll not be reading any comment or post by yourself in future, or continuing any conversation with you. This is where the line is drawn in the sand.
Jeff Kaufman:
I would like to strongly encourage you to keep posting in this thread, and I̶ ̶w̶o̶u̶l̶d̶ ̶l̶i̶k̶e̶ ̶t̶o̶ ̶e̶n̶c̶o̶u̶r̶a̶g̶e̶ ̶o̶t̶h̶e̶r̶s̶ ̶t̶o̶ ̶u̶p̶v̶o̶t̶e̶ ̶y̶o̶u̶r̶ ̶p̶o̶s̶t̶s̶ ̶h̶e̶r̶e̶ ̶t̶o̶ ̶s̶h̶o̶w̶ ̶t̶h̶a̶t̶ ̶y̶o̶u̶r̶ ̶c̶o̶n̶t̶i̶n̶u̶e̶d̶ ̶p̶a̶r̶t̶i̶c̶i̶p̶a̶t̶i̶o̶n̶ ̶i̶n̶ ̶t̶h̶i̶s̶ ̶d̶i̶s̶c̶u̶s̶s̶i̶o̶n̶ ̶i̶s̶ ̶v̶a̶l̶u̶e̶d̶. Having this dialog out in the open helps keep everyone on the same page. EDIT: Rob has convinced me [http://effective-altruism.com/ea/12z/concerns_with_intentional_insights/8o9] that my recommendation that people upvote Gleb's responses was not a good idea. Instead, also per Rob's suggestion, I've added links to Gleb's three response comments at the end of the top-level post.

Upvoting can also be construed as community endorsement. (Gleb himself just cited "a number of EAs have upvoted the following comments supportive of InIn/myself..." as an important line of evidence in his denunciation of Oliver Habryka.)

I think people should upvote comments if they think they're sufficiently good/helpful, and downvote comments if they think they're sufficiently bad/unhelpful. Rather than trying to artificially inflate upvote totals (as Gleb also does when he says that downvotes = 'I'll repost this as a top-level thread'), just edit the OP to link directly to Gleb's reply.

I mention this partly because the top-level comment here is seriously concerning. "InIn's content is so low-quality that it's doing more harm than good" and "InIn regularly engages in dishonest promotional techniques" are both really, really serious charges. Using the fact that people have made one serious substantive criticism to try to discredit any other serious substantive criticism they raise is really bad at the community-norms level.

More generally, responding to fair, correct, relevant criticisms in large part by trying to discredit the critics is super bad form and shouldn't be seen as normal or OK. Repeatedly accusing people raising (basically fair) concerns of 'costing lives' because they took the time to fix your mistakes for you is also super bad form and definitely shouldn't be seen as normal or OK.

I really don't want casual readers to skim through the comments here, see a highly upvoted comment, and assume that the comment therefore reflects EA's community standards / beliefs / etc.

"a number of EAs have upvoted the following comments supportive of InIn/myself..."

This is especially rich given the accusations (which have been proved to my satisfaction) of astroturfing. At a minimum it's another example of behaving very responsively towards criticism in the moment but making no changes to core beliefs.

Jeff Kaufman:
Good idea. Done, and edited my comment above.

Note – I will make separate responses as my original comment was too long for the system to handle. This is part one of my comments.

Some of you will be tempted to just downvote this comment because I wrote it. I want you to think about whether that’s the best thing to do for the sake of transparency. If this post gets significant downvotes and is invisible, I’ll be happy to post it as a separate EA Forum post. If that’s what you want, please go ahead and downvote.

I’m very proud of and happy with the work that Intentional Insights does to promote rational ... (read more)

I have downvoted this comment because I think that, as a community, we should strongly disapprove of this sort of threat:

"If this post gets significant downvotes and is invisible, I’ll be happy to post it as a separate EA Forum post. If that’s what you want, please go ahead and downvote."

The criticisms have been raised in an exceptionally transparent manner: Jeff made a public post on Facebook, and Gleb was tagged in to participate. Within that thread the plans to make this document were explained and even linked to: anybody (Gleb included) could read and contribute to that document while it was under construction.

This statement - that all criticism in the form of downvoting is likely to be driven by personal animosity or an attempt to hide negative feedback - is both baseless and appears to be an attempt to ward off all criticism. While I feel that Gleb is currently in a very difficult position, this framing of the conversation makes engagement impossible, hence my downvote.