
Added Sep 26 2019: I'm not going to do an analysis or summary of these responses, but I and others think it would be interesting to do so. If you'd like to, I'd welcome that and will link your summary/analysis at the top of this post. All the data is accessible in the Google Spreadsheet below.

Submit your answers anonymously here: https://docs.google.com/forms/d/e/1FAIpQLSfiUmvT4Z6hXIk_1xAh9u-VcNzERUPyWGmJjJQypZb943Pjsg/viewform?usp=sf_link

See the results here: https://docs.google.com/forms/d/e/1FAIpQLSfiUmvT4Z6hXIk_1xAh9u-VcNzERUPyWGmJjJQypZb943Pjsg/viewanalytics?usp=form_confirm

And you can see all responses beyond just the first 100 here: https://docs.google.com/spreadsheets/d/1D-2QX9PiiisE2_yQZeQuX4QskH57VnuAEF4c3YlPJIA/edit?usp=sharing

Inspired by: http://www.paulgraham.com/say.html

Let's start with a test: Do you have any opinions that you would be reluctant to express in front of a group of your peers?

If the answer is no, you might want to stop and think about that. If everything you believe is something you're supposed to believe, could that possibly be a coincidence? Odds are it isn't. Odds are you just think what you're told.

Why this is a valuable exercise

Some would ask, why would one want to do this? Why deliberately go poking around among nasty, disreputable ideas? Why look under rocks?
I do it, first of all, for the same reason I did look under rocks as a kid: plain curiosity. And I'm especially curious about anything that's forbidden. Let me see and decide for myself.
Second, I do it because I don't like the idea of being mistaken. If, like other eras, we believe things that will later seem ridiculous, I want to know what they are so that I, at least, can avoid believing them.
Third, I do it because it's good for the brain. To do good work you need a brain that can go anywhere. And you especially need a brain that's in the habit of going where it's not supposed to.
Great work tends to grow out of ideas that others have overlooked, and no idea is so overlooked as one that's unthinkable. Natural selection, for example. It's so simple. Why didn't anyone think of it before? Well, that is all too obvious. Darwin himself was careful to tiptoe around the implications of his theory. He wanted to spend his time thinking about biology, not arguing with people who accused him of being an atheist.

Thanks to Khorton for the suggestion to do it as a Google form.


One of the form responses said:

Long-term future fund literally just gave their friends money for no reason / because one of the decision makers was hooking up with an applicant.

As a person who cares a lot about the long-term future fund, I wanted to mention that I am unaware of anything like this happening (I don't think any grants we've made have been to anyone who has ever been romantically involved with any of the fund members). If the person has additional evidence for this, I am interested in hearing it, and would offer full confidentiality and promise to investigate this extensively if sent to me. I think it is quite important for any member of the Long-Term-Future Fund to disclose any conflicts of interests of this kind, and would take any violation of that very seriously.

Obviously, if the person who wrote this feels uncomfortable reaching out to me for any reason, reaching out to any of the trustees of CEA, or to Julia Wise, is also an option; I expect they would also be willing to commit to full confidentiality on this.

It is admirably honest of you to highlight and address this, rather than hoping no-one notices.

I don't think any grants we've made have been to anyone who has ever been romantically involved with any of the fund members

Perhaps you could get the other judges to join you in a joint explicit declaration that you've never had any romantic or sexual relationships with any of the recipients? Would be good to put this at the bottom of the writeups.

edit: surprised people have downvoted this. To be clear, I was genuinely impressed that OP directly addressed this, even at the cost of drawing attention to it.

I haven't voted on this post, but if I had to guess, I'd expect it got downvoted because it would seem strange to have 'none of us have romantic/sexual relationships with the recipients' at the bottom of write-ups. I wouldn't expect that to be the kind of thing ordinary grantmakers would put on their write-ups. I'd have thought it would have the following problems:

  • Personally, I'd find it intrusive to have to make repeated specific comments about people I was not romantically or sexually involved with (despite, in fact, being long married and monogamous, and therefore always being trivially able to do so)
  • This would seem either to rule out a class of people from receiving grants, which might be unfair to them, or to force even more intrusive specific public comments ('none of the judges have had relationships with any of the recipients, except that judge A once slept with recipient B'). A better alternative seems to be to ask people to declare their conflicts of interest internally and have the person with the conflict of interest step out of the decision on that particular grant. (Incidentally, this harm would presumably disproportionately affect women, since the majority of the grant makers are men.)

I would expect the usual way to handle this would be to have a clear conflict of interest policy, stating in advance what constituted conflicts of interest (presumably family members, for example, would also be included) and what should be done in those cases.

You're definitely right that most grant-making organisations do not make much use of such disclaimers. However, I think this is mainly because it just doesn't come up: most grantmaking occurs between people who do not know each other much socially, and who are often older and married anyway.

In contrast, the EA community, especially in the Bay Area, is extremely tight socially, and also exhibits a high level of promiscuity. As such, the risk of decisions being unduly influenced by personal relationships is significantly higher. For example, back in 2016 OpenPhil revealed that they had advisors living with people they were evaluating, and evaluatees in relationships with OpenPhil staff (source). OpenPhil no longer seems to publish their conflicts of interest, but I suspect similar issues still occur. Separately, I have been told that some people in the Bay Area community explicitly use sexual relationships to make connections and influence the flow of funds from donors to workers and projects, which seems to raise severe concerns about objectivity and bias, as well as the potential for abuse (in both directions). I would be very concerned by either of these in the private sector, and see little reason to hold EAs to a lower standard.

Donors in general are subject to a significant information asymmetry and have few defenses against improper behaviour from organisations, especially in areas where concrete outputs are scarce. Explicit declarations that specific suspect conduct has not taken place represent a minimum level of such protection.

With regard to your bullet points, I think a good analogy would be disclaimers in financial research. Every piece of financial research comes with multiple pages of disclaimers at the end, including a promise from the authors that the piece represents their true opinions, and various sections about financial conflicts of interest. Perhaps the first analysts subject to these requirements found them intrusive; however, by now they are a totally automated and unremarked-upon part of the process. I would expect the same to apply here, partly because every disclosure should ideally say the same thing: "None of the judges were in a relationship with anyone they evaluated."

Indeed, the disclosure requirements in the financial sector cover cases like these quite directly. For example, the CFA's Ethical and Professional Standards (2016):

"... requires members and candidates to fully disclose to clients, potential clients and employers all actual and potential conflicts of interest"

and from 2014:

"Members and Candidates must make full and fair disclosure of all matters that could reasonably be expected to impair their independence and objectivity or interfere with respective duties to their clients, prospective clients, and employer. Members and Candidates must ensure that such disclosures are prominent, are delivered in plain language, and communicate the relevant information effectively.

In this case, donors and potential donors to an EA organisation are the equivalent of clients and potential clients of an investment firm, and I think a personal relationship with a grantee could reasonably be expected to impair judgement.

A case I personally came across involved two flatmates who both worked for different divisions in the same bank (Research and Sales&Trading). Because the bank (rightfully) took the separation of these two functions very seriously, HR applied a lot of pressure to them and they found alternative living arrangements.

Another example is lotteries, where the family members of employees are not allowed to participate at all, because their winning would risk bringing the lottery into disrepute:

In most cases the employee's immediate family and employees of lottery suppliers are also not allowed to play. In practice, there is no way that employees could alter the outcome of a game in their favor, but lottery officials generally believe that public confidence would be damaged should an employee win a large prize. (source)

This is perhaps slightly unfair, as they did not choose the employment of their family members, but this seems to be a small cost. The number of lottery family members is very small compared to the lottery-ticket-buying public, and there are other forms of gambling open to them. And the costs here should be smaller still, as all I am suggesting is disclosure, a much milder policy than prohibition.

I did appreciate that the fund's most recent write-up does take note of potential conflicts of interest, along with a wealth of other details. However, I could not find the sort of conflict of interest policy you suggested on their website.

At the level of generality where the disclaimer you're asking for is more like 'this represents our true views and we followed our conflict of interest policy on it', I agree that sounds very reasonable.

I'm not convinced that the conflict of interest policy should be modelled on large financial institutions and lotteries, though, given their huge sizes. For a small intellectual community, it would be far more limiting to simply rule anyone with any kind of relationship to any judge out of receiving a grant. I imagine this is more analogous to academic procedures within sub-disciplines. For example, in deciding whom to invite to a normative ethics conference on contractualism, it seems unwise to say that no views can be given on people you have a personal relationship with, since there are few people doing tonnes of work on contractualism, and they naturally interact with each other often and so are likely to be friends in some form. It seems better to have a clear idea of when you have to declare which relationships, and what measures should be taken in cases of which relationships.

I'm not sure what I think of this particular suggestion yet. I want to mention that I have principles that pull in two opposite directions - the large part of me that is strongly pro transparency, honesty and openness, and also the smaller (but still important!) part of me that is against disclaimers.

I think Stefan is basically correct, and perhaps we should distinguish between Disclaimers (where I largely agree with Robin's critique) and Disclosure (which I think is very important). For example, suppose a doctor were writing an article about how Amigdelogen can treat infection.

Disclaimers:

  • Obviously, I'm not saying Amigdelogen is the only drug that can treat infection. Also, I'm not saying it can treat cancer. And infection is not the only problem; world hunger is bad too. Also you shouldn't spend 100% of your money on Amigdelogen. And just because we have Amigdelogen doesn't mean you shouldn't be careful about washing your hands.

This is unnecessary, because no reasonable person would assume you were making any of these claims. Additionally, as Robin points out, by making these disclaimers you add pressure on others to make them too.

Disclosure:

  • I received a $5,000 payment from the manufacturer of Amigdelogen for writing this article, and hope to impress their hot sales rep.

This is useful information: readers would otherwise reasonably assume you were unbiased, and the disclosure lets them more accurately evaluate how much weight to put on your claim, given that as non-experts they do not have the expertise to directly evaluate the evidence.

My current read is that the Fund is currently abiding by such disclosure norms, but that you were asking for repeated disclaimers. Like, it might make more sense for the EA LTF Fund page to say in one place what the disclosure policy is, and then for the Fund to continue to abide by that policy. That is different from repeatedly saying at the end of the writeups (4 times per year) "not only is it our public policy to disclose such info, but I want to repeat that we definitely disclosed all the things above and didn't hide anything" - a request against which I think it's important to have a Schelling fence, rather than simply complying every time people ask. Pretty sure the potential list of disclaimers it could be reasonable to make is longer than this round's writeup, which is already 19k words.

I'm not sure Robin Hanson's argument against disclaimers is relevant here. It seems to have more to do with disclaimers (whose purpose is to defeat possible implicatures) being stylistically objectionable and inefficient as communication in blog posts and similar contexts (cf. his support of the classic style of writing). The grant writeup context seems quite different. (As a side note, I'm not sure Hanson is right; he frequently argues that people misattribute views to him, and I think that could in part be avoided if he included more disclaimers.)

Note that these considerations are not relevant to Michelle's comment above; her arguments are quite different.

I think it would be helpful to establish a norm that people remove themselves from investigations involving people they have a personal or professional relationship with (which to me means anything from being on first-name terms upwards, or where there is a conflict of interest). Where that is not possible (e.g. because there would not be enough competent people to do the work), it ought to be stated what personal or professional relationships exist - but I don't think we need to know whether that relationship involves going for the occasional drink or co-hosting weekly orgies...

I downvoted the post for the reason Michelle specified, though I will second that I'm glad Oli took the time to point out the comment.

A non-trivial fraction of the responses seem to me like widely-held beliefs ('popular unpopular opinions'), at least in my particular EA cluster (UK, mostly London). Some of them and other perhaps less widely-held ones I have expressed to other people, and there at least weren't any immediate obvious social repercussions.

Of course there are also many responses I completely disagree with.

"We should evaluate reducing abortions as an EA cause."

I once even wrote a research proposal on this for the CEA Summer Research Fellowship 2017. I was then invited to the programme.

I once even wrote a research proposal on this for the CEA Summer Research Fellowship 2017. I was then invited to the programme.

Could you link to the research by any chance?

Two people mentioned the CEA not being very effective as an unpopular opinion they hold; has any good recent criticism of the CEA been published?

It would be impossible to summarize my opinion of everything that's been written here. However, I'll second some commenters by noting that many of these views, if published, would likely be net-positive for the author (assuming that the full explanation was well-reasoned and specific).

Examples of posts I think could lead to reasonable discussion (there are many others):

"We should evaluate reducing abortions as an EA cause."
"In the grand scheme of things, chickens don't really matter."
"I'm pretty skeptical about polyamory." (This has limited relevance to EA outside of the Bay Area, so I'm not sure how much sense it makes as a Forum post. That said, if you wanted to try it out, I know someone who might want to be your co-author.)
"There's a lot of Girardian mimesis going on in EA cause prioritization." (I'm not sure what you mean by that, but I'd be eager to find out!)
"CEA has been a long-term counterfactual net negative to the movement and might be better off disbanding or radically reforming (except for Julia Wise's dept, who seem to be doing great work)." (Given that I work at CEA, I'd really like to hear more details about your views; we collect a lot of feedback on our various projects, but we're always open to more.)

I understand why someone might be reluctant to post about their views; good criticism is hard to write, and it's hard to predict how Forum users might respond to your ideas.

That's why I (one of the Forum's moderators) offer feedback and editing services to anyone who wants to publish a post. I'm not a perfect oracle, but I'll do my best to predict how people might react to your arguments, and suggest ways you might clarify your points. If you don't wind up publishing, your views will be safe with me.

(You can also use an anonymous email address or Forum account to send me a Google doc; I will encourage you to use a name, but I'll provide feedback either way.)

Overall, while collecting anonymous feedback has benefits, it seems much better to me that these points be expressed in full posts that can be discussed in detail, and I'd like to facilitate that process.

(If you're uncertain about the value of this process, I can refer you to others who have sent me work, and they can give [hopefully] unbiased feedback.)


*Of course, some of the views expressed on that form would invite widespread disapproval without strong evidence (e.g. personal attacks) or seem like trolling and would likely be received in that spirit (e.g. "right wingers should be euthanized").

I'm very glad that people feel reluctant to express some of those opinions, especially in the unexplained, offensive format that they were expressed in those answers.

Also, some of the comments have very similar wording, which makes me suspect that someone/some people inputted multiple entries.

The way this was presented, I would expect people to state their thesis but not the explanation. It's a 'what is your unpopular opinion' thread, not an 'explain your unpopular opinion' thread.

Even though it may be presented this way, I think it would be valuable if people explained their statements more. E.g., three people wrote that "we should evaluate reducing abortions as an EA cause" or something along those lines, but none of them explained why they think it's promising. If someone could write an elevator pitch for it as an answer to the form (or this comment), I'd be interested to read.

The case for this as an EA cause was defended in this Reddit post.

Apart from contraception, I would expect global health interventions to be most helpful in reducing deaths of unborn humans. Miscarriages and stillbirths are a much bigger deal than abortions, and in developing countries there is still a lot of room for health interventions to help for little money.

I would be surprised if other interventions to reduce unborn deaths were very cost-effective, even if you have a worldview which values embryos as much as newborns.*

I'd just be curious to see a writeup, especially of the impact of contraception access. Unborn humans don't feature in traditional QALY-based effectiveness analyses, and I'd be interested in how the results would change if they were included, even at a discounted rate. I am not expecting this to be a promising area for most people interested in effective altruism.

*An exception might be if you value pre-implantation blastocysts as much as born humans, in which case your priority could well be to sterilize everyone. See also Toby Ord's paper The Scourge.

Yeah, I think you have to view this exercise as optimizing for one end of the correctness-originality spectrum. Most of what is submitted is going to be uncomfortable to admit in public because it's just plain wrong, so if this exercise is to have any value at all, it's in sifting through all the nonsense, some of it pretty rotten, in the hope of finding one or two actually interesting things in there.

I think there are more than "one or two" interesting things there.

Also, some of the comments have very similar wording, which makes me suspect that someone/some people inputted multiple entries.

The post also didn't specify one submission per person, so I wouldn't really count this against anyone. In some cases, it's clear that the submitter was making no attempt to hide it; at least one even referenced their previous submission.

[anonymous]

This post seems to me clearly net-negative for EA for PR reasons, so I would argue against running a poll like this in the future. If you got a load of Labour or Conservative voters to express opinions they wouldn't be happy to express in public, you would end up with a significant subsection being offensive/beyond the usual pale, which would be used to argue against the worth of those social movements. The same applies here.

[Update: I'm not saying EA should self-censor like political parties. This was an example to illustrate the broader point.]

I'm sad to hear you think users of the Forum should censor our conversations in a fashion similar to mainstream political parties - groups not especially known for their free thinking or original ideas. Personally, I think most of the value from EA comes from (and will continue to come from) its novel intellectual frameworks for working on important problems, and not its political presence. (For example, OpenPhil says its main advantage over other actors in the space is its strategic cause selection, not its superior political sway.)

In that vein, I think it's incredibly valuable for a small intellectual community to understand what views it is silencing/punishing, and I found the above very valuable. I think there are issues of adversarial bias with it being fully public (e.g. people writing inaccurate/false-flag entries out of spite), and it could be better in future to do a version with Forum users with >100 karma.

[anonymous]

Hi Ben,

I see little upside in knowing almost all of what is said here, but see lots of downside.

(1) For some (most?) of these opinions, there isn't any social pressure not to air them. Indeed, as several people have already noted, some of these topics are already the subject of extensive public debate by people who like EA. (negative utilitarianism is plausible, utilitarianism is false, human enhancement is good, abortion is bad, remote working might lead to burnout, scepticism about polyamory, mental health is important etc). No value is added in airing these things anonymously.

(2) Some seem to be discussed less often, but it is not clear why. E.g., if people want to have a go at CFAR publicly, I don't really see what is stopping them, as long as their arguments are sensible. It's not as though criticising EA orgs is forbidden. I've criticised ACE publicly and, as far as I know, this hasn't negatively affected me. People have pretty brutally criticised the Long-Term Future Fund's formation and grants, etc.

(3) A small minority of these might reveal truths about flaws in the movement that there is social pressure not to air. (This is where the positive value comes from.)

(4) For the most important subset of beyond-the-pale views, there is a clear risk of people not wholly bought into EA seeing this and finding it extremely off-putting. This is a publicly published document which could be found by the media or major philanthropists when they are googling what effective altruism is. It could be shared on Facebook by someone saying "look at all the unpleasant things that effective altruists think". In general, this post allows people to pass reputational damage they might personally bear on to the movement as a whole.

Unfortunately, I can speak from first hand experience on the harm that this post has done. This post has been shared within the organisation I work for and I think could do very large damage to the reputation of EA within my org. I suspect that this alone makes the impact of this poll clearly net negative. I hope the person who set up this post sees that and reconsiders setting up a similar poll in the future.

Hey John,

For some (most?) of these opinions, there isn't any social pressure not to air them. Indeed, as several people have already noted, some of these topics are already the subject of extensive public debate by people who like EA.

First: Many positions in the public discourse are still strongly silenced. To borrow an idea from Scott Alexander, the measure of how silenced something is is not how many people talk publicly about it, but the ratio of people who talk publicly about it to the people who believe it. If a lot of people on a form say they believe something but are afraid to talk about it, I think that's a straightforward sign that they do feel silenced. I think you should indeed update towards believing that when someone makes an argument for negative utilitarianism, or human enhancement, or abortion, or mental health (to borrow some of your examples), several people are feeling grateful that the person is stepping out, and are watching with worry to see whether the person gets attacked/dismissed/laughed at. I'm pretty sure I personally have seen people lose points socially for almost every single example you listed, to varying degrees.

Second: Even for social and political movements, it's crucial to know what people actually believe but don't want to say publicly. The conservative right in the US of the last few decades would probably have liked to know that many people felt silenced about how much they liked gay marriage, given the very sudden swing in public opinion on that topic; they could then have chosen not to build major political infrastructure around the belief that their constituents would stand by that policy position. More recently I think the progressive left of many countries in Europe, Australia and the US would appreciate knowing when people are secretly more supportive of right wing policies, as there has been (IIRC) a series of elections and votes where the polls predicted a strong left-wing victory and in reality there was a slight right-wing victory.

Third: I think the public evidence of the quality of the character of people working on important EA projects is very strong and not easily overcome. You explain that folks at your org saw this post, and that you worry it will leave them thinking EA contains lots of bad people, or people who believe unsavoury things, or something like that. I guess my sense here is that there is a lot of strong, public evidence about the quality of the people who are working on EA problems, about the insights that many public figures in the community have, and about the integrity of many of its individuals and organisations.

  • You can see how Holden Karnofsky went around being brutally honest yet rigorous in his analysis of charities in the global health space.
  • You can see how Toby Ord and many others have committed to giving a substantial portion of their lifetime resources to altruistic causes instead of personal ones.
  • You can see how Eliezer Yudkowsky and Nick Bostrom spent several decades of their lives attempting to lay out a coherent philosophy and argument that allowed people to identify a key under-explored problem for humanity.
  • You can read the writings of Scott Alexander and see how carefully he thinks about ethics, morality and community.
  • You can listen to the podcast of and read the public writings by Julia Galef and see how carefully she thinks about complex and controversial topics and the level of charity she gives to people on both sides of debates.
  • You can read the extensive writing of The Unit Of Caring by Kelsey Piper and see how much she cares about both people and principles, and how she will spend a great deal of her time trying to help people figure out their personal and ethical problems.
  • I could keep listing examples, but I hope the above gets my point across.

I am interested in being part of a network of people who build trust through costly (yet worthwhile) acts of ethics, integrity, and work on important problems, and I do not think the above public Form is a risk to the connections of that network.

Fourth: It's true that many social movements have been able to muster a lot of people and political power behind solving important problems, and that this required them to care a lot about PR and hold very tight constraints on what they can be publicly associated with (and thus what they're allowed to say publicly). I think however, that these social movements are not capable of making scientific and conceptual progress on difficult high-level questions like cause prioritisation and the discovery of crucial considerations.

They're very inflexible; by this I don't merely mean that they're hard to control and can take on negative affect (e.g. new atheism is often considered aggressive or unkind), but that they often cannot course correct or change their minds (e.g. environmentalism on nuclear energy, I think) in a way that entirely prohibits intellectual progress. Like, I don't think you can get 'environmentalism, but for cause prioritisation' or 'feminism, but for crucial considerations'. I think the thing we actually want here is something much closer to 'science', or 'an intellectual movement'. And I think your points are much less applicable to a healthy scientific community.

I hope this helps to communicate where I'm coming from.

[anonymous]

Hi Ben,

Thanks for this, this is useful (upvoted)

1. I think we disagree on the empirical facts here. EA seems to me unusually open to considering rational arguments for unfashionable positions. People in my experience lose points for bad arguments, not for weird conclusions. I'd be very perplexed if someone were not willing to discuss whether or not utilitarianism is false (or whether remote working is bad etc) in front of EAs, and would think someone was overcome by irrational fear if they declined to do so. Michael Plant believes one of the allegedly taboo opinions here (mental health should be a priority) and is currently on a speaking tour of EA events across the Far East.

2. This is a good point and updates me towards the usefulness of the survey, but I wonder whether there is a better way to achieve this that doesn't carry such clear reputational risks for EA.

3. The issue is not whether my colleagues have sufficient public accessible reason to believe that EA is full of good people acting in good faith (which they do), but whether this survey weighs heavily or not in the evidence that they will actually consider. i.e. this might lead them not to consider the rest of the evidence that EA is mostly full of good people working in good faith. I think there is a serious risk of that.

4. As mentioned elsewhere in the thread, I'm not saying that EA should embrace political level self-restraint. What I am saying is that there are sometimes reasons to self-censor holding forth on all of your opinions in public when you represent a community of people trying to achieve something important. The respondents to this poll implicitly agree with that given that they want to remain anonymous. For some of these statements, the reputational risk of airing them anonymously does not transfer from them to the EA movement as a whole. For other statements, the reputational risk does transfer from them to the community as a whole.

Do you think anyone in the community should ever self-censor for the sake of the reputation of the movement? Do you think scientists should ever self-censor their views?



"People in my experience lose points for bad arguments, not for weird conclusions."

I just want to note that in my experience this only happens if you're challenging something that's mainstream in EA. If I tell an EA "I'm a utilitarian," that's fine. If I say, "I'm not a utilitarian," I need to provide arguments for why. That's scary, because I've never studied philosophy, and I'm often being stared down by a room full of people with philosophy degrees.

So basically, some of us are not smart enough to make good arguments for everything we believe - and we'll only lose social points for that if we mention that we have weird beliefs.

I might have more to say later. On (1), I want to state that, to me, my position seems like the conservative one. If certain views are being politically silenced, my sense is that it's good for people to have the opportunity to say so. In the alternative, people are only allowed to do this if you already believe that they're subject to unfair political pressure. Looking over the list and thinking "Hm, about 100 people say they feel silenced or that their opinions feel taboo, but I think they're wrong about being silenced (or else I think that their opinions should be taboo!), so they shouldn't have this outlet to say that" seems like a recipe for a correlated failure. Like, I don't fully trust my own personal sense of which of the listed positions actually is and isn't taboo in this way, and would feel quite bad dictating who was allowed to anonymously say they felt politically pressured based on who I believed was being politically pressured.

[anonymous]

There are two issues here. The less important one is - (1) are people's beliefs that many of these opinions are taboo rational? I think not, and have discussed the reasons why above.

The more important one is (2) - this poll is a blunt instrument that encourages people to enter offensive opinions that threaten the reputation of the movement. If there were a way to do this with those opinions laundered out, then I wouldn't have a problem.

This has been done in a very careless way, without due thought to the very obvious risks.

If there were a way to do this with those opinions laundered out, then I wouldn't have a problem.

I interpret [1] you here as saying "if you press the button of 'make people search for all their offensive and socially disapproved beliefs, and collect the responses in a single place' you will inevitably have a bad time. There are complex reasons lots of beliefs have evolved to be socially punished, and tearing down those fences might be really terrible. Even worse, there are externalities such that one person saying something crazy is going to negatively affect *everyone* in the community, and one must be very careful when setting up systems that create such externalities. Importantly though, these costs aren't intrinsically tied up with the benefits of this poll -- you *can* have good ways of dispelling bubbles and encouraging important whistle-blowing, without opening a Pandora's box of reputational hazards."

1) Curious if this seems right to you?

2) More importantly, I'm curious about what concrete versions of this you would be fine with, or support?

Someone suggested:

a version with Forum users with >100 karma

Would that address your concerns? Is there anything else that would?


[1] This is to a large extent "the most plausible version of something similar to what you're saying, that I understand from my own position", rather than "something I'm very confident you actually believe".

Thanks John, really useful to hear specifically how this has been used and why that was problematic. I certainly wouldn't have predicted this would be the kind of thing that would be of interest to your org such that it got shared around and commented on, and it makes me aware of a risk I wouldn't have considered.

Just as a sign of social support: I am grateful to whoever organized this poll, and would be deeply saddened to be part of a community where we punish people who organize polls like this. Obviously it's fine for Halstead to have his perspective, but it seemed valuable to provide a counterpoint to communicate that I would be happy to defend anyone who organizes polls like this, and put a significant fraction of my social capital behind our collective ability to do things like this.

[anonymous]

I respect your view Oli, but I don't think the person organising it put sufficient thought into the downsides of doing a poll such as this. They didn't discuss any of the obvious risks in the 'why this is a valuable exercise' section.

I do think that I am quite hesitant to promote a norm that you are no longer allowed to ask people questions about their honest opinions in public without having written a whole essay about the possible reasons why that might be bad. I don't think this is the type of question that one should have to justify; it's the type of question that our community should make as easy as possible.

There exist risks, of course, but I think those risks should be analyzed by core members of the community and then communicated via norms and social expectations. I don't think it's reasonable to expect every member of the community to fully justify actions like this.

[anonymous]

Hi, you start with a straw man here - I'm not requesting that they write a whole essay, I'm just requesting that they put some thought into the potential downsides, rather than zero thought (as occurred here). As I understand your view, you think the person has no obligation to put any thought into whether publishing this post is a good idea or not. I have to say I find this an implausible and strange position.

It is unclear whether the author has put thought into the downsides, all we know is that the author did not emphasize potential downsides in the writeup.

I'm not saying the person doesn't have to put any thought into whether publishing a post like this is a good idea or not, only that they don't have to put significant effort into publicly making the case that the benefits outweigh the costs. The burden of making that case is much larger than the burden of just thinking about it, and would be large enough to deter most people from simply asking honest questions of others in public.

[anonymous]

They have a section on 'why do this?' and don't discuss any of the obvious risks, which suggests they haven't thought properly about the issue. I think a good norm to propagate would be: people put a lot of thought into whether they should publish posts that could potentially damage the movement. Do you agree?

Suppose I am going to run a poll on 'what's the most offensive thing you believe - anonymous public poll for effective altruists'. (1) do you think I should have to publicly explain why I am doing this? (2) do you think I should run this poll and publish the results?

I do indeed generally think that whether their writings will "damage the movement" should not be particularly high in their list of considerations to think about when asking other people questions, or writing up their thoughts. I think being overly concerned with reputation has a long history of squashing intellectual generativity, and I very explicitly would not want people to feel like they have to think about how every sentence of theirs might reflect on the movement from the perspective of an uncharitable observer.

I prefer people first thinking about all of the following types of considerations, and, if the stakes seem high enough, maybe also adding reputation concerns, though the vast majority of the time the author in question shouldn't get that far down the list (and I also note that you are advocating for a policy that is in direct conflict with at least one item on this list, which I consider to be much more important than short-term reputation concerns):

  • Are you personally actually interested in the point you are making or the question you are asking?
  • Does the answer to the question you are asking, or answering, likely matter a lot in the big picture?
  • Is the thing that you are saying true?
  • Are you being personally honest about your behavior and actions?
  • Are you making it easier for other people to model you and to accurately predict your behavior in the future?
  • Does your question or answer address a felt need that you yourself, or someone you closely interact with, actually has?
  • Are you propagating any actually dangerous technological insights, or other information hazards?

I would strongly object to the norm "before you post to the forum, think very hard about whether this will damage the reputation of the movement", which I am quite confident would ensure that very little of interest would be said on this forum, since almost all interesting ideas that have come out of EA are quite controversial to many people, and also tended to have started out in their least polished and most-repugnant form.

I also remember the closing talk of EAG 2017, with the theme being "stay weird", that explicitly advocated for being open and welcoming to people who say things that might sound strange or unpopular. I think that reflected an understanding that it is essential for EA to be very welcoming of ideas that sound off putting and heretical at first, in particular if they are otherwise likely to be punished or disincentivized by most of society.

From a blogpost by Scott Alexander:

But I got a chance to talk to [Will MacAskill] – just for a few minutes, before he had to run off and achieve something – and I was shocked at how much he knew about all the weirdest aspects of the community, and how protective he felt of them. And in his closing speech, he urged the attendees to “keep EA weird”, giving examples of times when seemingly bizarre ideas won out and became accepted by the mainstream.

I think a key example in this space would be a lot of the work by Brian Tomasik, whose writing I think is highly repugnant to large fractions of society, but has strongly influenced me in my thinking, and is what I consider to be one of the most valuable bodies of work to come out of the community (and to embody its core spirit, of taking ethical ideas seriously and seeing where they lead you), even though I strongly disagree with him on almost every one of his conclusions.

So no, I don't think this is a good norm, and would strongly advise against elevating that consideration to the short list of things that people actually have the mental energy to attend to when posting here. Maybe when you are writing an article about EA in a major newspaper, but definitely not for this forum, the most private space for public discourse that we have, and the primary space in which we can evaluate and engage with ideas in their early stages.

[anonymous]

What do you make of my 'offensive beliefs' poll idea and questions?

I think an anonymous poll of that type is probably fine, though just asking for offensive ideas is probably less likely to get valuable responses than the OP, so I feel less strongly about people being able to make that type of poll happen.

I do however still think that knowing the answers to that poll would be reasonably useful, and I still expect this to help me and others build better models of what others believe, and also think there is a good chance that a poll like this can break an equilibrium in which a silent majority is unwilling to speak up, which I think happens quite a bit and is usually bad.

So yeah, I think it would be fine to organize that poll. It's a bit of a weird filter, so I would have some preference for the person adding an explicit disclaimer that this is an anonymous internet poll and ultimately this is primarily a tool for hypothesis generation, not a representative survey, but with that it seems likely reasonably positive to me. I don't feel like that survey is as important as the type of survey that the OP organized, but I wouldn't want to punish a person for organizing it, or filling it out.

[anonymous]

ok cheers. I disagree with that but feel we have reached the end of productive argument

*nods* seems good.

This post has been shared within the organisation I work for and I think could do very large damage to the reputation of EA within my org.

Would you mind sharing, at least in general terms, which organisation you work for? I confess that if I knew I have forgotten.


(This is publicly available information, so I hope it's fine if I share this. I noticed some people had downvoted this comment earlier on, so I am a bit hesitant, but after thinking more about it, I can't think of any particular reason why this question should go unanswered.)

Halstead works at Founders Pledge.

I think there are issues of adversarial bias with it being fully public (e.g. people writing inaccurate/false-flag entries out of spite), and it could be better in future to do a version with Forum users with >100 karma.

Indeed. Anon open forms are maximally vulnerable to this: not only can detractors write stuff (for example, this poll did show up on reddits that are archly critical of EA etc.), but you can signal-boost your own renegade opinion if you're willing to make the trivial effort to repeatedly submit it (e.g. "I think Alice sucks and people should stop paying attention to her", "I completely agree with the comment above - Alice is just really toxic to this community", "Absolutely agreed re. Alice, but I feel I can't say anything publicly because she might retaliate against me", etc.)

On detractors writing: Given some of the comments on the survey, I would be surprised if quite a few answers hadn't come from people who have no connection to the EA community save as critics. For example:

EA is a waste of money and time. Another example of tech minded people trying to reinvent the wheel.

This doesn't seem like someone who actually spends time on the EA Forum (or, if they do, I wish they'd do something they found more enjoyable).

This set-up does seem like it could be exploited in an adversarial manner... but my impression from reading the poll results is that this is weak evidence against that actually being a failure mode, since it doesn't seem to have happened.

I didn't notice any attempts to frame a particular person multiple times. The cases where there was repeated criticism of some orgs seemed plausibly to come from different accounts, since they often offered different reasons for the criticism or seemed stylistically different.

Moreover, if asked beforehand about the outcomes of something that can be read as "an open invitation to anonymous trolling that will get read by a huge amount of people in the movement"... I would have expected to see things way, way worse than what I actually saw. In fact, I've seen many public and identifiable comments sections on Facebook, YouTube or Twitter that were much worse than this anonymous poll.

(I claim these things weakly based on having read through all the responses in the sheet. I didn't analyse them in-depth with an eye to finding traces of adversarial action, and don't expect my approach here would have caught more sophisticated attempts.)

I don't object to this activity. I found it really interesting to read what others think and can't say. Still, I think there are times when it's in a community's best interest to self-censor, or at least not to post their least acceptable views online.

[anonymous]

This post actively encourages people to post their least acceptable views online, so seems bad by this argument.

I agree with you; I just want to point clearly toward the end of the spectrum that is "a healthy intellectual community" rather than "a unified voting block that doesn't allow its members to step out of line".

[anonymous]

The political analogy was an example; it was not meant to say that standard political constraints should apply to EA. The thought applies to any social movement, e.g. for people involved in environmentalism, radical exchange or libertarianism. If I were a libertarian and someone came to me saying "why don't we run a poll of libertarians on opinions they are scared to air publicly and then publish those opinions online for the world to see", I think it would be pretty obvious that this would be an extremely bad idea.

Do you have any opinions that you would be reluctant to express in front of a group of your peers? If the answer is no, you might want to stop and think about that. If everything you believe is something you're supposed to believe, could that possibly be a coincidence? Odds are it isn't. Odds are you just think what you're told.

Not necessarily! You might just be less averse to disagreement. Or perhaps you (rightly or wrongly) feel less personally vulnerable to the potential consequences of stating unpopular opinions and criticism.

Or, maybe you did quite a lot of independent thinking that differed dramatically from what you were "told", and then gravitated towards one or more social circles that happen to have greater tolerance for the things you believe, which perhaps one or more of your communities of origin did not.

There are now 120 responses, but only the first 100 can be accessed from the URL above. The remaining 20 should be accessible by following the "Other (20)" link at the bottom of the form, but the link appears to be broken (a friend of mine also reports being unable to open it, so I conclude it's not a problem specific to my setup).

If the OP (arikr) can't fix the problem, here's a possible workaround:

1. Open the form used to collect the responses from Google Drive.

2. Create a spreadsheet to store the collected responses, by clicking on the green icon to the left of the three vertical dots.

3. Generate a public link to this spreadsheet, by clicking on the green button on the top right, then on 'Get shareable link' on the top right of the pop-up window.

4. Share this link with us.

Thanks! I've followed this and have added a link to the full spreadsheet in the OP now.

For what it's worth, if I could choose between this form existing or not existing, I would prefer that it exists. But we can also try to think about something in-between. Like:

(1) We agree in advance that there will be some clean-up of the form before release. We clarify what this means; I suppose we will want to say that offensive or ad hominem content will be removed. Maybe we propose a list of made-up examples to explain what we want to be removed. This will be subject to some debate, but we can figure out something reasonable.

(2) We collect all the answers without disclosing them.

(3) We ask for a pool of people to volunteer for cleaning up the form.

(4) We select a small subset of these volunteers at random and they do the job. They check on each other for the cleaning, and then release the cleaned-up form.

I suppose that the simple fact of having this structure in place will already essentially clean up the form, whatever we mean by that.

More simply, we can also ask for what we want. We could say, for example, "This form is about ideas you'd be reluctant to share. If you have a concern about a specific person, please talk to Julia Wise rather than posting about it here."

Or we could ask, "What's the title of an EA Forum post that you'd like to see written, but that you think would receive negative karma overall?"

I know it sounds trite, but the question you ask really affects the answers you receive.

I also think the form should exist. I would agree that attacks on individuals should be removed (with a comment left explaining why). I'm uneasy about screening the comments more than that, as then people may not trust that no bias has come in. For negative comments about organisations, perhaps people could be encouraged to briefly explain their thoughts and link to evidence. I would hope that people reading the comments would know to take criticism of organisations with no evidence given with a very big pinch of salt, since there will be people around with gripes due to rejected applications etc.

There are a few comments shitting on different cause areas, in a manner that sounds like statements of fact and with little explanation (others were critical but at least explained); but these disagreements are largely based on differing ethical values, priors, or weights given to evidence. Given the impression of confidence I get from these comments, I wonder if the submitters actually understand the arguments for why some EAs prioritize these causes over the ones the submitters prioritize. Or maybe they didn't feel the need to qualify their statements further, because the post is only asking for opinions.

Also, maybe there's some retaliation here, since AI risk, global health and poverty, and animal protection have each been shat on.
