That may be the case with those in your social circle, but several existing EA donors have been vocally upset on the EA subreddit. One question we've had to answer there multiple times is whether GiveWell had anything to do with this, because, if so, they would cease their donations there.
People on the EA subreddit are generally more casual than those who come here to the EA Forum, but comments like this have been typical over the last few days:
..."I really have trouble believing that someone who has even vaguely brushed up against the insights of EA would buy
It depends. In isolation, that statement does seem concerning to me, like they may have been overestimating the potential negative optics.
What matters to me here is whether sufficient thought was put into all the different aspects. Clearly, they thought a lot about the non-optics stuff. I have no way of easily evaluating those kinds of statements, as I have very little experience organizing conferences. But I’m concerned that maybe there wasn’t sufficient thought given to just how bad the optics can get with this sort of thing.
My career has been in communi...
FWIW, as someone who also works in communications, I strongly disagree here and think EA spends massively too much of its mental energy thinking about optics.
More specifically:
I tend to criticize virtue ethics and deontology a lot more than I praise them -- IMO these are approaches that often go badly wrong. But I think PR (for a community like EA) is an area where deontology-like adherence to "behave honestly and with integrity" and virtue-ethics-like focus on "be the sort of person internally who you would find most admirable and virtuous" tends to have ...
I did feel a little nervous about the optical effects, but think it’s better to let decisions be guided less by what we think looks good, and more by what we think is good — ultimately this was a decision I felt happy to defend.
While I understand this sentiment, optics can sometimes matter much more than you may at first expect. In this specific case, the kneejerk response of many people on social media to this seeming incongruity (a seemingly extravagant purchase by a main EA org) can potentially cement negative sentiment. ...
The main problem with lavishness, IMHO, is not optics per se, but rather that it's extremely easy for people to trick themselves into believing that spending money on their own comfort/lifestyle/accommodations is net-good-despite-looking-bad (for productivity reasons or whatever). This generalizes to the community level.
(To be clear, this is not to say that we should never follow such reasoning. It's just a serious pitfall. This is also not original—others have certainly brought this up.)
Also, I imagine having communicated the reasoning behind the purchase publicly before the criticisms would have gone some way in reducing the bad optics, especially for onlookers who were inclined to spend a little bit of time to understand both perspectives. So thinking more about the optics doesn't necessarily lead you to not do the thing.
I can see this point, but I'm curious - how would you feel about the reverse? Let's say that CEA chose not to buy it, and instead did conferences the normal way. A few months later, you're talking to someone from CEA, and they say something like:
Yeah, we were thinking of buying a nice place for these retreats, which would have been cheaper in the long run, but we realised that would probably make us look bad. So we decided to eat the extra cost and use conference halls instead, in order to help EA's reputation.
Would you be at all concerned by this statement, or would that be a totally reasonable tradeoff to make?
This actually may affect whether Wikipedia policies will count certain EA Forum articles as "reliable sources" (in the Wikipedia sense). I'll be taking this back to WikiProject: Effective Altruism to see whether this may allow us to cite specific EA Forum posts for claims where no secondary source is yet available.
Unfortunately, I don't have a firm launch date that we can publicly share yet. It will be no later than Q4 2022, but we're aiming for earlier than that.
Effective Giving Quest hasn't quite officially launched — when we do, we will be certain to post an announcement on the EA Forum. But EffectiveGivingQuest.org did somewhat quietly go live today.
Unfortunately, we're not yet ready to host a full list of games either about EA or created by EAs, but I can share a smaller list of non-commercial games if you're interested.
Right now, Effective Giving Quest is in the process of working with several game developers and publishers that are receptive to EA...
Note that the hyperlink given no longer works. GWWC now has the deworming study response post at this updated link.
Note that the location has changed to (1906,587). Details on why/how are on the subreddit.
EDIT: After relocating to (1906,587), we have successfully been able to put up the EA logo plus the EffectiveAltruism.org URL! I'm impressed by the effort that some of our redditors have put in.
While it may not exactly be the best use of any one person's time, and considerable effort (by 2–3 redditors) went into negotiating with nearby subreddits, it appears that a Discord channel of relatively few EAs has been successful at this communications project...
While the OP's effort didn't end up working, a much smaller alternate effort has been somewhat successful. The logo is at (1644,570) and coordination is at r/EffectiveAltruism.
Note that I don't actually believe this is a good use of our time; it is unlikely to result in any sort of outreach without text next to the logo, and not only does it seem unlikely that text will persist, but also the small size means it will likely get no play when the final r/Place image gets shared around online. Nevertheless, it seems like a number of EAs are interested in doing...
At Effective Giving Quest, we’re aware of several video and board games that are relevant to EA, either because they deal with the topic of EA directly or because the developers behind the games are EAs themselves.
We have not yet launched, and so are still in the process of standardizing a way for EA-friendly developers to commit to giving a set percentage of their profits toward EA causes. Once we do this, we should have a much more comprehensive list of relevant-to-EA video and board games that we can share with the EA community.
I'll come back to edit this post with a list of EA-relevant games after EGQ launches.
I just wanted to note that the link to Gleb's post from six years ago is an example of how not to do this sort of thing. His organization, Intentional Insights, actively harmed the EA movement rather than helped it.
I still think EA outreach via t-shirts is good — my favorite shirts are from GWWC, GW, ACE, and EA Global! — but communications is hard, and the way Gleb went about it was not good at all.
Thank you so much for all your work managing the EA Forum. You’ve done an excellent job, and I’m sure that you’ll do many and varied good things at OpenPhil.
Also, we’re so excited to have you join Effective Giving Quest as our first partnered streamer! I’m really looking forward to what we can accomplish in the gaming space for effective altruism. (c:
it is more important to build a robust movement of people collaboratively and honestly maximizing impact than it is to move additional money to even very good charities in the short term.
I generally agree with your sentiment here. However, I think the analysis changes when it comes to introducing people to EA.
Giving Multiplier doesn't target existing EAs with this offer. They're targeting people outside the movement who very likely are not yet familiar with EA ideas. Because donation matching is an industry norm, reaching t...
This is not how I understand the term. What you're describing is what I would call a "commitment". But a "precommitment" is more strict: the idea is that you have to follow through in order to ensure that you can get through a Newcomb's-paradox-style situation.
You can use precommitments to take advantage of time-travel shenanigans, to successfully one-box Newcomb, or to ensure that near-copies of you (in the multiverse sense) can work together to achieve things that you otherwise wouldn't.
With that said, it may make sense to say that we humans can't re...
Don't forget that this is iterated, though. In order to save the site from going down a year from now, we might want to follow through on a tit-for-tat strategy this year.
I'm not certain that this is the correct play, but it is an important distinction from the usual MAD theorizing.
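To make the iterated logic concrete, here's a toy sketch (entirely my own illustration; the moves are made up and have nothing to do with the actual Petrov Day mechanics) of tit-for-tat in a repeated game:

```python
# Toy iterated game: 1 = cooperate (don't press the button), 0 = defect (press).
# Purely illustrative; not a model of the real Petrov Day setup.

def tit_for_tat(opponent_history):
    """Cooperate on the first round, then mirror the opponent's last move."""
    return 1 if not opponent_history else opponent_history[-1]

def always_defect(opponent_history):
    return 0

def play(strategy_a, strategy_b, rounds=5):
    """Run two strategies against each other; each sees only the other's past moves."""
    hist_a, hist_b = [], []
    for _ in range(rounds):
        move_a = strategy_a(hist_b)
        move_b = strategy_b(hist_a)
        hist_a.append(move_a)
        hist_b.append(move_b)
    return hist_a, hist_b

print(play(tit_for_tat, always_defect))  # ([1, 0, 0, 0, 0], [0, 0, 0, 0, 0])
```

The retaliation from round two onward is the whole point: actually following through on it this year is what makes the precommitment credible in future years.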
On the one hand, in order for MAD to work, decision-makers on both sides must be able to make credible threats of a retaliatory strike. This is also true in this experiment’s case: if we assume that this will be iterated on future Petrov Days, then we must show that any tit-for-tat precommitments made are followed through on.
But at the same time, if LessWrong takes down the EA Forum, it just seems like wanton destruction to similarly take it down, too. I know that, as a holder of the codes, I should ensure that I’m making a fully credible threat by ...
Were you selected to have the codes for both LessWrong and the EA Forum? I see you made a similar post on LW.
There was good reason back then to believe that overpopulation was a real problem whose time would come relatively soon. If it weren't for technological breakthroughs with dwarf wheat and the IR8 rice variety, spearheaded by Norman Borlaug and others, our population would by now have seriously outstripped our ability to grow food -- the so-called Malthusian trap.
Using overpopulation as an example here would be akin to using something like global climate change as an example in the present, if it turns out that a technological breakthrough in the next 5-10 ye...
I am concerned that although I explained that the views I put forward here are my own, they are being taken as though it is some official response from ACE. This is not the case. To eliminate any potential further misunderstanding on this, I will not be engaging further on this thread.
Clarifying [the specifics of the CARE conference decision] would be helpful.
Regarding the CARE conference decision, I want to give a disclaimer that I was not closely involved in this decision, so I’m not clear on what the exact reasoning was. I’ve shared my opinions above on the situation, but as this conference took place eight months ago and isn’t related to the core of ACE’s work, I don’t expect ACE to make any further official statements on the matter.
In my response to Wei Dai, I explain that it’s just not possible for me nor for ACE to share confiden...
A lot goes into ACE’s evaluation decisions. ACE’s charity evaluation process is extremely transparent; I welcome you to read ACE’s official description of that process if you want more detail generally on this kind of thing.
Regarding specifics, I’m unfortunately not at liberty to discuss any confidential details of this case beyond what is already explained in our 2020 evaluation of Anima International.
I am familiar with ACE's charity evaluation process. The hypothesis I expressed above seems compatible with everything I know about the process. So alas, this didn't really answer my question.
In general, organizations should respect the confidentiality of their communications with other groups. This is especially important for ACE, as it relies on animal advocacy organizations feeling comfortable enough to share their internal data with it. While transparency is one of ACE's primary goals, and ACE is already extremely transparent about its charity evaluation process and the reasoning that led to the selection of each of the recommended charities in its reviews, I hope you can understand why ACE also values retaining a collaborative rela...
The only thing of interest here is what sort of compromise ACE wanted. What CARE said in response is not of immediate interest, and there's certainly no need to actually share the messages themselves.
Perhaps you can understand why one might come away from this conversation thinking that ACE tried to deplatform the speaker? To me at least it feels hard to interpret "find a compromise" any other way.
Although I am on the board of Animal Charity Evaluators, everything I say on this thread is my own words only and represents solely my personal opinion of what may have been going on. Any mistakes here are my own and this should not be interpreted as an official statement from ACE.
I believe that the misunderstanding going on here might be a false dilemma. Hypatia is acting as though the two choices are to be part of the social justice movement or to be in favor of free open expression. Hypatia then gives evidence that shows that ACE is doing things like th...
I am concerned that although I explained that the views I put forward here are my own, they are being taken as though it is some official response from ACE. This is not the case. To eliminate any potential further misunderstanding on this, I will not be engaging further on this thread.
While your words here are technically correct, putting it like this is very misleading. Without breaking confidentiality, let me state unequivocally that if an organization had employees who had really bad views on DEI, that would be, in itself, insufficient for ACE to downgrade them from top to standout charity status. This doesn't mean it isn't a factor; it is. But the actions discussed in this EA forum thread would be insufficient on their own to cause ACE to make such a downgrade.
Just to clarify, this currently sounds to me like you are saying "the act...
Thanks for your thoughtful comment, Eric.
I agree that being part of the social justice movement can be compatible with supporting free expression, and I added a note in my post to clarify that.
Speaking as an insider, I can explicitly say that ACE, as an organization, did not intend in any way to cancel this speaker in the sense that you mean here.
That's a relief to hear, but it also seems hard to reconcile with the public Facebook post. ACE wrote (emphasis mine):
...In fact, asking our staff to participate in an event where a person who had m
Can you explain more about this part of ACE's public statement about withdrawing from the conference:
We took the initiative to contact CARE’s organizers to discuss our concern, exchanging many thoughtful messages and making significant attempts to find a compromise.
If ACE was not trying to deplatform the speaker in question, what were these messages about and what kind of compromise were you trying to reach with CARE?
The issue is that the HTML is missing both of the id attributes that tell the links where to land. What you want is something like:
In-page anchor links: <a id="ref1" href="#fn1">¹</a>
Linked footnote: <p id="fn1">¹ <small>Footnote text.</small></p>
Footnote link back to article text: <a href="#ref1">↩</a>
But what you currently have is <a href="#fn-E9Mru3HsHeKH8GsTZ-1"><sup>[1]</sup></a> for the in-page link and <a href="#fnref-E9Mru3HsH...
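Putting those pieces together, a minimal self-contained page with working footnote links might look like this (the ids here are illustrative; your auto-generated ids would work just as well, so long as each href="#..." has a matching id somewhere on the page):

```html
<!DOCTYPE html>
<html>
<body>
  <p>
    Some article text.<a id="ref1" href="#fn1"><sup>[1]</sup></a>
  </p>

  <hr>

  <!-- The footnote carries the id that the in-text link points at -->
  <p id="fn1">
    <sup>[1]</sup> <small>Footnote text.</small>
    <!-- The back-link targets the id on the in-text reference -->
    <a href="#ref1">↩</a>
  </p>
</body>
</html>
```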
While I think the practice of sharing purchasing recommendations can be good (I love the concept of crowdsourcing research into great purchases!), I am concerned about some of the items that you've recommended here.
The diet books and health supplements you've listed are not items that I would personally endorse, and I don't believe that the EA community as a whole would uncritically endorse them either. While I'm comfortable with EA forum posts that argue for their effectiveness, I am not comfortable with EA posts that give the impression that these are no...
I would strongly argue against this, primarily because it is against Reddit's rules. Although subreddits do get to choose many of the policies in their own space, vote manipulation is a rule that is enforced site-wide.
I anticipate these lesson plans being very useful! Thank you for sharing.
My siblings (aged 13, 17, & 25) and I have a twice-yearly event where I will pick a topic and teach them about it in depth. I plan to use one of these lesson plans in my next meeting with them this summer.
The problem isn't that people with aphantasia can't visualize; it's that these people are generally unaware that they have it in the first place. (People who know they have it will correct for it in the same way that handicapped people will automatically correct for 'ableist' language.) Because of this, I'm not sure what kind of notice would suffice. I think saying to skip the sensory detail step if it doesn't resonate may work for people with poor phantasia; but for pure aphants like me, the phrase "struggle with sensory detail" won't pattern match to wha...
For the Effective Planning section, when trying to get across the idea of a Murphyjitsu inner sim, you explained the process by using visualizations. "Imagine biting into this apple. ... Picture a good friend, and imagine talking to them. This is something that's familiar, that you're good at."
I, as well as 3% of the population, have a condition called aphantasia where visualizing a scene like this is impossible for us to do. Another 5-10% of the population have "poor" phantasia; they can imagine a scene like this, but not well at all, and certainly not in...
One of the things Max recommends is mobilizations and stretching. While the links he provides explain why mobilizations may be important, they don't do a great job of actually demonstrating the stretches on video.
For that, you may want to watch Day[9] demonstrate the stretches people in the StarCraft community use before playing in a professional eSports match or before starting any serious practice session.
In addition to the other points brought up, I wanted to add that "probably good" has ~4 million Google search results, and the username/URL for "ProbablyGood" has already been taken on Facebook, Twitter, Instagram, etc. This may make the name especially difficult to effectively market.
Also related is the idea that the moral value of additional information is high when there is relatively low resilience in your credence that the current intervention is best. This leads to the (to me) rather unintuitive conclusion that if you have two research paths that both look to be equally good to look into for potentially improving the world, then, ceteris paribus, it may be better to invest in the research path for which you have less evidence that it is a good research path to follow. From Amanda Askell in the link:
...[I]f the expected concrete value
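As a toy numerical sketch of that resilience point (my own illustration, not anything from Askell's post): take two interventions with the same expected effectiveness, where our credence about one is far less resilient because it rests on less evidence, and compare how much learning the truth about each would be worth relative to a known alternative.

```python
# Toy value-of-information sketch. Two interventions both have expected
# effectiveness 0.5, but our credence about B is far less resilient,
# modeled here as a wider Beta distribution. All numbers are made up.
import random

random.seed(0)

def voi_of_studying(alpha, beta_, baseline=0.5, trials=20000):
    """Expected gain from learning an intervention's true effectiveness,
    versus a fixed alternative of known value `baseline`. With perfect
    information we pick max(true_value, baseline); without it, the two
    options look equal, so we expect only `baseline`."""
    total = 0.0
    for _ in range(trials):
        true_value = random.betavariate(alpha, beta_)
        total += max(true_value, baseline) - baseline
    return total / trials

# A: lots of prior evidence -> resilient credence, narrow Beta(50, 50)
# B: little prior evidence  -> fragile credence, wide Beta(2, 2)
print(voi_of_studying(50, 50))  # small: new info rarely changes the choice
print(voi_of_studying(2, 2))    # larger: new info often changes the choice
```

The fragile-credence case yields a noticeably larger value of information: precisely because the credence is unresilient, new evidence is much more likely to flip which option looks best.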
I suppose I just assumed that scale ups happened regularly at big NGOs and I never bothered to look closely enough to notice that it didn't. I find this very surprising.
No, the OP's argument assumes that the lives of farmed animals are net negative. It's saying that welfare improvements can at most bring those lives up to neutral, which would mean that, in expectation, farming animals remains harmful even with welfare work. Nevertheless, that is less harmful than ignoring farmed animal welfare would be, which means the welfare work itself is still net positive.
Meanwhile, the argument in your link claims that farmed animal welfare work may be net negative, but it relies on the opposite assumption, namely that the lives of farmed animals may be net positive.
Before answering your question, it may help to get a little more context about why ACE Movement Grants exist, and how they differ from the charity evaluation work that ACE does. This is important because you may be overestimating the relative importance of individual ACE Movement Grants compared to ACE's Recommended Charity Fund.
ACE’s overall goal is to find and support the most effective approaches to animal advocacy. The main way ACE accomplishes this is through ACE’s top and standout charities, which receive the highest (by far) influ...
In July, Charity Navigator announced their new nonprofit rating system that they call Encompass. This system looks at four “beacons” to determine their rating of each charity. One of these beacons is Impact & Results. At the time, they did not specify how they would evaluate this beacon. Their latest post, published yesterday, finally sets out the initial methodology they will use.
Some basic takeaways:
Hi Eric, thanks for your note! Happy to provide some more context on a few things:
It is a (perhaps unfortunate) fact that many true conclusions alienate a lot of people. And it is much more important that we are able to identify those conclusions than that we find more people to join our ranks, or that our ranks are more ethnically / culturally / etc. diverse.
We are agreed that truth is of paramount importance here. If a true conclusion alienates someone, I endorse not letting that alienation sway us. But I think we disagree on two points:
If there are Facebook threads that drain your ability to focus for hours, it seems pretty reasonable for that person to avoid such facebook threads. ... [It] seems way better to have that responsibility be on the individual.
We agree here that if something is bad for you, you can just not go into the place where that thing is. But I think this is an argument in favor of my position: that there should be EA spaces where people like that can go and discuss EA-related stuff.
For example, some people have to go to the EAA facebook thread as a part of their job. They...
Animal Charity Evaluators suspended its paid internship program for the second half of 2020 but plans to resume it in early 2021. This didn't result in anyone losing a job; rather, it meant leaving unfilled some temporary intern positions that otherwise likely would have been filled, had COVID-19 not happened. There are more details about this in ACE's Room for More Funding blogpost.
If you’re correct that the harms that come from open debate are only minor harms, then I think I’d agree with most of what you’ve said here (excepting your final paragraph). But the position of bipgms I’ve spoken to is that allowing some types of debate really does do serious harm, and from watching them talk about and experience it, I believe them. My initial intuition was closer to your point of view — it’s just so hard to imagine how open debate on an issue could cause such harm — but, in watching how the...
Improving signaling seems like a positive-sum change. Continuing to have open debate despite people self-reporting harm is consistent with both caring a lot about the truth and also with not caring about harm. People often assume the latter, and given the low base rate of communities that actually care about truth they aren't obviously wrong to do so. So signaling the former would be nice.
Note: you talked about systemic racism but a similar phenomenon seems to happen anywhere laymen profess expertise they don't have. E.g. if someone tells you that they thi...
Surely there exists a line at which we agree on principle. Imagine that, for example, our EA spaces were littered with people making cogent arguments that steel manned holocaust denial, and we were approached by a group of Jewish people saying “We want to become effective altruists because we believe in the stated ideals, but we don’t feel safe participating in a space where so many people commonly and openly argue that the holocaust did not happen.”
In this scenario, I hope that we’d both agree that it would be appropriate for u...
It is, as you said, entirely possible that it is due to ignorance or misinformation, and it may even be that the truth of the matter is that there is indeed no systemic racism in today's society, but none of this changes the fact that, in saying these things, we are being racist.
I've quoted the above because I think it provides a decent summary of what you're saying.
I would say this is the first time I've come across the idea that someone who (hypothetically) correctly says that systemic racism doesn't exist would then correctly b...
I don't think this is pivotal to anyone, but just because I'm curious:
If we knew for a fact that a slippery slope wouldn't occur, and the "safe space" was limited just to the EA Facebook group, and there was no risk of this EA forum ever becoming a "safe space", would you then be okay with this demarcation of disallowing some types of discussion on the EA Facebook group, but allowing that discussion on the EA forum? Or do you strongly feel that EA should not ever disallow these types of discussion, even on the EA Facebook group?
(by "disallowing discussion", I mean Hansonian level stuff, not obviously improper things like direct threats or doxxing)
On Q1: You mention only being aware of a few research orgs that have public ToC diagrams. I wanted to bring your attention to Animal Charity Evaluators, which uses ToC diagrams as a way of better communicating with the public how ACE thinks that a given recommended organization might be causing one of its animal advocacy outcomes.
ACE also uses a ToC diagram in its strategic plan, but this might not be easily searchable because it exists publicly only in PDF documents. (The webpage hosting the strategic plan doesn't use the phrase "theory of chang...
I agree that that was definitely a step too far. But there are legitimate middle grounds that don't have slippery slopes.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
I refuse to defend something as ridiculous as the idea of cancel culture writ large. But I sincerely worry about the lack of racial representativeness, equity, and inclusiveness in the EA movement, and there needs to be some sort of way that we can encourage more people to join the movement without them feeling like they are not in a safe space.
I think there is a lot of detail and complexity here and I don't think that this comment is going to do it justice, but I want to signal that I'm open to dialog about these things.
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
On the face of it, this seems like a bad idea to me. I don't want "introductory" EA spaces to have ...
For example, allowing introductory EA spaces like the EA Facebook group or local public EA group meetups to disallow certain forms of divisive speech, while continuing to encourage serious open discussion in more advanced EA spaces, like on this EA forum.
You know, this makes me think I know just how academia was taken over by cancel culture. They must have allowed "introductory spaces" like undergrad classes to become "safe spaces", thinking they could continue serious open discussion in seminar rooms and journals, then those undergrads became graduate ...
No need to apologize. It's just a shortform, and I have enough cognitive dissonance on the topic to not be really sure what I think about it myself.
I agree with you that the phrase "people of the global majority" sounds weird and naively seems to divide people into unintuitive groups unnecessarily. But in my post I was talking about friends that I personally know who have been hurt by things the EA movement has said in some introductory social media spaces, and their preferred name as a group is "people of the global majority". By ...
No, I don't think the discussion on the Hanson thread in this forum involved casual bigotry. In general, I think discussion here on the EA forum tends to be more acceptable than in other EA social media spaces. (Maybe this is because nested threads are supported here, or maybe it's because people consider this space more prestigious and so act more respectfully.) But much of the discussion of the Hanson incident on Twitter would certainly qualify as casual bigotry, and I've witnessed a few threads on EA Facebook groups that also involved cle...
I incorrectly (at 4a.m.) first read this as saying "Would this include making EA apparel…for views like nativism and traditionalism?", and my mind immediately started imagining pithy slogans to put on t-shirts for EAs who believe saving a single soul has more expected value than any current EA longtermist view (because ∞>3^^^3).