Someone filled out my anonymous contact form earlier this week asking to talk, but didn't leave their contact info. If this was you, please let me know how to reach you!
There’s an asymmetry between people/orgs that are more willing to publicly write impressions and things they’ve heard, and people/orgs that don’t do much of that. You could call the continuum “transparent and communicative, vs locked down and secretive” or “recklessly repeating rumors and speculation, vs professional” depending on your views!
When I see public comments about the inner workings of an organization by people who don’t work there, I often also hear other people who know more about the org privately say “That’s not true.” But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org’s communications staff, and then follow whatever discussion comes from it.
A downside is that if an organization isn’t prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That’s frustrating, but it’s a standard way that many organizations operate, both in EA and in other spaces.
There are some good reasons to be slower and more coordinated about communications. For example, I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they’d all waited a day or two to get on the same page and write a response with the correct facts. This process is worth doing for some important discussions, but few organizations will prioritize doing this every time someone is wrong on the internet.
So what’s a reader to do?
When you see a claim that an org is doing some shady-sounding thing, made by someone who doesn’t work at that org, remember the asymmetry. These situations will look identical to most readers:
- The claim is true.
- The claim is false, but the people who know better are busy and haven't taken the time to correct it.
- The claim is false, and the org is taking a day or two to coordinate an accurate response.
Epistemic status: strong opinions, lightly held
I remember a time when an org was criticized, and a board member commented defending the org. But the board member was factually wrong about at least one claim, and the org then needed to walk back wrong information. It would have been clearer and less embarrassing for everyone if they’d all waited a day or two to get on the same page and write a response with the correct facts.
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back sounds… not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round, and far better than them never writing a comment at all. (I guess the exception to this is if the incorrect claims had legal ramifications that couldn’t be undone. But I don’t think that’s true of the case you refer to?)
A downside is that if an organization isn’t prioritizing back-and-forth with the community, of course there will be more mystery and more speculations that are inaccurate but go uncorrected. That’s frustrating, but it’s a standard way that many organizations operate, both in EA and in other spaces.
I don’t think the fact that this is a standard way for orgs to act in the wider world says much about whether this should be the way EA orgs act. In the wider world, an org’s purpose is to make money for its shareholders: the org has no ‘teammates’ outside of itself; no-one really expects the org to try hard to communicate what it is doing (outside of communicating well being tied to profit); no-one really expects the org to care about negative externalities. Moreover, withholding information can often give an org a competitive advantage over rivals.
Within the EA community, however, there is a shared sense that we are all on the same team (I hope): there is a reasonable expectation for cooperation; there is a reasonable expectation that orgs will take into account externalities on the community when deciding how to act. For example, if communicating some aspect of EA org X’s strategy would take half a day of staff time, I would hope that the relevant decision-maker at org X takes into account not only the cost/benefit to org X of whether or not to communicate, but also the cost/benefit to the wider community. If half a day of staff time helps others in the community better understand org X’s thinking,[1] such that, in expectation, more than half a day of (quality-adjusted) productive time is saved (through, e.g., community members making better decisions about what to work on), then I would hope that org X chooses to communicate.
When I see public comments about the inner workings of an organization by people who don’t work there, I often also hear other people who know more about the org privately say “That’s not true.” But they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org’s communications staff, and then follow whatever discussion comes from it.
I would personally feel a lot better about a community where employees aren’t policed by their org on what they can and cannot say. (This point has been debated before—see saulius and Habryka vs. the Rethink Priorities leadership.) I think such policing leads to chilling effects that make everyone in the community less sane and less able to form accurate models of the world. Going back to your example, if there was no requirement on someone to get their EAF/LW comment checked by their org’s communications staff, then that would significantly lower the time and effort barrier to publishing such comments, and then the whole argument around such comments being too time-consuming to publish becomes much weaker.
All this to say: I think you’re directionally correct with your closing bullet points. I think it’s good to remind people of alternative hypotheses. However, I push back on the notion that we must just accept the current situation (in which at least one major EA org has very little back-and-forth with the community)[2]. I believe that with better norms, we wouldn’t have to put as much weight on bullets 2 and 3, and we’d all be stronger for it.
I guess it depends on the specifics of the situation, but, to me, the case described, of a board member making one or two incorrect claims (in a comment that presumably also had a bunch of accurate and helpful content) that they needed to walk back sounds… not that bad? Like, it seems only marginally worse than their comment being fully accurate the first time round...
I agree that it depends on the situation, but I think this would often be quite a lot worse in real, non-ideal situations. In ideal communicative situations, mistaken information can simply be corrected at minimal cost. But in non-ideal situations, I think one will often see things like:
Fwiw, I think different views about this ideal/non-ideal distinction underlie a lot of disagreements about communicative norms in EA.
they have other things to do with their workday than write a correction to a comment on the Forum or LessWrong, get it checked by their org’s communications staff, and then follow whatever discussion comes from it.
I think anonymous accounts can help a bit with this. I would encourage people to make an anonymous account if they feel it would help them quickly share useful information without having to follow the discussion (while keeping in mind that no account is truly anonymous, and committed people can likely deanonymize it).
Sometimes people mention "expanding the moral circle" as if it's universally good. The US flag is an example of an object that has moved in and out of the circle of special care over time.
The US Flag Code states: "The flag represents a living country and is itself considered a living thing." When I was a child, my scout troop taught us that American flags should never touch the ground, and that a worn-out flag should be disposed of respectfully, either by burial (in a wooden box, as if it were a person) or by burning (while saluting the flag and reciting the Pledge of Allegiance) and then burying the remains. Example instructions. People from most countries find this hard to believe!
One explanation is that the veneration of this physical object is symbolic of respect for military troops and veterans, but my scout troop sure put more effort into burning flags properly than we ever did into helping troops or veterans in any more direct way.
Which beings / objects / concepts are worthy of special care can be pretty arbitrary. Expansion isn't always good, and contraction of the moral circle isn't always bad.
Further reading: https://gwern.net/narrowing-circle
Good point and good fact.
My sense, though, is that if you scratch most "expand the moral circle" statements you find a bit of implicit moral realism. I think generally there's an unspoken "...to be closer to its truly appropriate extent", and that there's an unspoken assumption that there'll be a sensible basis for that extent. Maybe some people are making the statement prima facie though. Could make for an interesting survey.
Cross-posting Georgia Ray's / @eukaryote's "I got dysentery so you don't have to," a fascinating read on participating in a human challenge trial.
Is Robert Burns' poem "To a Mouse, on Turning Her Up in Her Nest With the Plough, November, 1785" one of the earliest writings on wild animal welfare?
Maybe he meant it mostly as a joke. (Poetry is a medium for fancy people, he's a not-fancy guy plowing a field, addressing an even-less fancy-being: a mouse.) But I kind of think he meant it? He also wrote about "poor people are good, actually," and I like that he was thinking about the even-less-powerful creature he'd just rendered homeless.
"I'm truly sorry man's dominion,
Has broken nature's social union,
An' justifies that ill opinion,
Which makes thee startle
At me, thy poor, earth-born companion,
An' fellow-mortal!"
Wikipedia provides an English translation for those of us who find the Scots difficult.
I really like that poem. For what it's worth, I think a number of older texts from China, India, and elsewhere have things that range from depictions of care towards animals to more directly philosophical writing on how to treat animals (sometimes as part of teaching yourself to be a better person).
Some links:
I added these examples to the LessWrong tag: https://www.lesswrong.com/tag/wild-animal-welfare
Fun note: this is where the title of "Of Mice and Men" comes from:
But, Mousie, thou art no thy-lane,
In proving foresight may be vain;
The best-laid schemes o' mice an' men
Gang aft agley,
An' lea'e us nought but grief an' pain,
For promis'd joy!
Translation:
But Mouse, you are not alone,
In proving foresight may be vain:
The best-laid schemes of mice and men
Go oft awry,
And leave us nothing but grief and pain,
For promised joy!
That's a nice example!
I mention a few other instances of early animal welfare concern in this post:
Curiously, lots of them seem to come from the Anglo-Saxon sphere (though there's definitely selection bias since I looked mostly through English-speaking sources; also, we have older examples of concern for animals by e.g. Jains and Buddhists).
I love The Mower by Philip Larkin - it captures a deep instinct for kindness, especially towards animals.
Write roundup posts!
The posts I've made that I think yielded the most value for the amount of work I put in were essentially lists of other people's work.
EA Syllabi and teaching materials
Giving now vs. later: a summary
There are other formats that may make sense, like tags for material on this forum, or wikis. But the general principle is that you can do something really useful by making it easy for people to find existing material on a topic.