
Evan_Gaensbauer

2277 karma · Joined Sep 2014 · Working (6-15 years) · Pursuing other degree/diploma

Participation
3

  • Attended an EA Global conference
  • Attended more than three meetings with a local EA group
  • Received career coaching from 80,000 Hours

Sequences
3

Setting the Record Straight on Effective Altruism as a Paradigm
Effective Altruism, Religion and Spirituality
Wild Animal Welfare Literature Library

Comments
833

Welcome to the EA Forum! Thanks for sharing!

I've known Kat Woods for as long as Eric Chisholm has. I first met Eric several years before either of us first got involved in the EA or rationality communities. I had a phone call with him a few hours ago letting him know that this screencap was up on this forum. He was displeased you didn't let him know yourself that you started this thread.

He is extremely busy for the rest of the month. He isn't on the EA Forum either. Otherwise, I don't speak for Eric. I've also made my own reply in the comment thread Eric started on Eliezer's Facebook post. I'm assuming you'll see the rest of that comment thread on Facebook too. 

You can send me a private message to talk or ask me about whatever, or not, as you please. I don't know who you are. 

For anyone else curious, here is a Google Doc I've started writing up about the origins of the EA and rationality groups in Vancouver.
https://docs.google.com/document/d/1p8MPC5j2aZrVX_ugBSHy8-N9HSHWiulR5GHJBfKhQe8/edit?usp=sharing

I just posted on the Facebook wall of another effective altruist:

 Hey, I really appreciate everything you do for the effective altruism community! Happy birthday! 

We would all greatly benefit from expressing our gratitude like this to each other more often.

I've had a half-finished draft post about how effective altruists shouldn't be so hostile to newcomers to EA from outside the English-speaking world (i.e., primarily the United States and Commonwealth countries). On top of English not being their first language, especially for younger people or students who don't have as much experience, there are the problems of mastering the technical language of a particular field, as well as the jargon unique to EA. That can be hard for even many native English speakers.

LessWrong and the rationality community are distinct from EA, and even AI safety has grown much bigger than the rationality community. There shouldn't be any default expectation that posters on the EA Forum will conform to the communication style of rationalists. If rationalists expect that because they consider their communication norms superior, the least they should do is make more effort to educate others on how to get up to speed, like with style guides, etc. Some rationalists have done that, though rationalists at large aren't entitled to expect others will do all the work of learning to write just like they do by themselves, without help.

I read almost all of the comments on the original EA Forum post linking to the Time article in question. If I recall correctly, Will made a quick comment that he would respond to these kinds of details when he would be at liberty to do so. (Edit: he made that point even more clearly in this shortform post he wrote a few months ago. https://forum.effectivealtruism.org/posts/TeBBvwQH7KFwLT7w5/william_macaskill-s-shortform?commentId=ACDPftuESqkJP9RxP)

I assume he will address these concerns you've mentioned here at the same time he provides a fuller retrospective on the FTX collapse and its fallout.

Upvoted. Thanks for clarifying. The conclusion to your above post was ambiguous to me, though I now understand.

The rest of us can help by telling others that Will MacAskill is seeking to divest himself of this reputation whenever we see or hear someone talking about him as if he still wants to be that person (not that he ever did, as evidenced by his statement above, a sentiment I've seen him express in years past as well).

Please send me links to posts with those arguments you've made, as I've not read them, though my guess would be that you haven't convinced anyone because some of the greatest successes in EA started out just as small. I remember the same kind of skepticism being widely expressed about projects like that.

Rethink Priorities comes to mind as one major example. The best example is Charity Entrepreneurship. Not only was it one of those projects whose potential to scale was doubted, it keeps incubating successful non-profit EA startups across almost every EA-affiliated cause. CE's cumulative track record might be the best empirical argument against the broad applicability of your position to the EA movement.

One way my perspective has changed on this over the last few years is that I now advise others not to give much weight to a single point of feedback. Especially when people tell me only one or two others have discouraged them from be(com)ing a researcher, I tell them not to stop trying in spite of that, even when the person giving the discouraging feedback is in a position of relative power or prestige.

The last year seems to have proven that the power or prestige someone has gained in EA is a poor proxy for how much weight their judgment should be given on any single EA-related topic. If Will MacAskill and many of his closest peers are doubting how they've conceived of EA for years in the wake of the FTX collapse, I expect most individual effective altruists confident enough to judge another's entire career trajectory are themselves likely overconfident.

Another example is AI safety. I've talked to dozens of aspiring AI safety researchers who've felt very discouraged by an illusory consensus thrust upon them that their work was essentially worthless because it didn't superficially resemble the work being done by the Machine Intelligence Research Institute or whatever other approach was in vogue at the time. For years, I suspected that was bullshit.

Some of the brightest effective altruists I've met were being inundated with personal criticism harsher than any even Eliezer Yudkowsky would give. I told those depressed, novice AIS researchers to ignore the dozens of jerks who concluded that the way to give constructive criticism, like they presumed Eliezer would, was to emulate a sociopath. These people were just playing a game of 'follow the leader' that not even the "leaders" would condone. I distrusted their hot takes, based on clout and vibes, about who was competent and who wasn't.

Meanwhile, over the last year or two, more and more of the AIS field, including some of its most respected luminaries, have come out of the woodwork to say, essentially, "lol, turns out we didn't know what we were doing with alignment the whole time, we're definitely probably all gonna die soon, unless we can convince Sam Altman to hit the off switch at OpenAI." I feel vindicated in my skepticism of the quality of the judgment of many of our peers.
