Evan_Gaensbauer

Joined Sep 2014

Sequences (3)

Setting the Record Straight on Effective Altruism as a Paradigm
Effective Altruism, Religion and Spirituality
Wild Animal Welfare Literature Library

Comments (749)

Even if someone considers a post's impact to be high, how much karma the post has doesn't necessarily mean much, because karma doesn't discern the post's worth.

"Impact" needs to be operationalized and it remains infeasible to achieve a consensus on how it should be operationalized. It might be possible in theory to achieve agreement on what counts as an objectively valuable post, though in practice the criteria anyone applies for judging the value of any posts might as well be completely arbitrary.

For example, if my preferred cause, like invertebrate suffering, is shared as a top priority by only a small minority of the effective altruism community, every post about that cause will receive way less karma than most other posts. That doesn't change the facts that:

  1. I'll consider the most valuable posts to be ones that have a below-average karma score among all posts.

  2. The posts most effective altruists consider the most impactful may not matter much to me at all.

Once someone becomes a more independent-minded effective altruist, how much karma a post receives matters way less.

Ok, either SBF is actually a complete moron, or this was a very calculated ploy.

he comes off as bizarre and incompetent, rather than as an evil super-villain

Why not both?

To be fair, you have to have a very high IQ to understand dank EA memes. The humor is extremely subtle, and without a solid grasp of theoretical memetics most of the jokes will go over a typical effective altruist's head. There's also the nihilistic outlook, which is deftly woven into their characterisation; the aesthetic draws heavily from Narodnaya Volya literature, for instance. The fans understand this stuff; they have the intellectual capacity to truly appreciate the depths of these jokes, to realize that they're not just funny: they say something deep about LIFE. As a consequence, EAs who don't understand dank memes truly ARE normies; of course they wouldn't appreciate, for instance, the humour in the catchphrase "thank machine doggo," which itself is a cryptic reference to Apinis' original classic. I'm smirking right now just imagining one of those guileless geeks scratching their heads in confusion as genius trolley problems reveal themselves on their smartphone screens. How I pity them. 😂 And yes, by the way, I DO have a dank EA memes tattoo. And no, you cannot see it. It's for the ladies' eyes only, and even they have to demonstrate that they're within 5 IQ points of my own (preferably lower) beforehand.

The question of to what extent more effective altruists should return to earning to give, now that the value of companies like Meta and FTX has declined over the last year, has me pondering whether that's worthwhile, given that nobody in EA seems to know how to usefully spend way more money per year on multiple EA causes.

I've been meaning to write a post about the mixed messaging around what to do about AI alignment: there has been increased urgency to onboard new talent and to launch and expand projects, yet there is an apparently growing consensus that almost everything everyone is doing is either pointless or making things worse. Meanwhile, the setbacks facing the clean meat industry have been mounting during the last couple of years, and there aren't clear or obvious ways to make significant progress on overcoming them mainly by throwing more money at the problem.

I'm not as familiar with how much room for more funding other EA priority areas have before hitting diminishing marginal returns. I expect that, other than a few clear-cut cases like some of GiveWell's top-recommended charities, there isn't a strong sense of how to usefully spend more money per year than most causes are already receiving from the EA community.

It's one thing for a smaller number of people returning to earning to give to know the best targets for increased marginal funding that might fall through after the decline of FTX. It seems shortsighted, though, to send droves of people rushing back into earning to give when there wouldn't be any consensus on which interventions they should be giving to.

None of Musk's projects are by themselves bad ideas. None of them are obviously a waste of effort either. I agree the impacts of his businesses are mostly greater than the impact of his philanthropy, while the opposite is presumably the case for most philanthropists in EA. 

I agree his takeover of Twitter so far doesn't strongly indicate whether Twitter will be ruined. He has made it much harder for himself to achieve his goals with Twitter, though, through the series of mistakes he has made during the last year in the course of buying it.

The problem is that he is someone able to have an impact that isn't based strictly in either business or philanthropy. A hits-based approach based on low-probability, high-consequence events will sometimes include a low risk of highly negative consequences. The kind of risk tolerance associated with a hits-based approach doesn't work when misses could be catastrophic:

  • His attempts in the last month to intervene in the war in Ukraine and disputes over Taiwan's sovereignty seem to speak for themselves as at least a yellow flag. That's enough of a concern even ignoring whatever impacts he has on domestic politics in the United States. 
  • The debacle over whether OpenAI as an organization will be a net positive for AI alignment, and the involvement of effective altruism in the organization's founding, is thought of by some as one of the worst mistakes in the history of AI safety/alignment. Elon Musk played a crucial role in OpenAI's founding, has acknowledged he made mistakes with OpenAI, and has since distanced himself from the organization. In general, the overall impact he has had on AI alignment is ambiguous. Other than world leaders, he remains one of a small number of individuals with the most capability to impact public responses to advancing AI, though it's not clear whether or how much he could be relied on to have a positive impact on AI safety/AI alignment in the future.

These are only a couple of examples of the potential impact and risks of the decisions he makes, which are unlike anything any individual in EA has done before. An actor in his position should feel a greater degree of fear and uncertainty, enough to at least inspire more caution. My assumption is that he isn't cautious enough. I asked my initial question in the hope that the causes of his recklessness can be identified, to aid in formulating adequate protocols for responding to potentially catastrophic errors he may commit in the future.

I agree that Musk should have more epistemic guardrails, but also that EA should be more ambitious and less timid, though more tactful. Trying to always please everyone, be apolitical, and fly under the radar can constitute an extreme risk aversion that is a risk in itself.

I acknowledged in some other comments that I wrote this post sloppily, so I'm sorry for the ambiguity. Musk's recent purchase of Twitter and its ongoing consequences are part of why I've made this post. It's not about it being bad that he bought Twitter; it's about the series of mistakes he has made in the course of buying it.

It's not about him being outspoken and controversial. The problem is Musk not being sufficiently risk-averse, and potentially having blind spots that could have a significant negative impact on his EA-related/longtermist efforts.

I'm thinking of asking people like that what they're doing, but I'm also intending to request feedback from them and others in EA on how to communicate related ideas better. I've asked this question to check whether there are major factors I might be missing, as a prelude to a post with my own views. That would be high-stakes enough that I'd put in the effort to write it well, effort I didn't put into this question post. I might title it something like "Effective Altruism Should Proactively Help Allied/Aligned Philanthropists Optimize Their Marginal Impact."

Other than at the Centre for Effective Altruism, who are the new/senior communications staff it'd be good to contact?

I meant the initial question literally and sought an answer. I listed some general kinds of answers and clarified that I'm seeking less obvious potential factors that may be shaping Musk's approaches. I acknowledge I could have written that better, and that the tone makes it ambiguous whether I was trying to slag him off under the guise of asking a sincere question.

Strongly upvoted. You've put my main concern better than I knew how to put it myself.
