This dialogue with my optimizer-self, showing it evidence that it was undermining its own values by ignoring my other values, was very helpful for me too.
Do you subscribe to moral realism? If not, I'm curious what you think of Spencer's post: https://www.spencergreenberg.com/2022/08/tensions-between-moral-anti-realism-and-effective-altruism/
Just want to say that even though this is my endorsed position, it often goes out the window when I encounter tangible cases of extreme suffering. Here in Berlin, there was a woman I saw on the subway the other day who walked around dazedly with an open wound, seemingly not in touch with her surroundings, walking barefoot, and wearing an expression that looked like utter hopelessness. I don't speak German, so I wasn't able to interact with her well.
When I run into a case like this, the "preliminary answer" I wrote above is hard to keep in mind, especially when I think of the millions who might be suffering in similar ways, yet invisibly.
Hi Braxton – I feel moved by what you wrote here and want to respond later in full.
For now, I just want to thank you for your phrase here: "How could I possibly waste my time on leisure activities once I’ve seen the dark world?" I think this might deserve its own essay. I think it poses a challenge even for anti-realists who aren't confused about their values, in the way Spencer Greenberg talks about here (worth a read).
Through the analogy I use in my essay, it might be nonsensical, in the conditions of a functional society, to say something like "It's mor...
Agree so much with the antidote of silliness! I’m happy to see that EA Twitter is embracing it.
Excited to read the links you shared, they sound very relevant.
Thank you, Oliver. May your fire burn into the distance.
Thank you, David! I also worry about this:
When we model to the rest of the world that "Effective" "Altruism" looks like brilliant (and especially young) people burning themselves out in despair, we also create a second-order effect: we broadcast the image of Altruism as one tiled with suffering for the greater good. This is not exactly inviting or promising for long-term engagement on the world's most pressing problems.
Of course, if one believes that AGI is coming within a few years, one might not care about these potential second-order effects....
Ah, got it. My current theory is that maximizing caused suffering (stress) which caused gut motility problems which caused bacterial overgrowth which caused suffering, or some other crazy feedback phenomenon like that.
Sometimes positive feedback loops are anything but. 😓
Moreover:
It's not obvious to me that severe sacrifice and tradeoffs are necessary. I think their seeming necessary might be the byproduct of our lack of cultural infrastructure for minimizing tradeoffs. That's why I wrote this analogy:
...To say that [my other ends] were lesser seemed to say, “It is more vital and urgent to eat well than to drink or sleep well.” No – I will eat, sleep, and drink well to feel alive; so too will I love and dance as well as help.
Once, the material requirements of life were in competition: If we spent time building shelter i
If you believe, as I do, that there aren't sturdy philosophical grounds for de-valuing my other ends, then life becomes a puzzle of how to fulfill all of your ends, including – for me – both my EA ends and making art actually for its own sake (i.e., not primarily for the sake of instrumentally useful psychological health so I can do the greater good).
Hi Brad, I appreciate this reply. I wonder if we might have a fundamental disagreement!
I personally don't regard my non-EA ends as "beastly" – or, if I do, my valuing of EA ends is just as beastly as my valuing of other ends. I can adopt a moral or cultural framework that disagrees with my pre-existing "value function," and what it deems valuable. But something about this is a bit awkward: Wasn't it my pre-existing value function that deemed EA ends to be valuable?
I'm not sure the right way to address this. My burnout and disabling depression predated my gut condition by something like 9 months. My doctors have told me that inciting events for gut symptoms like mine very often include a period of severe stress/burnout. Given that I was quite fit and healthy, without any history of chronic illness, it seems likely the causality of what you're suggesting was reversed. However, it's true that once my gut symptoms started this made my subsequent suffering much worse. =(
I'm curious to hear more about how critiques have been processed historically by the EA movement. Shortform post here: https://forum.effectivealtruism.org/posts/boYH7XH4xE9iugxWi/tyleralterman-s-shortform?commentId=RJYzym2mwrnXP9amn
Apropos of the "Criticism and Red Teaming Contest," I am curious about how critiques have historically shaped EA:
a. What critiques have resulted in large and tangible changes in the movement?
b. What were the means by which these critiques were "metabolized"? E.g., did they require a prestigious champion? Was there a highly shared article that changed people's minds? Etc.
Hmmm I think it’s actually really hard to critique EA in a way that EAs will find convincing. I wrote about this below. Curious for feedback: https://twitter.com/tyleralterman/status/1511364183840989194?s=21&t=n_isE2vL3UIJsassqyLs8w
I would expect there to be higher quality submissions if the team running this were willing to compile a list of (what they consider to be) all the high quality critiques of EA thus far, from both the EA Forum and beyond. Otherwise I expect you’ll get submissions rehashing the same or similar points.
I think a list like this might be useful for other purposes too:
Fwiw, re: "… Tyler Alterman who don't glom with EA" – just to clarify, I glom very much with broad EA philosophy, but I don't glom with many cultural tendencies inside the movement, which I believe make the movement an insufficient vehicle for implementing the philosophy. There seems to be an increasing number of former hardcore EA movement folks with the same stance. (Though this is what you might expect as movements grow, change, and/or ossify.)
(I used to do EA movement-building full time. Now I think of myself as an EA who collaborates with the movement from the outside, rather than the inside.)
Planning to write up my critique and some suggested solutions soon.
+1
Though I suspect it will be difficult to get to a sufficient threshold of EAs using LinkedIn as their social network without something similar to a marketing campaign. Any takers?
I agree with Owen's comments and the others. The basic message of my post, however, seems to be something like, "Make sure you compare your plans to reality" while emphasizing the failure mode I see more often in EA (that people overestimate the difficulty of launching their own project).
Would it be correct to say that your comments don't disagree with the underlying message, but rather that you believe my framing will have net harmful effects, because you predict that many people reading this forum will be incited to take unwise actions?
I'm glad you think it's nonsense, since - in some strange state of affairs - a certain unnamed person has been crushing on the communal Pom sheet lately. =P
Well-observed! Here's my guess on where I rank on the various conditions above:
Potential improvement: Rather than a binary pass/fail for experts, we would like a metric that grades the material they present.
Agreed. I tried to make it binary for the sake of generating good examples, but the world is much more messy. In the spreadsheet version I use, I try to assign each marker a rating from "none" to "high."
The Cambridge Handbook of Expertise
How worthwhile do you think it would be for someone to read the handbook?
Issue: It seems like the model might have trouble filtering people who have detailed but wrong models.
100%. The model above is only good for assessing necessary conditions, not sufficient ones. I.e., someone can pass all four conditions above and still not be an expert.
I imagine there is another class of experts who have decades of experience, rich implicit models and impressive achievements, but who would struggle to present concise, detailed answers if you asked them to share their wisdom. I suspect that quiet observation of such a person in their work environment, rather than asking them questions, would yield a better measure of their level of expertise, but this requires considerable skill on the part of the observer.
Indeed: tacit experts. The way I assess this now is basically by looking at indirect signs around...
I tested my predictions against the experts by rating applications for the top 5 candidates myself, then getting the domain expert to rank them and compare scores, watching them do so.
Ah! This sounds like a great feedback mechanism for one's expert assessment abilities. I'm going to steal this. =)
Tyler’s model seems somewhat helpful here, and adding the components from John’s model improves it again.
+1 - you definitely want to use more signs than the ones I mentioned above to be confident that you have identified sufficient markers of expertise. The ones listed above are only intended to be necessary markers. A good way of generating markers beyond the necessary ones: think about a few people who you can confidently say are experts. What do they have in common? (Please send me any cool markers you've come up with! My own list has over 30 now, and it doesn't seem like the ceiling has been hit.)
While it seems possible to make some progress on the problem of independently assessing expertise, I want to stress that we should still expect to fail if we proceed to do so entirely independently, without consulting a domain expert.
Right, I should have mentioned this. Your job is much, much easier if you can identify a solid "seed" expert in the domain with a few caveats:
"Check to see whether the field has tangible external accomplishments."
This is a good one. I think you can decently hone your expertise assessment by taking an outside view which incorporates the base rate of strong expertise among average practitioners in the field, as well as its variance. (Say that five times fast.) For example:
It will come from CEA's EA Outreach budget. Winners may choose to re-donate to CEA if they think that we're the best target of funds, or donate somewhere else they think is a better target. That said, we think the main reason someone would be motivated to enter the contest is to ensure that the thousands of future people being introduced to EA are introduced by the best content.
Just changed it to a Creative Commons Attribution 4.0 International License, so posting it elsewhere is fine (or even encouraged).
Very much support the thrust of this post. Oliver Habryka on the EA Outreach team is currently chatting with the Good Judgment Project team about implementing a prediction market in EA.
What about the following simple argument? "If you look at many many (most?) movements or organizations, you see mission creep or Goodharting."
Do you think there is anything that puts us in a different reference class?
Hi Julia - I wholeheartedly agree with your semantic point: the words "hardcore" and "softcore" seem potentially harmful.
However, I wonder if the stronger thesis is true: "Having strictly defined categories of involvement doesn’t seem likely to help."
It seems plausible, but I can think of worlds in which categories of involvement actually do play an important role. (For instance, there is a reason galas will do things like sort donors into silver, gold, and platinum levels based on their level of contribution.) Since one coul...
I was chatting with Julia Wise about this post. It seems plausible that the question of which types of people we prioritize recruiting isn't such a black-and-white issue. For instance, it seems likely that EA can better take advantage of network effects with some mass-movement-style tactics.
That said, it seems likely that there might be a lot of neglected low-hanging fruit in terms of outreach to people with extreme influence, talent or net worth.
EA Ventures would be very interested in hearing ideas for donor coordination. Feel free to email us about it at tyler@centreforeffectivealtruism.org.
It's a pretty tricky problem that probably requires the team solving it to have a good understanding of social dynamics from having solved similar issues in the past, so the ideal solution would factor this in.
+1 I'd avoid over-associating EA with just effective giving. E.g., startup-founding, political advocacy, and scientific research can all be undertaken with EA ideas in mind.
I would place quite a bit of emphasis on epistemic tools, since valuing (and ideally exercising) reason and evidence is the primary thing which differentiates EA and unites people across different causes.
Things to be covered might include:
Prioritization
Building models about relevant parts of the world
Epistemic humility (being open to changing your mind, steelmanning other people's arguments, etc)
People to contact for these things:
Oliver Habryka (panisnecis@gmail.com) - he runs an undergrad course at Berkeley
Cat Lavigne (cat.m.lavigne@gmail.co
Thanks for the comments, all! I pretty much agree with the bulk of them so far, and have added an edit to the post above.
Thoughts on how favorably or unfavorably pursuing movement-building compares to other EA career paths?
Yearly salary range (helpful for getting sponsorships in the future of EA events if the average yearly salary turns out to be high)
The difference between this and vegan flyering is that you're targeting groups that have already self-selected for one aspect of EA. That said, I could definitely see a much lower than .1% rate being the case. Though the cost-effectiveness still seems competitive even at a conversion rate of .01% or even .001%: that's 10 days and 100 days, respectively, of work for a year of earn-to-give.
That said, as Peter alluded to, earn-to-give still seems competitive if, e.g., your funding means that much more of this work happens. Unless, by doing the work, you're recruiting EtGers who will fund the work. Unless... [mind explodes]
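The inverse scaling behind those day counts can be sketched in a few lines. This is a hypothetical illustration, not the original calculation: it assumes (as the comment's figures imply) roughly one day of outreach work per conversion at a 0.1% rate, and `BASELINE_DAYS` is an assumed constant for that baseline.

```python
# Days of flyering work per recruited earn-to-give year, assuming
# effort per conversion scales inversely with the conversion rate.
# The 1-day-at-0.1% baseline is an assumption implied by the comment.
BASELINE_RATE = 0.001  # 0.1% conversion rate
BASELINE_DAYS = 1.0    # assumed days of work per conversion at that rate

def days_per_conversion(rate: float) -> float:
    """Work required per conversion if effort scales inversely with rate."""
    return BASELINE_DAYS * BASELINE_RATE / rate

for rate in (0.001, 0.0001, 0.00001):
    print(f"{rate:.3%} conversion -> {days_per_conversion(rate):g} days of work")
```

Under these assumptions, a tenfold drop in the conversion rate (0.01%) gives 10 days of work per conversion, and a hundredfold drop (0.001%) gives 100 days, matching the figures above.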
Peter Buckley attempted to hire some virtual assistants from ODesk. They were way too slow. My guess would be that EAs have a much better sense of what types of groups to look for and where to find them. The task also requires a decent amount of research, which is a comparative advantage of many EAs.
Would love to get tons of VAs on this though if you can think of a better way to use them.
I’m so happy to hear this!!