I think a chatbot fails the cost-benefit analysis pretty badly at this point. Organizations can take big reputational hits for giving bad advice, and potential hallucinations create a lot of surface area for that. Importantly, the upside is quite minimal too. If a user wants to, they can pull up ChatGPT and ask it to act as an 80k advisor. It might do okay (or about as okay as one we tried to develop ourselves would), only it’d be much clearer that we didn’t sanction its output.
People are often surprised that full-time advisors only do ~400 calls/year as opposed to something like 5 calls/day (i.e. 1,300/yr). For one thing, my BOTEC on the average focus time for an individual advisee is 2.25 hours (between call prep, the call itself, post-call notes/research on new questions, introduction admin, and answering follow-up emails). Beyond that, we have to keep up with what’s going on in the world and the job markets we track, as well as skilling up as generalist advisors. There are also more formal systems we need to contribute to, like marketing, impact assessment, and maintaining the systems that get us all the information we use to help advisees and keep that 2.25 hours at 2.25 hours.
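To make the arithmetic concrete, here’s a minimal sketch of that capacity BOTEC. The 2.25 hours/advisee and 400 calls/yr come from above; the 260 working days and 8-hour days are my assumed figures for illustration:

```python
# Rough capacity BOTEC. 2.25 hours/advisee and 400 calls/yr are from the
# comment above; 260 working days and 8-hour days are assumptions.
HOURS_PER_ADVISEE = 2.25   # call prep + call + notes/research + intro admin + follow-up emails
WORKING_DAYS = 260
WORK_HOURS_PER_YEAR = WORKING_DAYS * 8   # ~2,080 hours

# The naive "5 calls/day" intuition:
naive_calls = 5 * WORKING_DAYS                  # 1,300 calls/yr
naive_hours = naive_calls * HOURS_PER_ADVISEE   # 2,925 hours
print(f"5/day -> {naive_hours:,.0f}h of advisee focus time vs ~{WORK_HOURS_PER_YEAR:,}h available")

# Actual volume, which leaves time for tracking job markets, skilling up,
# marketing, impact assessment, and maintaining internal systems:
actual_hours = 400 * HOURS_PER_ADVISEE          # 900 hours
print(f"400/yr -> {actual_hours:,.0f}h, leaving ~{WORK_HOURS_PER_YEAR - actual_hours:,.0f}h for everything else")
```

On these numbers, 5 calls/day would take more advisee focus time than there are hours in a work year, before any of the upkeep work.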
Perhaps surprisingly (and perhaps not as relevant to this audience): take cause prioritization seriously, or more generally, have clarity about your ultimate goals/what you’ll look to in order to know whether you’ve made good decisions after the fact.
It’s very common that someone wants to do X, I ask them why, they give an answer that doesn’t point to their ultimate priorities in life, I ask them “why [thing you pointed to]?”, and they more or less draw a blank/fumble around uncertainly. Granted, it’s a big question, but it’s your life; have a sense of what you’re trying to do at a fundamental level.
Don’t be too fixated on instant impact. Take good opportunities as they come, of course, but people are often drawn towards things that sound good/ambitious for the problems of the moment, even though they might not be best positioned to tackle those things and might burn a lot of future opportunities by doing so. Details will vary by situation of course.
Openness to working in existential risk mitigation is not a strict requirement for having a call with us, but it is our top priority and the broad area we know and think most about. EA identity is not at all a requirement outside the very broad bounds of wanting to do good and being scope-sensitive with regard to that good. Accordingly, I think it’s worth the 10 minutes to apply if you 1) have read/listened to some 80k content and found it interesting, and 2) have some genuine uncertainty about your long-run career. I think 1) + 2) describe a broad enough range of people that I’m not worried about our potential user base being too small.
So, depending on how you define EA, I might be fine with our current messaging. If people think you need to be a multiple-EAG attendee who wears the heart-lightbulb shirt all the time to get a call, that would be a problem, and I’d be interested to know what we’re doing to send that message. When I look at our web content and YouTube ads, for example, I’m not worried about being too narrow.
Speaking just to the little slice of the world I know:
Using a legal research platform (e.g. Westlaw, LexisNexis, Casetext) could be really helpful with several of these. If you're good at thinking up search terms and analogous products/actors/circumstances (3D-printed firearms, banned substances, and squatting on patents are good examples here), there's basically always a case where someone wasn't happy someone else was doing X, so they hired lawyers to figure out which laws were implicated by X and filed a suit/indicted someone, usually on multiple different theories/laws.
The useful part is that courts will then write opinions on the theories, which provide a clear, lay-person explanation of the law at issue, what it's for, how it works, etc., before applying it to the facts at hand. Basically, instead of you having to stare at some pretty abstract, high-level words in a statute/rule and imagine how they apply, a lot of that work has already been done for you, and in an authoritative, citable way. Because cases rely on real facts and circumstances, they also make things more concrete for further analogy-making to the thing you care about.
The downside is that these tools seem to cost at least ~$150/mo, but you may be able to get free access through a university or find other ways to reduce this. Google Scholar's case law search is free, but pretty bad.
1-3 seem good for generating more research questions like ASB's, but the narrower research questions are ultimately necessary to get to impact. 4-8 seem like things EA is over-invested in relative to what ASB lays out here, though that's not to say more couldn't be done there.
"Leadership" and "eco-systems" sound very nice as far as they're described here but I find this post unhelpful as a guide to what "EA" should do.
Assuming this post is addressing EA funders – rather than the collection of diverse, largely uncoordinated people, organizations, and perspectives that 'EA' is – is the claim that funders should open 20 of these offices? Who do they pay to do that and to apply the "high standards" for early membership? What are the standards? Should people have models of the world that distinguish good/bad opportunities and big/small ones? At what point does answering these questions become too analytical?
"Find all the smart altruistic people, point them to each other, give them some money, and let them do what they want" sounds nice, but aren't there hundreds or thousands of organizations interested in funding various projects, not least of which the whole VC industry? My sense is that not analyzing why you might be the funder of last resort, at least a little bit, is a recipe to crash and burn very quickly. $1m/yr/office could feed a handful of people and keep the lights on, but it's not scaling any projects. "EA" doesn't have enough money to last long without a lot of analysis and it's only been around for ~10 years.
People with diverse, niche interests and moxie have had really outsized influence on the world. It's easy to say "go find them," but the ones who will actually make a difference are very, very few and far between, and it takes some analysis to find them. There are a million people in Port-au-Prince and probably hundreds of discernible perspectives on how to make things better there; multiply that across other localities. The Future Fund has 30 categories of ideas they want to pursue. Maybe that's "too small," but they're largely unaddressed and really big in scale. If they wanted to count all wins as equal, I don't doubt they could rack up a lot of very concrete wins and cool stories, but that seems to be what... all the rest of philanthropy is doing. And I'm glad they are!
There's an undergrad econ thing where burning a dollar lowers the price level for all other dollar holders and increases their welfare, but everyone thinks you can do better than that by being more discerning. So just saying "more causes/ideas!" isn't really helpful without some limiting principle.
I appreciate the care and detail here, but would guess that wild animals dwarf everything considered here and present a much more difficult + important question.
How bad forests are per unit of land vs. the corn/soy/wheat fields or cattle ranches that have been replacing them seems like a key question.