David T

425 karma · Joined Dec 2023


I don't think the "3% credence in utilitarianism" is particularly meaningful; doubting the merits of a particular philosophical framework someone uses isn't an obvious reason to be suspicious of them. Particularly not when Sam ostensibly reached similar conclusions to Will about global priorities, and MacAskill himself has obviously been profoundly influenced by utilitarian philosophers in his goals too.

But I do think there's one specific area where SBF's public philosophical statements were extremely alarming even at the time, and he made them whilst in "explain EA" mode. That's when Sam made it quite clear that if he had a 51% chance of doubling world happiness against a 49% chance of ending it, he'd accept the bet. That's a train to crazytown not many utilitarians would jump on, and also one which sounds a lot like how he actually approached everything.

Then again, SBF isn't a professional philosopher and never claimed to be; other people have said equally dumb things without gambling away billions of other people's money; and I'm not sure MacAskill himself would even have read or heard Sam utter those words.

I also didn't vote but would be very surprised if that particular paper - a policy proposal for a biosecurity institute in the context of a pandemic - was an example of the sort of thing Oxford would be concerned about affiliating with (I can imagine some academics being more sceptical of some of the FHI's other research topics). Social science faculty academics write papers making public policy recommendations on a routine basis, many of them far more controversial.

The postmortem doc says "several times we made serious missteps in our communications with other parts of the university because we misunderstood how the message would be received" which suggests it might be internal messaging that lost them friends and alienated people. It'd be interesting if there are any specific lessons to be learned, but it might well boil down to academics being rude to each other, and the FHI seems to want to emphasize it was more about academic politics than anything else.

I think a dedicated area would minimise the negative impact on people who aren't interested whilst potentially adding value (to prospective applicants in understanding what did and didn't get accepted, and possibly also to grant assessors if commenters occasionally offered additional insight).

I'd expect there would be some details of some applications that wouldn't be appropriate to share on a public forum, though.

I think the combination of a bottom-up approach, with local communities proposing their own improvements, and EA-style rigorous quantitative evaluation (which, like you say, would be best undertaken by evaluators based in similar LMICs) is potentially really powerful, and I'm not sure to what extent it's already been tried in mainstream aid.

Or possibly even better from a funding perspective, turn that round and have an organization that helps local social entrepreneurs secure institutional funding for their projects (a little bit like Charity Entrepreneurship). Existing aid spend is enormous, but I don't think it's easy for people like Antony to access.

I also think there's the potential for interesting online interaction between the different local social entrepreneurs (especially those who have already part-completed projects with stories to share), putative future donors and other generally interested Westerners who might bring other perspectives to the table.  I'm not sure to what extent and where that happens at the moment.

I’d also extend this to people who have strong skills and expertise that aren't obviously convertible into ‘working in the main EA cause areas’.

I think this is a key part. "Main EA cause areas" does centre a lot on a small minority of people with very specific technical skills and the academic track record to participate (especially if you're taking guidance from 80,000 Hours on that front).

But people can have a lot of impact in areas like fundraising with a completely different skillset (one less likely to benefit from a quantitative degree from an elite university), or earn well enough to give a lot without having any skills in research report writing, epidemiology or computer science.

And if your background isn't one that the "do cutting-edge research or make lots of money to give away" advice is tailored to at all, there are a lot of organizations doing a lot of effective good that really, really need people with the right motivations allied to less niche skillsets. So I don't think people should feel they're not a 'success' if they end up doing GHD work rather than paying for it, and if their organization isn't particularly EA-adjacent they might have more scope to positively influence its impactfulness.

Also, people shouldn't label themselves mediocre :) 

I think everyone agrees that it's harder to do cost-effectiveness analysis for speculative projects than for disease prevention, and that any longtermist cost/benefit analysis is going to have a lot more scope for debate on the numbers. But it is also harder to do cost-effectiveness analysis in terms of lives saved for other GHD measures like rural poverty alleviation (though if this project affects malnutrition it might actually be amenable to GiveWell-style analysis).

I think ultimately, if every marginal dollar proposed to be spent on GHD has to demonstrate reasoning as to why it's as good as or better than AMF at the margin, it's only fair to demand similar transparency for community building and longtermist initiatives (with an acceptance of wider error bars).[1] Especially since there's a marked tendency for the former to be outsider organizations and the latter to be organizations within the EA network...

I make no comment either way about the particular viability of this project. And I'd actually be quite interested in your more detailed thoughts on it, as whilst you're not an expert on farming you clearly have in depth knowledge of Uganda.

  1. ^

At the risk of boring on about Wytham, the bar seemed to be that it was net positive given that lots of OpenPhil money was being directed to conference venues, not that it was better than buying a marginally inferior venue for a lot less money and donating the rest to initiatives that could save lives.

This feels like a good example of how GPT can generate coherent and topic-relevant prose which, on a deeper level, doesn't actually make much sense.

If there are lots of "bidders" wanting to fund something, a charity or research project will normally want to accept funding from all of them, not just a "winner".

And OpenPhil exists to donate money to causes it believes are most effective and neglected, so picking projects that already have funding secured [in competition with the original funders] seems like a strange way to go about it. 

OK, I guess the tone of my original reply wasn't popular (which is fair enough).

The OP raised the subject of a non-trivial proportion of people perceiving EA as a 'phyg' as a problem, and suggested with moderately high confidence that transitioning to a "professional association" would radically reduce this. I'm not seeing it. Plenty of groups recruiting students brand themselves "movements" for "doing good" in some general way whilst being relatively unlikely to be accused of being cults (climate change and civil/animal rights activists, fair-traders, volunteering groups, etc.).

And I suspect far more people would say the International Association of Scientologists and the Association of Professional Independent Scientologists, which both adopt the structure and optics of professional membership bodies, are definitely cults. (Obviously there are many more reasons to consider Scientology a cult, but if anything I'd think the belief-system-under-a-professional-veneer approach looks more suspicious rather than less. At any rate, forming professional membership bodies definitely isn't something actual cults don't do.)

So if people are perceiving EA as a cult, it's probably their reaction - justified or otherwise - to other things: some of which might be far too important to dispense with, like Giving Pledges and concern about x-risk, and some of which might be easily avoided, like reading from scripts (and yes, substituting insider jargon like 'phyg' for ordinary words). Other ways to dispel accusations that EA is a cult (if it is indeed a problem) feel like the subject for an entirely different debate, but I'd genuinely be interested in counter-arguments from anyone who thinks I'm wrong and changing the organizational structure is the key.

Even the Hanania article you linked to, entitled "Diversity Is Our Strength", contains as one of its core arguments the suggestion that Hispanic immigrants might be won over to his support for "war with civil rights law" by "comparing them favorably to genderfluid liberals and urban blacks".

The next sentence links to one of his own tweets about how "selling immigrants on hating liberals would be the easiest thing in the world", featuring a video of Muslims protesting in favour of LGBT book bans.

Perhaps you don't find this style of politics repugnant; perhaps it even represents a marginal improvement on his prior beliefs. But I don't think it's one EA should be endorsing.

Answer by David T · Apr 09, 2024

Is continued membership of the local group at all necessary for the career? Ultimately, if you don't like these people, you probably don't want to network with them for the best possible chance of working alongside them. I know some EA cause areas are niche, but there are generally more people working in them who won't be attending your local group than who are, and ultimately developing your technical skills and getting good references from your colleagues will matter more.
