Graduate student at Johns Hopkins. Looking for part-time work.
There's a lot of good, older, semi-formal content on the GiveWell blog (https://blog.givewell.org/). If you do some searches, you may find the subject touched on.
I'm not sure whether they have done any formal review of the subject, however.
I don't have anything to add to the intra-cause effectiveness-multiplier debate. But much of the multiplier over the average charity comes simply from very poor cause selection. So while I applaud OP for wanting rigorous empirical evidence, some comparisons simply don't require peer-reviewed studies. We can still reason well in the absence of easy quantification.
Dogs and cats vs. farmed animal causes is a great example, but animal shelters vs. GHD is just as tenable.
This isn't an esoteric point; a substantial share of donations simply goes to bad causes: poverty alleviation in rich countries (not politically or policy directed), most mutual aid campaigns, feeding or clothing the poor in the rich world, most rich-world DEI-related activism lacking political aims (movement building or policy is at least more plausible), most ecological efforts, undirected scholarship funds, the arts.
I'm comfortable suggesting that each of these is at least 1,000x less cost-effective.
Hot take, but political violence is bad and will continue to be bad for the foreseeable future. That's all I came here to say, folks; have a great rest of your day.
Sort of. But claiming that you are an EA organization is at least 80% of what makes you one in the eyes of the public, as well as much of the self-identification among employees. E.g., there's a big difference between a company that happens to be full of Mormons and a company full of Mormons that calls itself "a Mormon company".
No. Just deflect, which, admittedly, is difficult to do, but CEOs do it all the time. Ideally she should have been clear about her own personal relationship with EA, but then moved on. Insofar as she was (or seemed) dishonest here, it didn't help; the Wired article is proof of that.
It's hard to pinpoint a clear line not to cross, but something like "this is an EA company" would be one, as would "we are guided by the values of the EA movement".
No; it's best if individuals are truthful. But presidents of companies aren't just individuals; does that mean they should lie? Still no. It just means they should be careful about whom and what they associate with.

I mentioned an "unnecessary news media firestorm", but the issue is much broader. Anthropic is a private corporation; its fidelity is to its shareholders. "Public benefit" corporation aside, it is a far different entity from any EA non-profit. I'm not an expert, but I think history shows that it is almost always a bad idea for private companies to claim allegiance to anything but the most anodyne social goals. It's bad for the company and bad for the espoused social goals or movement. I'm very much pro-cause-neutrality in EA; the idea that a charity might suddenly realize it isn't effective enough, choose to shut down, and divert all its resources elsewhere? Awesome! Private companies can't do this. Even a little bit of doing this is antithetical to the incentive structure they face.
As for your second response, I agree 100%.
My two cents is that "brand consistency" is interesting, because a club's brand roughly reflects the strain of vegan club it is: whether it's associated with particular activist networks, whether it's more vegetarian than vegan, and so on. The level of inconsistency is also indicative of a lack of coordination across groups.
My experience in university was that the local club was a slightly awkward merger of a social club and a group with a particular activist agenda (very visible demonstrations against animal labs). In that sense, the career-building approach of Alt Protein Projects or the cause agnosticism of EA groups may be better at attracting members. But I'm not sure.
Giving this an "insightful" because I appreciate the documentation of what is indeed a surprisingly close relationship with EA. But also a "disagree", because it seems reasonable to be skittish around the subject ("AI Safety", broadly defined, is the relevant focus; adding more would just set off an unnecessary news media firestorm).
Plus, I'm not convinced that Anthropic has actually engaged in outright deception or obfuscation. This seems like a single, slightly odd sentence from Daniela, nothing more.
I don't think this is an important or interesting question, at least not given the type of disagreement we are seeing here. The scope of the question (and of possible views) is larger than BB seems to acknowledge. At the very least, it is obvious to me that there is a type of realism/objectivity that is:
1. Endorsed by at least some realists, especially those with certain religious views.
2. Ontologically much more significant than BB is willing to defend.
Why ignore this?