I am an attorney in a public-sector position not associated with EA, and I cannot provide legal advice to anyone. My involvement with EA so far has been mostly limited to writing checks to GiveWell and other effective charities in the Global Health space, as well as some independent reading. I have occasionally read the Forum and was looking for ideas for year-end giving when the whole FTX business exploded . . .
As someone who isn't deep in EA culture (at least at the time of writing), I may be able to offer a perspective on how the broader group of people with sympathies toward EA ideas might react to certain things. I'll probably make some errors that would be obvious to other people, but sometimes a fresh set of eyes can help bring a different perspective.
"AIDS related deaths in the next five years will increase by 6.3 million" if funding is not restored, UNAIDS executive director Winnie Byanyima said.
This is a quote from a BBC news article, mainly about US political and legal developments. We don't know what the actual statement from the ED said, but I don't think there's enough here to infer fault on her part.
For all we know, the original quote could have been something like predicting that deaths will increase by 6.3 million if we can't get this work funded -- which sounds like a reasonable position to take. Space considerations being what they are, I could easily see a somewhat more nuanced quote being turned into something that sounded unaware of counterfactual considerations.
There's also an inherent limit to how much fidelity can be communicated through a one-sentence channel to a general audience. We can communicate somewhat more in a single sentence here on the Forum, but the ability to make assumptions about what the reader knows helps. For example, in the specific context here, I'd be concerned that many generalist readers would implicitly adjust for other funders picking up some of the slack, which could lead to double-counting of those effects. And in a world that often doesn't think counterfactually, other readers' points of comparison will be with counterfactually-unadjusted numbers. Finally, a fair assessment of counterfactual impact would require the reader to understand DALYs or something similar, because at least a fair portion of the mitigation is likely to come from pulling public-health resources away from conditions that do not kill so much as they disable.
So while I would disagree with a statement from UNAIDS that actually said "if the U.S. doesn't fund PEPFAR, 6.3MM more will die," I think there would be significant drawbacks and/or limitations to other ways of quantifying the problem in this context, and I think using the 6.3MM number in a public statement could be appropriate if the statement itself were worded carefully.
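To make the double-counting worry concrete, here is a minimal sketch -- every input is hypothetical and none comes from UNAIDS or any real model -- of how a headline figure shrinks once you adjust for other funders picking up part of the slack:

```python
# Hypothetical illustration of counterfactual adjustment -- none of these
# inputs come from UNAIDS or any real epidemiological model.
headline_excess_deaths = 6_300_000   # unadjusted five-year projection if funding lapses
replacement_fraction = 0.30          # hypothetical share of lost funding other donors backfill

# Strong simplification: deaths averted scale linearly with funding.
adjusted_excess_deaths = headline_excess_deaths * (1 - replacement_fraction)
print(f"Counterfactually adjusted: {adjusted_excess_deaths:,.0f} excess deaths")
# -> Counterfactually adjusted: 4,410,000 excess deaths
# A reader who mentally applies this discount to a figure that was *already*
# adjusted would be double-counting the offset.
```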
But I don't think people blindly defer to evaluators in other life domains either -- or at least they shouldn't for major decisions. For instance, there are fraudulent university accreditation agencies, non-fraudulent ones with rather low standards, ones with standards that are pretty orthogonal to whether you'll get a good education, and so on.
I suggest that people more commonly rely on a social web of trust -- one strand might be: College X looks good in the US News rankings, and I trust the US News rankings because the school guidance counselor thought them reliable, and the guidance counselor's reputation in the broader school community is good. In reality, there are probably a couple of additional strands coming out from US News (e.g., my friends were talking about it) and from College X (e.g., I read about some successful alumni). So there's a broader web to justify the trust placed in the US News evaluation, buttressed by sources in which the decisionmaker already had some confidence. Of course, the guidance counselor could be incompetent, my friends could be ill-informed, and most schools have at least a few successful alumni. But people don't have the time or energy to validate everything!
My guess is that for many people, GiveWell doesn't have the outgoing linkages that US News does in my example. And it has some anti-linkages -- e.g., one might be inclined to defer to Stanford professors, and of course one of them had some harsh things (unjustified, in my opinion) to say about GiveWell. That criticism comes up in the AI overview when I googled "criticisms of GiveWell," so it's fairly low-hanging fruit that would likely surface on even modest due diligence.
I'd also note that independence cannot be assumed and must either be taken on trust (probably through a web of trust) or sufficiently proven (which requires a fair amount of drilling).
My guess is that GiveWell is simply not enmeshed in John's web of trust the way it is in yours or mine. Making and sustaining a widely trusted brand is hard, so that's not surprising.
"I'm a bit disappointed, if not surprised, with the community response here."
I can't speak for other voters, but I downvoted due to my judgment that there were multiple critical assumptions that were both unsupported / very thinly supported and pretty dubious -- not because any sacred cows were gored. While I don't think main post authors are obliged to be exhaustive, the following are examples of significant misses in my book:
It's important to not reflexively defend sacred cows or downvote those who criticize them . . . but one can believe that while also believing this post seriously missed the mark and warrants downvotes.
Yes and no -- the only concrete thing I see @WillieG having done was "sign[ing] letters of recommendation for each employee, which I later found out were used to pad visa applications."
I would consider refusing to write a letter of recommendation over "brain drain" concerns to go beyond merely declining to fund emigration efforts. I'd view it as akin to a professor refusing to write a recommendation letter for a student because the professor thought the graduate program to which the student wanted to apply was a poor use of resources (e.g., underwater basketweaving). Providing references for employees and students is an implied part of the role; vetoing the employee's or student's preferences based on the employer's or professor's own views is not.
In contrast, I would agree with your frame of reference if the question were whether the EA employer should help fund emigration and legal fees, and so on.
Who said we should "PaNdEr" to conservatives? That reads like a caricature of the recent post on the subject. If you're claiming that there is a pro-pandering movement afoot, please provide evidence and citations to support your assertion.
I think the significant majority of people here -- including me! -- are somewhere between unhappy and extremely upset over yesterday's events, but that doesn't justify caricaturing good-faith posts. If you have a concrete, actionable idea about how we should respond to those events, that would make for a more helpful post.
Good observation -- most of the drop in the number of new donors came in 2022, but little of the drop in the amount of donations from new donors happened then [$43.4MM (2021) vs. $41.1MM (2022) vs. $20.5MM (2023)]. Because of their sheer numbers, the bulk of the 2021 --> 2022 drop in donor count was almost certainly people giving under $1,000, which is somewhat less concerning to me given the small percentage of GiveWell's revenue that donations under $1K provide (less than 3%). There are a good number of donors in the $1K-$10K range, but they did not show a significant decline overall between 2021 and 2022.
Presumably, the 2022 --> 2023 drop in revenue involved the loss of new higher-dollar donors. My assumption is that higher-dollar donors act somewhat differently than others (e.g., I expect they engage in more due diligence / research, on average, than those donating under $1,000). So it's plausible to me that the 2021 --> 2022 decline in donor numbers and the 2022 --> 2023 decline in dollar volume do not share very similar causes. I'd guess FTX might hit higher-dollar new donors harder because of the extra due diligence.
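As a quick back-of-the-envelope check, here is a minimal sketch computing the year-over-year changes implied by the dollar amounts quoted above (no inputs beyond those three figures):

```python
# Back-of-the-envelope check of the new-donor dollar figures quoted above ($MM).
new_donor_funds = {2021: 43.4, 2022: 41.1, 2023: 20.5}

years = sorted(new_donor_funds)
for prev, curr in zip(years, years[1:]):
    change = new_donor_funds[curr] / new_donor_funds[prev] - 1
    print(f"{prev} --> {curr}: {change:+.1%}")
# 2021 --> 2022: -5.3%   (donor count fell sharply, but dollars barely moved)
# 2022 --> 2023: -50.1%  (the roughly 50% dollar drop discussed below)
```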
The following chart is for all donors, not new ones:
The other number I found potentially concerning was the 50% year-over-year drop in funds from new non-anonymous donors (p. 10 of the 2023 metrics report; see paste below). Funds from new non-anonymous donors in 2021 were slightly higher than in 2022 per the 2022 metrics report, so the prior year wasn't the anomaly.
I don't want to over-update on a single year's Y/Y difference, but my concern would grow if 2024 ended up similar to 2023.
I would not have predicted much effect of the FTX affair on GiveWell's new donor acquisition, but it's possible that played a role.
You seem to be assuming that the primary harm of malaria deaths and (conditioned on "fetuses counted as people") of abortion is the suffering that children and fetuses experience when dying of malaria and abortion, respectively. That's an unusual assumption; I think most people would identify the primary harm as the loss of the ability to live out the rest of the child's or fetus's life.
So I think you're missing a step of either (1) explaining why your implied assumption above is correct, or (2) comparing human loss-of-life to chicken suffering rather than suffering to suffering as your infographic does. (In the world where factory farming ended, these chickens would likely not exist in the first place, so I wouldn't include a loss-of-enjoyable-life factor on the chicken side of the equation).
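If it helps, here is a minimal sketch of option (2); every constant is a placeholder I invented rather than an estimate from any source, and the point is only that the human side of the ledger carries a foregone-life term while the chicken side does not:

```python
# All weights below are invented placeholders, not estimates anyone endorses;
# what matters is the *shape* of the comparison, not the outputs.
LIFE_YEARS_LOST_PER_CHILD_DEATH = 60   # hypothetical remaining life expectancy
CHICKEN_SUFFERING_YEARS_EACH = 0.1     # hypothetical severe-suffering duration
CHICKEN_TO_HUMAN_MORAL_WEIGHT = 0.002  # hypothetical cross-species weight

def human_cost(deaths: int) -> float:
    # Loss-of-life framing: count foregone life-years,
    # not just dying-process suffering.
    return deaths * LIFE_YEARS_LOST_PER_CHILD_DEATH

def chicken_cost(chickens: int) -> float:
    # Suffering-only framing: no foregone-life term, since these
    # chickens would not exist absent factory farming.
    return chickens * CHICKEN_SUFFERING_YEARS_EACH * CHICKEN_TO_HUMAN_MORAL_WEIGHT
```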
The usefulness of smart people is highly dependent on the willingness of the powers-that-be to listen to them. I don't think lack of raw intelligence had much of anything to do with the recent US electoral results. The initial candidate at the top of the ticket was not fit for a second term, and was forced out too late for a viable replacement to emerge. Instead, we got someone who had never polled well. I also don't think intelligence was the limiting factor in the Democrats' refusal to move toward the center on issues that were costing them votes in the swing states. Intellectually understanding that it is necessary to throw some of your most loyal supporters under the bus is one thing; committing to do it is something else; and actually getting it done is harder still. One could think of intelligence as a rate-limiting catalyst up to a certain point, but dumping even more catalyst in after that point doesn't speed the reaction much.
I think @titotal's critique largely holds if one models EAs as a group as exceptional in intelligence but roughly at population baseline for more critical and/or rate-limiting elements for political success (e.g., charisma, people savvy). I don't think that would be an attack -- most people are in fact broadly average, and average people would be expected to fail against Altman, etc. And if intelligence were mostly neutralized by the powers-that-be not listening to it, having a few hundred FTEs (i.e., ~10% of all EA FTEs?) with a roughly normal distribution of key attributes is relatively unlikely to be impactful.
Finally, I think this is a place where EA's tendency toward monoculture hurts -- for example, I think a movement that is very disproportionately educationally privileged, white, STEM-focused, and socially liberal will have a hard time understanding why (e.g.) so many Latino voters [most of whom share few of those characteristics] were going for Trump this cycle and how to stop that.