Feels like tractability is the key point here. It doesn't matter a huge amount whether 7 billion is or isn't the total number of animals that would counterfactually be saved if all pets were fed vegan diets.[1]
What matters is what change can feasibly be achieved by a marginal campaign or food innovation, given that vegan pet food is already a thing which I suspect most vegans are aware of, and most pet owners are not vegans. Also, many vegans are comfortable feeding their pets (or in the case of one person I know, an entire zoo) with omnivorous or carnivorous diets.
I suspect the returns to campaigning would look like marginal returns to vegan advocacy and meat alternatives research for humans, but it feels like this is where the evidence would be most interesting.
The order of magnitude seems plausible when considering how many more animals free-ranging domestic cats alone are estimated to kill...
However, I think it misses a big contribution of HIA: demonstrating the absence of a need to risk everything on AGI.
I don't think this is a real contribution. I don't think people are trying to make AGI because they are concerned that there will be an insufficient number of high IQ humans alive in the next few decades. I think they're trying to make it because they think they can.
And also because they [rightly or wrongly] believe that AGI will be more cost effective, more controllable, need less sleep and have higher problem solving potential than even the smartest possible humans. And be here a lot sooner. (And in some of the AGI fantasies, a route to making humans genetically smarter anyway!)
-
Even if one assumes near-term "AGI" has a fairly low ceiling,[1] "intelligence augmentation" seems unpromising as an EA intervention.[2] The necessary research is complex, expensive, and long-term, and depends not just on germline engineering but on academic research to understand what intelligence is in less shallow terms than we currently do. It's not clear that there are individually tractable interventions. The quantifiable impact - if it actually worked - would presumably be a tiny proportion of people sufficiently rich and focused on maximising their offspring's intelligence paying to select a few genes somewhat correlated with intelligence for "designer babies", with the possibility that this might translate into real-world outcomes enough to turn a handful of children with already above-average prospects into particularly capable and influential individuals. It is not obvious these children will grow up to use their greater talent (real or perceived) for mitigating existential risk or any other sort of greater good.[3] Humans with rich, driven parents who've been taught about their superiority to ordinary humans from birth don't sound immune to "alignment problems" either....
As far as germline engineering goes, the more obviously positive quantifiable impacts would be addressing debilitating genetic conditions, where at least we can be confident that the expensive and risky process could alleviate some suffering.
In this case Anthropic chose to supply the DoW via a partnership with a company deeply embedded in the administration's part of the political spectrum, and even pointedly denied any objections to being used to support the administration's little expedition in Venezuela, and the administration decided that wasn't enough. There are many criticisms that can be made of Anthropic's stance on those issues; reluctance to engage with the current US administration isn't one of them.
If declining to actively support MAGA's demands they support development of AI with the explicit purpose of being an autonomous killing device is "virtue signalling", what's left of "AI alignment" to pursue?
A useful post and an interesting starting point for further discussion.
A few more that spring to mind:
It would be interesting to hear some of the more specialized ones used by organizations like GiveWell and Rethink Priorities, which evaluate a lot of research papers in particular fields.
n.b. on the SMC vs ITN example, I'm fairly confident the answer is that the ITNs are a baseline directly comparable to "no treatment", as they shouldn't affect the progression from bite to symptomatic malaria infection targeted by SMC at all; they simply reduce the frequency of bites (but not to zero if the sample size is sufficiently large). Prevalence of malarial bites varies between cohorts in "no treatment" studies already. Access to some level of treatment after the fact (HMM) isn't a problem of study construction either; it complicates comparing severe malaria or death statistics with papers where sufferers may have had no treatment at all, but if anything would probably reduce the reported effect size for SMC. Medical ethics means the appropriate baseline/comparator for lifesaving treatment usually isn't "do absolutely nothing", it's "do the [next] best alternative".
I suspect that any resolution to this dispute is likely to be a lot less public than the OpenAI one.
It's fairly obvious though that Amodei is signalling the company didn't object to the use of Claude to support the Venezuela operation, and that the company freely chose to be a defense contractor with a formal partnership with Palantir when they had plenty of other revenue/capital sources...
It seems like most of the additional coefficients you've added are impossible to estimate with any degree of confidence, particularly when the impact could plausibly be negative. Whether or not that was the intention, that is the main message I get from your formulation.
As someone who is not a strong longtermist, I note that an advantage of using non-longtermist heuristics to evaluate impact is that identifying whether an action appears robustly positive for aggregate utility [for humans] on earth up to time t is much easier than anticipating the effect on the Virgo supercluster after time t.
(A more sophisticated approach might use discounting for temporal and extreme spatial distance rather than time bounding, but amounts to the same thing; attaching zero weight to the estimated impact of my actions on the Virgo supercluster a thousand years from now)
EAs were also warning, for a long time, about the importance of health aid sent overseas. In contrast, non-EA leftists were more likely to call these institutions colonialist and call EAs racist for neglecting domestic political issues. But when Trump got elected, we were vindicated in the worst of ways: he destroyed much of USAID, and this was the single act in his presidency that led to the most deaths.
This feels like a strange argument to make, and one which seems to be trying way too hard to find evidence of vindication even in failures, which ironically is the opposite of what people with good epistemics should be doing. EAs were criticised [by critics whose arguments greatly varied in quality] for tending to treat international aid as primarily an optimization problem best addressed by small specialist charities and individuals maximising their donations, and largely ignoring the political dimension.
Then domestic political issues killed government programmes funding traditional Big Aid multinational aid agencies[1] with the stroke of a pen[2] and did far more damage than EA philanthropy is able to repair.
Directionally, that's the opposite of a validation of EA orthodoxy on aid.
I don't think neglecting the politics of whether aid actually gets disbursed is a strong argument against EA either - not least because I don't think EAs would have been able to dissuade people from voting Trump even if they'd made it their leading cause area, or convince Trump/Musk that foreigners' lives mattered - but it's definitely not one where the "don't neglect politics" and "actually big programs that aren't quite as good as AMF are still really good" critics can be said to have lost the argument.
(programs not run by EAs, but admired by some of them for their results)
For added irony, the person who gleefully signed those death warrants was at least superficially EA-adjacent enough to have enthusiastically endorsed MacAskill's writing and funded a couple of longtermist organizations in the past.
It would probably be worthwhile to encourage legally binding versions of the Giving Pledge in general.
Donations before death are optimal, but it's particularly easy to ensure the pledge is met at that stage with a will, which can be updated at the time of signing the pledge. (I presume most of the 64% did have a will, but chose to leave their fortune to others. I guess it's possible some fortunes inherited by widow[er]s will be donated to pledged causes in the fullness of time.)
I don't think this should replace the Giving Pledge; some people's intentions and financial situations are too complex to write into a binding contract, but such pledges should be taken more seriously (even though in practice they are still likely to be reversible).
Meta is paying billions of dollars to recruit people with proven experience at developing relevant AI models.
Does the set of "people with proven experience in building AI models" overlap with "people who defer to Eliezer on whether AI is safe" at all? I doubt it.
Indeed, given that Yudkowsky's arguments on AI are not universally admired, and that people who have chosen to make a career of building the thing he says will make everybody die are particularly likely to be sceptical of his convictions on that issue, an endorsement might even be net negative.
People empathise with ChatGPT transcripts too; the key distinction is between empathising with a wide range of physiological and behavioural cues and empathising with a narrow one. For all its flaws, observing whether entities are like us seems a more plausible way of establishing likely consciousness than aptitude for symbolic manipulation. If our judgements were disintermediated by a computer, replacing the biases introduced by cuteness and non-verbal expressiveness with biases introduced by symbolic manipulation, humans would without exception rate a pocket calculator or an Eliza-style toy script as more likely to be conscious than a dog or a two-year-old child. I don't think anyone sincerely believes this to actually be the case. A corollary of this is that facility with human language and mathematics - something most entities considered to be conscious do not possess - is not a particularly good standalone proxy for consciousness, even if the entity under examination is much, much better at it than Eliza.
Unless we're positing dualism, what we perceive as consciousness is an emergent property of complex chemical processes rooted in our biology (and in the imperatives of our biology to survive and self-replicate). That's the case whether we empathise with other entities that share this biology, dispassionately analyse the likelihood we evolved from a common ancestor, or torture them into demonstrating similar stress-hormone responses to humans. That doesn't necessarily mean warm-blooded DNA-replicating machines with limbs and fur and cute eyes are the only possible form of consciousness, but it is something we have and current "AI" doesn't have even a loose analogue of. And yes, obsession with symbolic manipulation is doing an awful lot of work to explain why people are concerned about the consciousness of assemblies of silicon chips running specific software whilst disregarding the possible sentience of more complex and interesting long-running processes like forests, rivers or planetary systems, or indeed larger assemblies of silicon chips running software that doesn't transform human text into cute replies.