I'd love to see an 'Animal Welfare vs. AI Safety/Governance Debate Week' happening on the Forum. The AI risk cause area has grown massively in importance in recent years and has become a priority career choice for many in the community. At the same time, the Animal Welfare vs Global Health Debate Week demonstrated just how important and neglected the cause of animal welfare remains. I know several people (including myself) who are uncertain or torn about whether to pursue careers focused on reducing animal suffering or on mitigating existential risks from AI. It would help to have rich discussions comparing both causes' current priorities and bottlenecks, and a debate week would hopefully surface some useful crucial considerations.
Some recent virology and aerosol science research[1][2] might support an ever-so-slightly higher real cost of atmospheric CO2 and, more practically, an even stronger case for ventilation indoors with respect to biosecurity and pandemics.
Basically, ambient CO2 concentrations have a direct effect on how long aerosolized droplets containing SARS-CoV-2, and probably some other pH-sensitive viruses, remain infectious. This is due to the presence of bicarbonate in the aerosol, which leaves the droplet as CO2. Consider the following equilibrium[2] and then recall or review Le Chatelier's principle from chemistry.
H+(aq) + HCO3−(aq) ⇌ H2CO3(aq) ⇌ CO2(g) + H2O(l)
More CO2 in the surrounding air shifts the equilibrium to reduce the net loss of CO2 from the aerosol, slowing the rate at which the pH increases, thereby slowing the rate at which the aerosol loses its infectivity (this virus doesn't do well in a high-pH environment).
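A back-of-the-envelope way to see the direction of this effect is the carbonic acid equilibrium: higher ambient CO2 means more dissolved CO2 (Henry's law), which holds the droplet's equilibrium pH lower. Here's a minimal Python sketch using textbook constants for 25 °C, ignoring the droplet's other buffering components (so the absolute numbers are illustrative only, not taken from the paper):

```python
import math

# Textbook constants at 25 C (illustrative assumptions, not from the paper):
K_H = 0.034      # Henry's law constant for CO2, mol/(L*atm)
K_A1 = 4.45e-7   # first acid dissociation constant of carbonic acid

def droplet_ph(co2_ppm: float) -> float:
    """Equilibrium pH of pure water in contact with air at a given CO2 level.

    Uses [CO2(aq)] = K_H * pCO2 (Henry's law) and the weak-acid
    approximation [H+] = sqrt(K_A1 * [CO2(aq)]) for otherwise pure water.
    """
    p_co2 = co2_ppm * 1e-6       # partial pressure of CO2 in atm
    co2_aq = K_H * p_co2         # dissolved CO2, mol/L
    h_plus = math.sqrt(K_A1 * co2_aq)
    return -math.log10(h_plus)

# Outdoor air (~420 ppm) vs. a poorly ventilated room (~3000 ppm):
print(droplet_ph(420))   # approx. 5.6
print(droplet_ph(3000))  # lower: more dissolved CO2 keeps the pH down
```

Real respiratory droplets are far richer chemically (they are buffered and evaporating), but the sign of the effect matches the paper: elevated ambient CO2 keeps the droplet pH lower, which a pH-sensitive virus tolerates better.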
To get a sense of the magnitude of the effect, Figure 2B[2] and its caption are simple and illustrative: "The effect that an elevated concentration of CO2 has on the decay profile of the Delta VOC and original strain of SARS-CoV-2 at 90% RH. Inset is simply a zoom-in of the first 5 min of the x-axis. Elevating the [CO2(g)] results in a significant difference in overall decay assessed using a one-sided, two-sample equal variance, t-test (n = 188 (independent samples)) of the Delta VOC from 2 min onward, where the significance (p-value) was 0.007, 0.027, 0.020 and 0.005 for 2, 5, 10 and 40 min, respectively." Other figures show differing results for other variants, which seem to have different levels of pH-sensitivity.
This acts in addition to, and should not be confused with, the generally more important (as far as I know) fact that indoor CO2 readings serve as a proxy for the proportion of rebreathed air, and thus for aerosol concentrations, in the absence of active air filtration.
An interesting research direction would be to look at likely future
Of the 1,500 climate policies implemented over the past 25 years, the 63 most successful are identified in this article (which I don't have access to, but a good summary is here). Those 63 policies reduced emissions by between 0.6 and 1.8 billion metric tonnes of CO2. Effects of the size the 63 most effective policies achieved could close the emissions gap by 26%-41%. Pricing is most effective in developed countries, while regulations are the most effective policies in developing countries. The climate policy explorer shows the best policies for different countries and sectors. I just wanted to post this in case EAs who are interested in climate change and policy have missed it.
Kind regards,
Ulf Graf
I just read Stephen Clare's excellent 80k article about the risks of stable totalitarianism.
I've been interested in this area for some time (though my focus is somewhat different) and I'm really glad more people are working on this.
In the article, Stephen puts the probability that a totalitarian regime will control the world indefinitely at about 1 in 30,000. My probability on a totalitarian regime controlling a non-trivial fraction of humanity's future is considerably higher (though I haven't thought much about this).
One point of disagreement may be the following. Stephen writes:
This is not clear to me. Stephen most likely understands the relevant topics far better than I do, but I worry that autocratic regimes often seem to cooperate. This has happened historically—e.g., Nazi Germany, fascist Italy, and Imperial Japan—and also seems to be happening today. My sense is that Russia, China, Venezuela, Iran, and North Korea have formed some type of loose alliance, at least to some extent (see also Anne Applebaum's Autocracy Inc.). Perhaps this doesn't apply to strictly totalitarian regimes (though it did for Germany, Italy, and Japan in the 1940s).
Autocratic regimes control a non-trivial fraction (like 20-25%?) of World GDP. A naive extrapolation could thus suggest that some type of coalition of autocratic regimes will control 20-25% of humanity's future (assuming these regimes won't reform themselves).
Depending on the offense-defense balance (and depending on how people trade off reducing suffering/injustice against other values such as national sovereignty, non-interference, isolationism, personal costs to themselves, etc.), this arrangement may very well persist.
It's unclear how much suffering such regimes would create—perhaps there would be fairly little; e.g. in China, ignoring political prisoners, the Uyghurs, etc., most people are probably doing fairly well (though a lot of people in, say, Iran aren't doing too well, see more below).
An idea that's been percolating in my head recently, probably thanks to the EA Community Choice, is more experiments in democratic altruism. One of the stronger leftist critiques of charity revolves around the massive concentration of power in a handful of donors. In particular, we leave it up to donors to determine if they're actually doing good with their money, but people are horribly bad at self-perception and very few people would be good at admitting that their past donations were harmful (or merely morally suboptimal).
It seems clear to me that Dustin & Cari are particularly worried about this, and Open Philanthropy was designed as an institution to protect them from themselves. However, (1) Dustin & Cari still have a lot of control over which cause areas to pick, and sort of informally defer to community consensus on this (please correct me if I have the wrong read on that), and (2) although it was intended to, I doubt it can scale beyond Dustin & Cari in practice. If Open Phil were funding harmful projects, it could rely only on the diversity of its internal opinions to catch that; and those opinions are subject to a self-selection effect in applying to OP, and also to an unwillingness to criticise one's employer.
If some form of EA were to be practiced on a national scale, I wonder if it could take the form of an institution which selects cause areas democratically, and has a department of accountable fund managers to determine the most effective way to achieve those. I think this differs from the Community Choice and other charity elections because it doesn't require donors to think through implementation (except through accountability measures on the fund managers, which would come up much more rarely), and I think members of the public (and many EAs!) are much more confident in their desired outcomes than their desired implementations; in this way, it reflects how political elections take place in practice.
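As a toy illustration of the split described above (entirely hypothetical names and numbers), cause-area selection could be as simple as allocating a pooled budget in proportion to votes, with implementation then delegated to each area's fund managers:

```python
def allocate_budget(votes: dict[str, int], budget: float) -> dict[str, float]:
    """Split a pooled budget across cause areas in proportion to vote counts.

    Voters only pick cause areas; implementation within each area is
    delegated to that area's accountable fund managers.
    """
    total = sum(votes.values())
    if total == 0:
        raise ValueError("no votes cast")
    return {area: budget * count / total for area, count in votes.items()}

# Hypothetical example: 1,000 voters, a $10M pool.
allocation = allocate_budget(
    {"global_health": 450, "animal_welfare": 300, "ai_safety": 250},
    10_000_000,
)
# global_health gets $4.5M, animal_welfare $3.0M, ai_safety $2.5M
```

A real mechanism would of course need much more (eligibility rules, vote aggregation that resists strategic splitting of cause areas, accountability measures on the managers), but the division of labour is the point: the public expresses outcomes, specialists handle implementation.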
In the near-term, EA could bootstrap such a fun
The meat-eater problem is under-discussed.
I've spent more than 500 hours consuming EA content and I had never encountered the meat-eater problem until today.
https://forum.effectivealtruism.org/topics/meat-eater-problem
(I had sometimes thought about the problem, but I didn't even know it had a name)
Given that effective altruism is "a project that aims to find the best ways to help others, and put them into practice"[1] it seems surprisingly rare to me that people actually do the hard work of:
1. (Systematically) exploring cause areas
2. Writing up their (working hypothesis of a) ranked or tiered list, with good reasoning transparency
3. Sharing their list and reasons publicly.[2]
The lists I can think of that do this best are those by 80,000 Hours, Open Philanthropy, and CEARCH.
Related things I appreciate, but aren't quite what I'm envisioning:
* Tools and models like those by Rethink Priorities and Mercy For Animals, though they're less focused on explanation of specific prioritisation decisions.
* Longlists of causes by Nuno Sempere and CEARCH, though these don't provide ratings, rankings, and reasoning.
* Various posts pitching a single cause area and giving reasons to consider it a top priority without integrating it into an individual or organisation's broader prioritisation process.
There are also some lists of cause area priorities from outside effective altruism / the importance, neglectedness, tractability framework, although these often lack any explicit methodology, e.g. the UN, World Economic Forum, or the Copenhagen Consensus.
If you know of other public writeups and explanations of ranked lists, please share them in the comments![3]
1. ^
Of course, this is only one definition. But my impression is that many definitions share some focus on cause prioritisation, or first working out what doing the most good actually means.
2. ^
I'm a hypocrite of course, because my own thoughts on cause prioritisation are scattered across various docs, spreadsheets, long-forgotten corners of my brain... and not at all systematic or thorough. I think I roughly:
- Came at effective altruism with a hypothesis of a top cause area based on arbitrary and contingent factors from my youth/adolescence (ending factory farming),
Often people post cost-effectiveness analyses of potential interventions, which invariably conclude that the intervention could rival GiveWell's top charities. (I'm guilty of this too!) But this happens so frequently, and I am basically never convinced that the intervention is actually competitive with GiveWell's top charities. The reason is that they compare ex-ante cost-effectiveness (where you make a bunch of assumptions about costs, program delivery mechanisms, etc.) with GiveWell's calculated ex-post cost-effectiveness (where the intervention has already been delivered, so there are far fewer assumptions).
Usually, people acknowledge that ex-ante cost-effectiveness is less reliable than ex-post cost-effectiveness. But I haven't seen any acknowledgement that ex-ante estimates systematically overestimate cost-effectiveness, because people who are motivated to pursue an intervention are going to be optimistic about unknown factors. Also, many costs are "unknown unknowns" that you might only discover after implementing the project, so leaving them out understates costs. (Also, the planning fallacy in general.) And I haven't seen any discussion of how large the gap between these estimates could be. I think it could be orders of magnitude, just because costs are in the denominator of a benefit-cost ratio, so uncertainty in costs can have huge effects on cost-effectiveness.
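One denominator mechanism is worth making explicit: even if your cost estimate is unbiased, taking the expected value of benefit/cost overstates B/E[C] when costs are uncertain, because E[B/C] > B/E[C] (Jensen's inequality applied to 1/C). A quick Monte Carlo sketch with made-up numbers:

```python
import math
import random

random.seed(0)

benefit = 1_000.0   # hypothetical benefit (units of good done)
mean_cost = 100.0   # your best-guess expected cost

# Lognormal cost model whose mean equals mean_cost, with sizable spread.
sigma = 0.8
mu = math.log(mean_cost) - sigma**2 / 2

# Naive point estimate: plug the expected cost into the denominator.
naive = benefit / mean_cost  # 10.0

# Expected value of the ratio under cost uncertainty.
ratios = [benefit / random.lognormvariate(mu, sigma) for _ in range(200_000)]
expected_ratio = sum(ratios) / len(ratios)

print(naive)           # 10.0
print(expected_ratio)  # approx. 10 * exp(sigma**2), i.e. ~19: optimistic by ~2x
```

This is only one of the channels; motivated optimism and unknown-unknown costs come on top of it and are probably larger in practice.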
One straightforward way to estimate this gap is to redo a GiveWell CEA, but assuming that you were setting up a charity to deliver that intervention for the first time. If GiveWell's ex-post estimate is X and your ex-ante estimate is K*X for the same intervention, then we would conclude that ex-ante cost-effectiveness is K times too optimistic, and deflate ex-ante estimates by a factor of K.
I might try to do this myself, but I don't have any experience with CEAs, and would welcome someone else doing it.