


    I agree with your general case, and I'm interested in the role that genetics can play in improving educational and socio-economic outcomes across the world. In the case of a world where biological intelligence remains relevant (not my default scenario, but plausible), this will become an increasingly interesting question. 

    However, I'm unconvinced that an EA should want to invest in any of the suggested donation interventions at the minute - they seem to be examples where existing research and market incentives would probably be sufficient. I'm not sure that more charitable support would have a strong counterfactual impact at the margins. (Note: I know very little about funding for genetics research - it seems expensive and already quite well-funded, but please correct me if I'm wrong here).

    In terms of whether we should promote/ talk about it more, I think EA has limited "controversy points" that should be used sparingly for high-impact cause areas or interventions. I don't feel that improving NIQ through genetic interventions scores well on the "EV vs. controversy" trade-off. There are other genetic enhancement interventions (e.g. reducing extreme suffering, in humans or farmed animals), that seem to give you more EV for less controversy.

    Also, if we do make this case, I think that mentioning Lynn/ Vanhanen is probably unwise, and that you could make the case equally well without the more controversial figures/ references. 

    Finally, I'd like to see a plausible pathway or theory of change for a more explicitly EA-framed case for genetic enhancement. For example, we expect this technology to develop anyway, but people with an EA framing could:

    1. Promote the use of embryo screening to avert strongly negative cognitive outcomes 
    2. If this technology is proven to be cost-effective in rich countries, remove barriers to rolling it out in countries where it could have a greater counterfactual impact

    At risk of compounding the hypocrisy here, criticizing a comment for being abrasive and arrogant while also saying: "Your ideas are neither insightful nor thoughtful, just Google/ ChatGpt it" might be showing a lack of self-awareness...

    But agreed that this post is probably not the best place for an argument on the feasibility of a pause, especially as the post mentions that David M will make the case for how a pause would work as part of the debate. If your concerns aren't addressed there, Gerald, that post will probably be a better place to discuss.

    [This comment is no longer endorsed by its author]

    My argument questions lives saved, DALYs, and QALYs as metrics. Like lives saved, QALYs generally carry the implicit assumption that death is worse than a very bad life, no matter the levels of mental suffering, pain, and physical debilitation.

    I'm probably criticising GiveWell's methods as much as the post: their methodology assumes that the value of saving lives/ averting deaths is positive.

    I generally agree more with HLI's 'WELLBY' approach, as long as negative WELLBYs are taken seriously. 

    As I said in my final paragraph, I do see global health interventions as probably being net positive, despite their potentially saving more net-negative lives, so my argument definitely wasn't to "defund GiveWell". It was more that "saving lives" is a bad metric and a bad thing to feel good about.

    My cherry picking of negative phenomena was in response to the cherry picking of the original post. I think boring/ useless school (I didn't quote anything but... most African rural schools are boring and useless...), unpleasant labour, hunger/ stunting and poor mental health are very relevant variables, as they define a lot of the waking hours of the poorest people in the world.

    FGM and child marriage are probably less representative of general welfare - I was responding to the "first kiss" idea in the post.

    I chose Burkina Faso at random. For central African countries I might have stressed sexual violence, which seems to be lower in Burkina Faso.

    Thanks for responding.

    I accept your point that life satisfaction and happiness measures aren't equivalent. But if GiveWell recipients think that their life is significantly closer to the worst possible life than to the best possible life, this still makes my point pretty well. It doesn't seem obvious how to judge the net welfare of someone who is, say, 3/10 for life satisfaction and 'rather happy'. I haven't seen good studies on GiveWell recipients' happiness or moment-to-moment well-being (using ESM etc.), or other ways of measuring what we care about, but I would appreciate better info on that.

    My (implicit) estimates for child marriage, stunting and mental illness should be adjusted for the fact that average GiveWell charity recipients in Burkina Faso have worse lives than the average citizen, but I acknowledge my language was imprecise. Stunting might plausibly cross the 50% threshold in that category, but might be under. The median marriage age for Burkinabe girls is 17, and is probably lower in the GiveWell pop. Some orgs define child marriage as <18.

    Mental illness thresholds seem to vary a lot, but this https://journals.plos.org/plosone/article?id=10.1371/journal.pone.0164790 article is a good example of how bad mental health is for 'ultra-poor' kids in Burkina Faso. My thinking would be that 20-30% of the kids in this study have lives clearly on the net-negative side, which I think would be unlikely to be outweighed by the more neutral/ positive lives. Don't know exactly how this would match with a typical GiveWell population.

    "To answer your comment "you have to work out whether you think this life you've saved is more likely or not to be net positive. " - We have worked it out, and the answer YES, a resounding yes"

    I consider this obviously false. I just don't believe that you/ global health people have disproven negative-leaning utilitarian or suffering-focused ethical stances. You might have come to a tentative conclusion based on a specific ethical framework, limited evidence and personal intuitions (as I have).

    I'd say that there's probably a fairly fundamental uncertainty about whether any lives are net positive. There's definitely no consensus within the EA community or elsewhere. It depends on things like the suffering/ happiness asymmetry and the extent to which you think pain and pleasure scale logarithmically (https://qri.org/blog/log-scales).

    Most of us will acknowledge that at least some lives are net negative, some extremely so, and that these lives are far more likely to be saved by GiveWell charities. I suspect any attempt to model exactly where to draw the line will be very sensitive to subtle differences in assumptions, but my current model leans towards the average GiveWell life being net negative in the medium term, for the reasons I've mentioned.

    In terms of language, I think "great care and dignity" are suitable for most contexts, but I think that it's important that the EA forum is a safe space for blunt language on this topic.

    I'm increasingly convinced that EA needs to distance itself from this framing of "saving lives = good", and to avoid the satisfying illusion that giving to global health charities is saving a life "just like our own". (Particularly disagreeing with @NickLaing's comments here.) If you've decided to save the lives of the ultra-poor, you should be able to bite the bullet and admit you're doing that.

    We all like the idea of saving a kid who's "playing with their friends in the schoolyard, maybe spending time with their grandma, or maybe just kicking a football, alone", and " celebrating her birthday" and the "first kiss".

    But you don't need to be a negative utilitarian to recognise that the kid whose life you've saved probably isn't having a great life, mostly for the reasons you donated to that charity in the first place: it's shitty being a poor person in the poorest countries in the world.

    Let's say you saved a life in Burkina Faso:

    - If you saved a girl, she'll probably be a victim of FGM, and get married as a child to an older man - that "first kiss" you mention might be as a 15-year-old girl with her 30-year-old husband
    - If they don't go to school, they'll do hard and dangerous work as children, in agriculture, fishing, or worse.
    - If they do go to school, they'll probably spend their days in extreme boredom, getting left behind, and end their school years functionally illiterate and innumerate
    - They're likely to spend a lot of their childhood hungry - they will get ill often, with malaria, diarrhea, or other communicable diseases
    - They will be likely to grow up stunted or wasted, and with diminished cognitive abilities
    - All of the above tend not to be great for mental health, so they're fairly likely to become depressed, anxious, or suffer from more serious mental issues
    - When you ask them how happy they are on a life satisfaction/ happiness scale, they'll give you around 4/10 

    Based on this reality, and your estimates about how the world is likely to improve in the coming few decades, you have to work out whether you think this life you've saved is more likely or not to be net positive. 

    I'm not saying that we shouldn't give more money to global health charities: they improve lives and stop people getting horrible diseases. All else equal, fewer communicable diseases are better. But I'm disagreeing strongly with the framing of this piece.

    Interesting - I definitely think this is valuable. I have two small recommendations for the survey:

    - Specify in the sugary drinks question whether it includes only commercial, fizzy sugary drinks, or any drink with sugar in it (e.g. coffee with sugar, milkshakes, bubble tea, traditional sweet drinks etc.). As it stands, you give examples of commercial fizzy drinks, but it's a little ambiguous whether other sweet drinks are included.

    - Make it clear that respondents can choose percentages over 100% for the first two options (many people are likely to believe that a life in prison, or without any pleasure, is worse than death). I think that your example percentages (e.g. 0.1%, 1%, 10%, 20%, 30%, 100% etc.) are anchoring people to a particularly low score.

    Interesting. I think there are two related concepts here, which I'll call individual modesty and communal modesty: individual modesty, where an individual defers to the perceived experts (potentially within their community), and communal modesty, where the community defers to the relevant external expert opinion. I think EAs tend to have fairly strong individual modesty, but occasionally our communal modesty lets us down.

    Here are a few observations on the issues EAs are likely to have strong opinions on:

    1. Ethics: I'd guess that most individual EAs think they're right about the fundamentals: that consequentialism is just better than the alternatives. I'm not sure whether this is more communal or individual immodesty.
    2. Economics/ Poverty: I think EAs tend to defer to smart external economists who understand poverty better than core EAs, but are less modest when it comes to what we should prioritise based on expert understanding. 
    3. Effective Giving: Individuals tend to defer to a communal consensus. We're the relevant experts here, I think.
    4. General forecasting/ Future: Individuals tend to defer to a communal consensus. We think the relevant class is within our community, so we have low communal modesty. 
    5. Animals: We probably defer to our own intuitions more than we should. Or Brian Tomasik. If you're anything like me, you think: "he's probably right, but I don't really want to think about it".
    6. Geopolitics: I think that we're particularly bad at communal modesty here - I hear lots of bad memes (especially about China) that seem to be fairly badly informed. But it's also difficult to work out the relevant expert reference class. 
    7. AI (doom): Individuals tend to defer to a communal consensus, but tend to lean towards core EA's 3-20% rather than core-LW/Eliezer's 99+%. People broadly within our community (EA/ rationalists) genuinely have thought about this issue more than anyone else, but I think there's a debate whether we should defer to our pet experts or more establishment AI people. 

    I think there's a range of things that could happen with lower-level AGI, with increasing levels of 'fire-alarm-ness' (1-4), but decreasing levels of likelihood. Here's a list; my (very tentative) model would be that I expect lots of 1s and a few 2s within my default scenario, and this will be enough to slow down the process and make our trajectory slightly less dangerous. 

    Forgive the vagueness, but these are the kind of things I have in mind:

    1. Mild fire alarm: 

    - Hacking (prompt injections?) within current realms of possibility (but amped up a bit)
    - Human manipulation within current realms of possibility (e.g. IRA-style disinformation at roughly 5x current scale)
    - Visible, unexpected self-improvement/ escape (without severe harm)
    - Any lethal autonomous weapon use (even if generally aligned) especially by rogue power
    - Everyday tech (phones, vehicles, online platforms) doing crazy, but benign misaligned stuff
    - Stock market manipulation causing important people to lose a lot of money 

    2. Moderate fire alarm:

    - Hacking beyond current levels of possibility
    - Extreme mass manipulation
    - Collapsing financial or governance systems causing minor financial or political crisis
    - Deadly use of autonomous AGI in weapons systems by rogue group (killing over 1000 people)
    - Misaligned, but less deadly, use in weapons systems
    - Unexpected self-improvement/ escape of a system causing multiple casualties/ other chaos
    - Attempted (thwarted) acquisition of WMDs/ biological weapons
    - Unsuccessful (but visible) attempts to seize political power

    3. Major fire alarm:

    - Successful attempts to seize political power
    - Effective global mass manipulation
    - Successful acquisition of WMDs, bioweapons
    - Complete financial collapse 
    - Complete destruction of online systems - the internet becomes unusable, etc.
    - Misaligned, very deadly use in weapons systems 

    4. The fire alarm has been destroyed, so now it's just some guy hitting a rock with a scorched fencepost:

    - Actual triggering of nuclear/ bio conflict/ other genuine civilisational collapse scenario (destroying AI in the process)

    Okay, I think your reference to infinite time periods isn't particularly relevant here (there seems to be a massive difference between 5 and 20 years), but I get your point that short timelines play an important role.

    I guess the relevant factors that might be where we have different intuitions are:

    1. How long will this post-agentic-AGI, pre-God-AGI phase last?
    2. How chaotic/ dangerous will it be?
    3. When bad stuff happens, how likely is it to seriously alter the situation? (e.g. pause in AI progress, massive increase in alignment research, major compute limitations, massive reduction on global scientific capacity etc.)