anonymous6


Comments

A Critical Review of Open Philanthropy’s Bet On Criminal Justice Reform

Unlike poverty and disease, many of the harms of the criminal justice system are due to intentional cruelty. People are raped, beaten, and tortured every day in America's jails and prisons. There are smaller cruelties, too, like prohibiting detainees from seeing visitors in order to extort more money out of their families.

To most people, harm inflicted through intentional evil (with some people even getting rich off it) seems viscerally worse than harm from natural causes.

I think that from a ruthless expected-utility perspective this is probably correct in the abstract, i.e. all else equal, murder is worse than an equivalently painful accidental death. However, I doubt that taking it into account (even being very generous about things like "illegible corrosion to the social fabric") would importantly change your conclusions about $/QALY in this case, because all else is not equal.

But I think the distinction is probably worth making, as it's a major difference between criminal justice reform and the two baselines for comparison.

Catholic theologians and priests on artificial intelligence

Good call -- I added a little more detail about these two discussions.

Responsible/fair AI vs. beneficial/safe AI?

A thought about some of the bad dynamics on social media that occurred to me:

Some well-known researchers in the AI Ethics camp have been critical of the AI Safety camp (or of associated ideas like longtermism). By contrast, AI Safety researchers seem to be neutral-to-positive on AI Ethics, so there is some asymmetry.

However, there are certainly mainstream non-safety ML researchers who are harshly (typically unfairly) critical of AI Ethics. And there are also AI-Safety/EA-adjacent popular voices (like Scott Alexander) who criticize AI Ethics. Then on top of this there are fairly vicious anonymous trolls on Twitter.

So some AI Ethics researchers reasonably feel like they're being unfairly attacked and that people socially connected to EA/AI Safety are in the mix, which may naturally lead to hostility even if it isn't completely well-directed.

Responsible/fair AI vs. beneficial/safe AI?

https://facctconference.org (ACM FAccT) is the major conference in the area. It's interdisciplinary -- a mix of technical ML work, social/legal scholarship, and humanities-type papers.

Some big names: Moritz Hardt, Arvind Narayanan, and Solon Barocas wrote a textbook, https://fairmlbook.org, and they and many of their students are important contributors. Cynthia Dwork is another big name in fairness, and Cynthia Rudin in explainable/interpretable ML. That's a non-exhaustive list, but I think it's a decent seed for a search through coauthors.

I believe there is in fact important technical overlap between the two problem areas. For example, https://causalincentives.com is research from a group of people who see themselves as working in AI safety, yet people in the fair ML community are also very interested in causality and study it for similar reasons using similar tools.

I think much of the expressed animosity is only because the two research communities seem to select for people with very different preexisting political commitments (left/social justice vs. neoliberal), and they find each other threatening for that reason.

On the other hand, there are differences. An illustrative one is that fair ML people care a lot about the fairness properties of linear models, both in theory and in practice right now. Whereas it would be strange if an AI Safety person cared at all about a linear model -- they're just too small and nothing like the kind of AI that could become unsafe.
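To make that contrast concrete, here is a minimal sketch (my own illustration, not anything from either community's actual work) of the kind of fairness property fair ML researchers study for linear models -- a demographic parity check on a logistic regression fit to synthetic data. The data and the "protected group" attribute are entirely hypothetical:

```python
# Minimal illustration: demographic parity for a simple linear model.
# All data is synthetic; the protected-group attribute is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5000
group = rng.integers(0, 2, size=n)                  # hypothetical protected attribute
X = rng.normal(size=(n, 3)) + 0.5 * group[:, None]  # features correlated with group
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Demographic parity: compare positive-prediction rates across groups.
rate_0 = pred[group == 0].mean()
rate_1 = pred[group == 1].mean()
print(f"positive rate (group 0): {rate_0:.3f}")
print(f"positive rate (group 1): {rate_1:.3f}")
print(f"demographic parity gap:  {abs(rate_0 - rate_1):.3f}")
```

The point is just that even for a model this simple there are substantive questions about its behavior across groups, which is roughly the opposite of the AI Safety focus on very large systems.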

Mastermind Groups: A new Peer Support Format to help EAs aim higher

My feeling about the phrase "Mastermind Group" is fairly negative. I have heard people mention it from time to time and knew it was from Napoleon Hill, who more or less invented the self-help/self-improvement book. I associate the phrase, I think reasonably, with the whole culture of self-improvement seminars and content that descends from Hill -- what used to be authors/speakers like Tony Robbins and is now also really big on YouTube. The kind of thing where someone sells you a course on how to get rich, and the way to get rich is to learn to successfully sell a course on how to get rich.

Take this for what it's worth -- just one person's possibly skewed gut reaction to this phrase. I think the idea of peers meeting in a group to support each other remains sound.

Complex Systems for AI Safety [Pragmatic AI Safety #3]

One plausible way to draw a line between RL and core DL is that post-AlphaGo, a lot of people were very bullish specifically on deep networks + reinforcement learning. Part of the idea was that supervised learning required inordinately costly human labeling, whereas RL would be able to learn from cheap simulations and even improve itself online in the world. OpenAI was originally almost 100% RL-focused. That thread of research is far from dead, but it has certainly not panned out the way people hoped at the time (e.g. OpenAI has shifted heavily away from RL).

Meanwhile non-RL deep learning methods, especially generative models that kind of sidestep the labeling issue, have seen spectacular success.

Try to sell me on [ EA idea ] if I'm [ person with a viewpoint ]

I gave this a shot and it ended up being an easier sell than I expected:

"AI is getting increasingly big and important. The cutting edge work is now mainly being done by large corporations, and the demographics of the people who work on it are still overwhelmingly male and exclude many disadvantaged groups.

In addition to many other potential dangers, we already know that AI systems trained on data from society can unintentionally come to reflect the unjust biases of society: many of the largest and most impressive AI systems right now have this problem to some extent. A majority of the people working on AI research are quite privileged and many are willfully oblivious to the risks and dangers.

Overall, these corporations expect huge profits and power from developing advanced AI, and they’re recklessly pushing forward in improving its capabilities without sufficiently considering the harms it might cause.

We need to massively increase the amount of work we put into making these AI systems safe. We need a better understanding of how they work, how to make them reflect just values, and how to prevent possible harm, especially since any harm is likely to fall disproportionately on disadvantaged groups. And we even need to think about making the corporations building them slow down their development until we can be sure they’re not going to cause damage to society. The more powerful these AI systems become, the more serious the danger — so we need to start right now."

I bet it would go badly if one tried to sell a social justice advocate on some kind of grand transhumanist vision of the far future, or even just on generic longtermism, but it's possible to think about AI risk without those other commitments.

Thought experiment: If you had 3 months to turn a stressed and unhappy person into a calm and happy one, what meta approach would you take?

It is rare, but it does happen, that using psychedelic drugs triggers a psychotic episode. Even though it is rare, this is such a bad outcome that it's worth taking into consideration.

My layperson's understanding of the risks and tradeoffs right now is as follows: I think that used as a treatment for a concrete and difficult problem like PTSD, psychedelic drugs seem like immensely useful tools that should be used much more.

But for just general self-improvement or self-actualization, using psychedelic drugs feels to me like "picking up pennies in front of a steamroller" -- it will be fairly good for most people most of the time, with a huge tail risk.

I don't think it's well understood when, why, or how often this happens. I wish it were better understood, as I suspect it's specific people who are at risk and most people can use psychedelics safely. But from where I sit it seems like a -EV bet absent better information about your own brain.

The AI Messiah

One can imagine, say, a Christian charitable organization where non-Christians see its work for the poor, the sick, the hungry, and those in prison as good, and don't really mind that some of the money also goes to funding theologians and building cathedrals.

Although Christianity kind of has it built in that you have to help the poor even if the Second Coming is tomorrow. The risk in EA is if people were to become erroneously convinced of a short AI timeline and conclude all the normal charitable stuff is now pointless.

Effective altruism’s odd attitude to mental health

People (including, I think, some of the research at the Happier Lives Institute) often distinguish "serious mental illness (SMI)", which is roughly schizophrenia, bipolar I, and debilitating major depression, from "any mental illness (AMI)", which includes everything.

The term "mental health" lumps together these two categories that, despite their important commonalities, I think probably should be analyzed in very different ways.

For example, with SMI, there are often treatments with huge obvious effects. But the side effects are bad, and patients may refuse treatment for various reasons including lack of insight. Treating these diseases can have a huge impact -- the difference between someone being totally unable to work or care for themselves and then dying young by accident or suicide, vs. being able to live an independent and successful life. But they are fairly rare in the population.

Whereas for the set AMI-minus-SMI (generalized anxiety, etc.), treatment effect sizes seem small and hard to measure. There's often so much demand for treatment that rationing is required. Impairment and suffering can be really bad, but not, I think, typically as bad as with SMI. These diseases are much more prevalent, though, so even if effect sizes are smaller, maybe the total impact of an intervention is much greater.
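As a purely illustrative back-of-the-envelope calculation (the numbers below are made up, not drawn from any study or from the Happier Lives Institute's work), the prevalence-vs-effect-size tradeoff works out like this:

```python
# Hypothetical numbers, purely to illustrate prevalence x effect-size arithmetic.
population = 1_000_000

conditions = {
    "SMI (rare, large treatment effect)":             {"prevalence": 0.04, "effect": 1.0},
    "AMI minus SMI (common, small treatment effect)":  {"prevalence": 0.20, "effect": 0.3},
}

for name, c in conditions.items():
    total = population * c["prevalence"] * c["effect"]  # arbitrary well-being units
    print(f"{name}: ~{total:,.0f} units of total impact")
```

With these made-up inputs the more common, mildly-treatable category comes out ahead in total, even though each individual treatment does far less.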

This distinction is obvious, but I want to point it out explicitly: even though everyone kind of knows it, I think it's still underrated, and it's probably important for thinking about expected impact.
