This list is great. I recommend adding the new Intro to ML Safety course and the ML Safety Scholars Program. Or maybe everyone is supposed to read Charlie's post for the most up-to-date resources? It's worth clarifying.
The Inside View also focuses on AI alignment. There's a YouTube channel with videos of the interviews. Sometimes there are interview highlights on LessWrong.
Here are some comments on the article that I sent to my family.
In 1972 philosopher Peter Singer suggested using metrics rather than emotion to direct charitable giving.
Not sure what he's talking about. I think the main point of Famine, Affluence, and Morality is that if you can help someone without a significant cost to yourself, you should.
Effective altruism also seems to be related to the “work to give” movement. Workers will rationalize high-paying jobs by giving most of their income away. Actually, when you work, you already give to society, but that is too complex for some to understand.
Earning to give is only a small part of EA, and I don't think it's typically a post hoc rationalization. And EAs understand very well that working directly on problems can give to society - see the first WSJ article I sent.
An organization known as GiveWell will tell you what charities are effective. I did a little digging, and I’m not so sure they’re effective at all. Yes, they direct money toward malaria nets and treatments for parasitic worms, but they also supply supplements for vitamin A deficiency, though genetically modified “golden” rice already provides vitamin A more effectively. Hmmm, seems like a move backward.
It's plausible that the best way to reduce vitamin A deficiency is to invest in multiple strategies at once. But if he gave a thorough argument that donating to "golden" rice infrastructure fights vitamin A deficiency more effectively per dollar than vitamin A supplementation does, then I wouldn't be surprised to see GiveWell change its recommendations.
William MacAskill, a major effective-altruism booster, told the Washington Post that more should be spent on “preparing for low-probability, high-cost events such as pandemics.” That’s a bit like closing the barn door after the horse has bolted.
The author's comment seems quite silly to me.
And Mr. Bankman-Fried’s various entities, along with Cari Tuna and others, have put up about $19 million for a future California ballot measure, the California Pandemic Early Detection and Prevention Act, which would add a 0.75% tax on incomes over $5 million to raise up to $15 billion over 10 years. Catch that? Someone else pays. Effective, but not exactly selfless.
I don't see anything wrong with SBF promoting a tax on extremely wealthy people to prevent pandemics (unless the resulting pandemic prevention efforts are less valuable than what the wealthy people would do with their money otherwise). In general, I'm sure some taxes are totally worth promoting.
I don’t care if altruists spend their own money trying to prevent future risks from robot invasions or green nanotech goo, but they should stop asking American taxpayers to waste money on their quirky concerns.
Pandemic prevention is not a "quirky" concern!
And “effective” is in the eye of the beholder. Effective altruism proponent Steven Pinker said last year, “I don’t particularly think that combating artificial intelligence risk is an effective form of altruism.”
Yes, EAs don't agree on everything, nor do I think they should. There's an emphasis within EA on updating your beliefs in response to new evidence, such as reasonable arguments from other people.
Development economist Lant Pritchett finds it “puzzling that people’s [sic] whose private fortunes are generated by non-linearity”—Facebook, Google and FTX can write code that scales to billions of users—“waste their time debating the best (cost-effective) linear way to give away their private fortunes.”
So the argument is that when deciding where to donate your money, you should use the same tactics that earned you that money in the first place? It's unclear how "cost-effectiveness" is the same as "linearity." Maybe he's advocating for donating to interventions that are like unicorn startups - interventions that could be hugely beneficial if they succeed, but probably won't do much. If so, this is kind of exactly what Open Philanthropy is doing ("hits-based giving").
He notes that “national development” and “high economic productivity” drive human well-being. So true. History has proved that capitalism is the most effective and altruistic system.
It's fully possible to believe in EA principles and support capitalism. But high economic productivity can come with damaging externalities, such as increased risk of global catastrophes from new technologies.
There are only four things you can do with your money: spend it, pay taxes, give it away or invest it. Only the last drives productivity and helps society in the long term.
That seems totally incorrect. GiveWell estimates that donations to its recommended charities have averted over 100,000 deaths.
Eric Hoffer wrote in 1967 of the U.S.: “What starts out here as a mass movement ends up as a racket, a cult, or a corporation.” That’s true even of allegedly altruistic ones.
This is one of the few points in the article that I like. EA (which EA headquarters likes to describe as "a project") resembles a cult in some ways: people worry about future catastrophes, care about "doing good," think about weird ideas, and dream about growing the movement.
Vael Gates's post "Resources I send to AI researchers about AI safety" offers this:
AI Safety in China

Tianxia 天下 and Concordia Consulting 安远咨询 are the main organizations in the space. If you're interested in getting involved in those communities, let me know and I can connect you!

- China-related AI safety and governance paths
- ChinAI Newsletter
The Kendrick Lamar joke at the top makes me a little uncomfortable since that song (and more generally, that album) is about a very serious topic. Otherwise I really like this post; I'm also confused about the precise meaning of "doing X is worth $$$."
I think this post argues that people shouldn't obsess about elite universities as sources of talent. My paraphrasing of the title is "Most super smart students aren't at super elite schools."
Here's the most up-to-date version of the AGI Safety Fundamentals curriculum. Be sure to check out Richard Ngo's "AGI safety from first principles" report. There's also a "Further resources" section at the bottom linking to pages like "Lots of links" from AI Safety Support.
The ethical theory of utilitarianism essentially states that "we ought to act to improve the well-being of everyone by as much as possible," which has a strong "do the most good" vibe. There are certainly a lot of arguments for and against utilitarianism-style ethics.

I think one relevant intuition people have is "there's never a point at which I would not want to help any more people." Like if one action helps N people (in expectation) and another helps N+1 people, I'd rather do the latter.

A conflicting intuition is that we don't feel that much better about helping 10 billion people than helping 9 billion people. The essay "On caring" argues that it's still really important to help those extra 1 billion people.

Another idea is that not maximizing (e.g. not saving an extra person's life because you used that money to eat at a fancy restaurant) is the same as allowing harm to happen, and some philosophers believe that this is no different from doing harm.

You may also be interested in the Von Neumann–Morgenstern utility theorem, which proves that all agents whose behavior obeys some reasonable properties will behave as maximizers.

As a side note, I don't think you need to deeply care about maximization to care about EA; for example, you might feel fine about frequenting fancy restaurants. EA is not utilitarianism; the core idea of EA is increasing the quality of your altruism, not the quantity (although plenty of EAs feel inspired to increase the quantity as well, and some of these EAs are utilitarians).
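To make the Von Neumann–Morgenstern mention concrete, here is a rough sketch of the theorem's standard textbook statement; the notation below is the usual one from decision theory, not anything specific to this post:

```latex
% Von Neumann–Morgenstern theorem (informal sketch):
% If an agent's preference relation \succsim over lotteries satisfies
% completeness, transitivity, continuity, and independence, then there
% exists a utility function u over outcomes x_i such that, for lotteries
% L (outcome probabilities p_i) and M (outcome probabilities q_i),
L \succsim M
\quad\iff\quad
\sum_{i} p_i \, u(x_i) \;\ge\; \sum_{i} q_i \, u(x_i).
% In other words, the agent's choices are exactly those of an
% expected-utility maximizer.
```

So "behaving as a maximizer" here means: any agent whose preferences satisfy those four axioms chooses as if maximizing the expected value of some utility function.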
People act as if the difficult problems in front of them are the cause of their low moods.
Sometimes this is true! In which case I recommend contemplating "Detach the grim-o-meter."
Here's a separate error that I've made many times: believing that your intellectual knowledge of the world's problems causes you to act a certain way, when in reality you act that way because of your mood.
That's great. Seems that these days all the Alignment Newsletter translations go directly onto the English website.