I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website at https://mdickens.me/. Much of the content on my website gets cross-posted to the EA Forum, but I also write about some non-EA stuff like [investing](https://mdickens.me/category/finance/) and [fitness](https://mdickens.me/category/fitness/).
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
What can ordinary people do to reduce AI risk? That is, people who don't have expertise in AI research / decision theory / policy / etc.
Some ideas:
Ah, I see what you're saying. I can't recall seeing much discussion of this. My guess is that it would be hard to develop a non-superintelligent AI that poses an extinction risk, but I haven't really thought about it. It does sound like something that deserves some thought.
When people raise particular concerns about powerful AI, such as risks from synthetic biology, they often talk about them as risks from general AI, but they could come from narrow AI too. For example, some people have talked about the risk that narrow AI could be used by humans to develop dangerous engineered viruses.
I agree with David's comment. These sorts of ethical dilemmas are puzzles for everyone, not just for utilitarianism.
And in the case of insect welfare, rights-based theories produce even thornier puzzles because it's unclear how they should reckon with tradeoffs.
There is a related concern: most of the big funders either have investments in AI companies or have close ties to people with investments in AI companies. This biases them toward funding activities that won't slow down AI development. So the more effective an org is at putting the brakes on AGI, the harder a time it will have getting funded.*
Props to Jaan Tallinn, who is an early investor in Anthropic yet has funded orgs that want to slow down AI (including CAIP).
*I'm not confident that this is a factor in why CAIP has struggled to get funding, but I wouldn't be surprised if it was.
> In general, writing criticism feels more virtuous than writing praise.
FWIW it feels the opposite to me. Writing praise feels good; writing criticism feels bad.
(I guess you could say that it's virtuous to push through those bad feelings and write the criticism anyway? I don't get any positive feelings or self-image from following that supposed virtue, though.)
I think this is an important point that's worth saying.
For what it's worth, I am not super pessimistic about whether alignment can be solved in principle. But I'm quite concerned that the safety-minded AI companies seem to completely ignore the philosophical problems with AI alignment. They all operate under the assumption that alignment is purely an ML problem and that they can solve it by basically doing ML research, which I expect is false (credence: 70%).
Wei Dai has written some good stuff about the problem of "philosophical competence". See here for a collection of his writings on the topic.
Hard to know the full story, but for me this is a weak update against CAIS's judgment.
Right now CAIS is one of maybe two orgs (along with FLI) pushing for AI legislation that both (1) openly care about x-risk and (2) are sufficiently Respectable™* to get funding from big donors. This move could be an attempt to maintain CAIS's image as Respectable™. My guess is that it's the wrong move, but I have a lot of uncertainty. I think firing people due to public pressure is generally a bad idea, although I'm not confident that that's what actually happened.
*I hope my capitalization makes this clear, but to be explicit: I don't think Respectable™ is the same thing as "actually respectable". For example, MIRI is actually respectable, but isn't Respectable™.
Edit: I just re-read the CAIS tweet. From the wording, it is clear that CAIS meant one of two things: "We are bowing to public pressure" or "We didn't realize John Sherman would say things like this, and we consider it a fireable offense". Neither one is a good look IMO.