AI safety
Studying and reducing the existential risks posed by advanced artificial intelligence

Quick takes

4 · 5h
What can ordinary people do to reduce AI risk? People who don't have expertise in AI research / decision theory / policy / etc. Some ideas:
* Donate to orgs that are working to reduce AI risk (which ones, though?)
* Write letters to policy-makers expressing your concerns
* Be public about your concerns. Normalize caring about x-risk
37 · 1mo · 5
I'm not sure how to word this properly, and I'm uncertain about the best approach to this issue, but I feel it's important to get this take out there.

Yesterday, Mechanize was announced: a startup focused on developing virtual work environments, benchmarks, and training data to fully automate the economy. The founders include Matthew Barnett, Tamay Besiroglu, and Ege Erdil, who are leaving (or have left) Epoch AI to start this company. I'm very concerned we might be witnessing another situation like Anthropic, where people with EA connections start a company that ultimately increases AI capabilities rather than safeguarding humanity's future. But this time, we have a real opportunity for impact before it's too late. I believe this project could accelerate capabilities, increasing the odds of an existential catastrophe.

I've already reached out to the founders on X, but perhaps there are people more qualified than me who could speak with them about these concerns. In my tweets to them, I expressed worry about how this project could speed up AI development timelines, asked for a detailed write-up explaining why they believe this approach is net positive and low risk, and suggested an open debate on the EA Forum. While their vision of abundance sounds appealing, rushing toward it might increase the chance we never reach it due to misaligned systems.

I personally don't have a lot of energy or capacity to work on this right now, nor do I think I have the required expertise, so I hope that others will pick up the slack. It's important we approach this constructively and avoid attacking the three founders personally. The goal should be productive dialogue, not confrontation.

Does anyone have thoughts on how to productively engage with the Mechanize team? Or am I overreacting to what might actually be a beneficial project?
80 · 3mo · 1
I recently created a simple workflow to allow people to write to the Attorneys General of California and Delaware to share thoughts + encourage scrutiny of the upcoming OpenAI nonprofit conversion attempt: Write a letter to the CA and DE Attorneys General.

I think this might be a high-leverage opportunity for outreach. Both AG offices have already begun investigations, and AGs are elected officials who are primarily tasked with protecting the public interest, so they should care what the public thinks and prioritizes. Unlike e.g. congresspeople, I don't think AGs often receive grassroots outreach (I found ~0 examples of this in the past), and an influx of polite and thoughtful letters may have some influence — especially from CA and DE residents, although I think anyone impacted by their decision should feel comfortable contacting them.

Personally I don't expect the conversion to be blocked, but I do think the value and nature of the eventual deal might be significantly influenced by the degree of scrutiny on the transaction.

Please consider writing a short letter — even a few sentences is fine. Our partner handles the actual delivery, so all you need to do is submit the form. If you want to write one on your own and can't find contact info, feel free to DM me.
31 · 1mo · 16
The current US administration is attempting an authoritarian takeover. This takes years and might not be successful. My Manifold question puts an attempt to seize power if they lose legitimate elections at 30% (n=37). I put it much higher.[1]

Not only is this concerning by itself, it also incentivizes them to seek a decisive strategic advantage via superintelligence over pro-democracy factions. As a consequence, they may be willing to rush and cut corners on safety. Crucially, this relies on them believing superintelligence can be achieved before a transfer of power. I don't know how far the belief in superintelligence has spread within the administration. I don't think Trump is 'AGI-pilled' yet, but maybe JD Vance is? He made an accelerationist speech. Making them more AGI-pilled and advocating for nationalization (as Aschenbrenner did last year) could be very dangerous.

1. ^ So far, my pessimism about US democracy has put me in #2 on the Manifold topic, with a big lead over other traders. I'm not a Superforecaster, though.
64 · 3mo · 5
Notes on some of my AI-related confusions[1]

It’s hard for me to get a sense for stuff like “how quickly are we moving towards the kind of AI that I’m really worried about?” I think this stems partly from (1) a conflation of different types of “crazy powerful AI”, and (2) the way that benchmarks and other measures of “AI progress” decouple from actual progress towards the relevant things. Trying to represent these things graphically helps me orient/think.

First, it seems useful to distinguish the breadth or generality of state-of-the-art AI models from how capable they are on some relevant set of capabilities. Once I separate these out, I can plot roughly where some definitions of “crazy powerful AI” apparently lie on these axes. (I think there are too many definitions of “AGI” at this point. Many people would make that area much narrower, but possibly in different ways.)

Visualizing things this way also makes it easier for me[2] to ask: Where do various threat models kick in? Where do we get “transformative” effects? (Where does “TAI” lie?)

Another question that I keep thinking about is something like: “What are key narrow (sets of) capabilities such that the risks from models grow ~linearly as they improve on those capabilities?” Or maybe: “What is the narrowest set of capabilities for which we capture basically all the relevant info by turning the axes above into something like ‘average ability on that set’ and ‘coverage of those abilities’, and then plotting how risk changes as we move the frontier?” The most plausible sets of abilities like this might be something like:

* Everything necessary for AI R&D[3]
* Long-horizon planning and technical skills?

If I try the former, how does risk from different AI systems change? And we could try drawing some curves that represent our guesses about how the risk changes as we make progress on a narrow set of AI capabilities on the x-axis. This is very hard; I worry that companies focus on benchmarks in ways that
61 · 3mo · 4
Holden Karnofsky has joined Anthropic (LinkedIn profile). I haven't been able to find more information.
43 · 4mo · 2
Both Sam and Dario saying that they now believe they know how to build AGI seems like an underrated development to me. To my knowledge, they only started saying this recently. I suspect they are overconfident, but this still seems like a more significant indicator than many people are treating it as.
3 · 5d · 4
(Just speculating; would like to have other inputs.) I get the impression that sexy ideas get disproportionate attention, and that this may be contributing to the focus on AGI risk at the expense of risks from narrow AI. Here I mean AGI x-risk/s-risk vs narrow-AI x-risk/s-risk (possibly combined with malevolent actors or coordination issues). I worry about prioritising AGI when doing outreach because it may make the public dismiss the whole thing as a pipe dream. This happened to me a while ago.