Quick takes

A reflection on the posts I have written in the last few months, elaborating on my views

In a series of recent posts, I have sought to challenge the conventional view among longtermists that the empowerment or preservation of the human species should be the chief goal of AI policy. In my opinion, this view is likely rooted in a bias that automatically favors human beings over artificial entities, sidelining the idea that future AIs might create equal or greater moral value than humans and treating that alternative perspective with unwarranted skepticism. I recognize that my position is controversial and likely to remain unpopular among effective altruists for a long time. Nevertheless, I believe it is worth articulating at length, as I see it as a straightforward application of standard, common-sense utilitarian principles that merely leads to an unpopular conclusion. I intend to continue elaborating on my arguments in the coming months.

My view follows from a few basic premises. First, future AI systems are quite likely to be moral patients. Second, we shouldn't discriminate against them based on arbitrary distinctions, such as their being instantiated on silicon rather than carbon, or having been created through deep learning rather than natural selection. If we insist on treating AIs fundamentally differently from a human child or adult, for example by regarding them merely as property to be controlled or denying them the freedom to pursue their own goals, then we should identify a specific ethical reason for doing so that goes beyond pointing to their non-human nature.

Many people have argued that consciousness is the key quality separating humans from AIs, rendering any AI-based civilization morally insignificant compared to ours. They maintain that consciousness has relatively narrow boundaries, perhaps largely confined to biological organisms, and would only arise in artificial systems under highly specific conditions.
It's the first official day of the AI Action Summit, and thus it's also the day that the Seoul Commitments (made by sixteen companies last year to adopt an RSP/safety framework) have come due. I've made a tracker/report card for each of these policies at www.seoul-tracker.org. I plan to keep this updated for the foreseeable future as policies get released or modified.

Don't take the grades too seriously; think of them as one opinionated take on the quality of the commitments as written and, where there is evidence, as implemented. Do feel free to share feedback if anything you see surprises you, or if you think the report card misses something important.

My personal takeaway is that both compliance and quality for these policies are much worse than I would have hoped. Many people's theories of change for these policies gesture at a race to the top, where companies are eager to outcompete each other on safety to win talent and public trust, but I don't sense much urgency or rigor here. Another theory of change is that this is a sort of laboratory for future regulation, where companies can experiment now with safety practices and the best ones can later be codified. But most of the diversity between policies here is in how vague they can be while claiming to manage risks :/ I'm really hoping this changes as AGI gets closer and companies feel they need to do more to prove to governments and the public that they can be trusted. Part of my hope is that this report card makes clear to outsiders that not all voluntary safety frameworks are equally credible.
How might EA-aligned orgs in global health and wellbeing need to adapt their cost-effectiveness calculations given the slash-and-burn campaign currently underway against US foreign aid? Has anyone tried gaming out what different scenarios of funding loss look like (e.g., one where most of the destruction is reversed by the courts, one where that reversal is only partial, or one where nothing happens and the days are numbered for things like PEPFAR)? Since US foreign aid is so varied, I imagine that's a tall order, but I've been thinking about this quite a bit lately!
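As a rough illustration of the kind of scenario-gaming this question asks about, here is a minimal sketch in Python. All scenario names, probabilities, funding levels, and the cost-per-outcome figure are hypothetical placeholders, not real PEPFAR or USAID numbers; the point is only the structure of weighting outcomes by scenario probability.

```python
# Hypothetical scenario model for funding loss; every number below is a
# made-up placeholder, not a real estimate.
scenarios = {
    # name: (probability, fraction of current funding retained)
    "courts reverse most cuts": (0.30, 0.90),
    "partial reversal": (0.45, 0.50),
    "cuts stand, programs wind down": (0.25, 0.10),
}

baseline_funding = 100_000_000  # hypothetical annual funding in USD
cost_per_life_saved = 3_000     # hypothetical cost-effectiveness in USD

# Expected funding is the probability-weighted average across scenarios.
expected_funding = sum(p * kept * baseline_funding for p, kept in scenarios.values())
print(f"Expected funding: ${expected_funding:,.0f}")
print(f"Expected lives saved: {expected_funding / cost_per_life_saved:,.0f}")

# Per-scenario outcomes show how much the answer depends on the tails.
for name, (p, kept) in scenarios.items():
    lives = kept * baseline_funding / cost_per_life_saved
    print(f"  {name} (p={p:.2f}): {lives:,.0f} lives saved")
```

A real version would need per-program numbers rather than one aggregate, since (as noted above) US foreign aid is so varied, but even this toy structure makes the key sensitivity visible: the expected answer is dominated by how much weight you put on the worst scenario.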
Screwworm is a flesh-eating maggot! I had skimmed past many posts like this, assuming it was some kind of stomach worm, or related to the suffering of wild worms (not that I'm opposed to either of those causes; they just don't grab my attention as strongly).
Offer subject to stopping arbitrarily at some point (not sure exactly how many I'm willing to do): give me ChatGPT Deep Research queries and I'll run them. My asks are that:

1. You write out exactly what you want the prompt to be, so I can just copy and paste it in.
2. Feel free to request a specific model (I think the options are o1, o1-pro, o3-mini, and o3-mini-high), but be OK with me downgrading to o3-mini.
3. Be cool with me very hastily answering the inevitable set of follow-up questions that always get asked (seems unavoidable for whatever reason). I might say something like "all details are specified above; please use your best judgement".