I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me).
I have a website: https://mdickens.me/. Much of the content on my website gets cross-posted to the EA Forum, but my website also has some non-EA stuff that doesn't get cross-posted.
I used to work as a software developer at Affirm.
According to a survey of Congressional staffers, they care about personalized emails more than phone calls, and phone calls (much) more than form emails:
https://mdickens.me/assets/images/Congress-influence.png
(the way my brain works is that I'd much rather spend 5 minutes writing a personalized email than spend 5 minutes looking up the name of my representative and a relevant law to reference. YMMV)
I realize this comment makes me sound lazy (that's because I am lazy), but the way the doc is presented gives me too much work to do, in a way that's avoidable.
I'm not going to call my representative because I hate phone calls, so I'll just focus on the email part. The email template requires me to look up my senator's name and look up relevant laws from a list. I'm too lazy to do that. I would send a letter if there were a website where I could put in my state and it would auto-fill my senator's name and the list of relevant laws. It would not be very hard to build a web form to do that.
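To illustrate how little such a form would need to do, here is a rough sketch (the senator names and bill references below are placeholders I made up, not real data; a real version would load a current directory of senators and the actual list of relevant laws):

```typescript
// Minimal sketch of a "fill in my senator for me" email form.
// All data here is placeholder/example data, not a real directory.

const SENATORS_BY_STATE: Record<string, string[]> = {
  CA: ["Senator Example One", "Senator Example Two"],
  NY: ["Senator Example Three", "Senator Example Four"],
  // ...one entry per state, loaded from an up-to-date source in practice
};

const RELEVANT_BILLS = ["[relevant bill placeholder]", "[another bill placeholder]"];

function buildEmail(state: string, personalNote: string): string {
  // Look up the user's senators; fall back to a generic salutation if unknown.
  const senators = SENATORS_BY_STATE[state] ?? ["[your senator]"];
  return [
    `Dear ${senators.join(" and ")},`,
    "",
    personalNote,
    "",
    `Please support ${RELEVANT_BILLS.join(" and ")}.`,
  ].join("\n");
}

// The user only has to type their state and a personal note:
console.log(buildEmail("CA", "As a constituent, I care about this issue because..."));
```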
ASPCA has a web form that requires less work on my part. It still requires me to fill out a bunch of personal info, but I believe that part is necessary. I'm not sure it auto-fills the rep's name, though, and it doesn't give any option to customize the message.
https://ifanyonebuildsit.com/act/letter (unrelated to farm bill) is a good example of a page that auto-fills as much information as possible.
Here are three other responses that I didn't see addressed. I only skimmed the original article so apologies if these were addressed there.
A fourth point, which the article did address:
First, you could reinterpret the intuition as entirely instrumental. Perhaps variety matters only because it tends to support exploration, learning, and other welfare-promoting goods. If so, then our intuition that Variety is better than Homogeneity is a misfire, attributing intrinsic value to something that has only instrumental value.
[the article's reply:]
Imagine some truly wonderful moment — of falling in love, or a flash of creative insight, or communing with the spiritual. Suppose that this moment is far more wonderful than anything that humanity has experienced to date: you or I would give up years of ordinary happy life just to experience such a peak of joy. But now suppose that this moment is just ever so slightly less good than some other moment that could be produced, with the same resources. So, instead, the world consists of a near-endless reliving of that other moment, and that first moment is never experienced at all.
To me, this feels like a tremendous loss. The world has omitted something wonderful that could have been created.
This doesn't seem like a satisfying response to me. I could say that your feeling of tremendous loss stems from the instrumental value of novel experiences; you haven't done anything to dispel that argument.
I would consider it from a first-person POV. Suppose I'm given the choice to experience something amazing or something a bit less amazing. If I experienced either thing in the past, I have no memory of it (to guarantee that it's a proper replication of the amazing experience, and not a version altered by memory). I would pick the amazing experience, not the lesser one.
IMO a "Scientist AI" is more promising in a world where we first get a global ban on superintelligent AI, or something else that prevents anyone from building "high-impact AI". Then AI developers coalesce around "Scientist AI" as a safe approach and develop it carefully.
(I still think a Scientist AI would result in everyone dying, but at least it's a better starting point)
Right now I would give very little marginal philanthropic money to compute-based experiments. AI companies already do a lot of those, and I don't expect them to work anyway. ML experiments are not addressing the fundamental barriers to solving AI misalignment. A core problem is that experiments can't deal with the sharp left turn.
(I would make an exception for CaML-style alignment-to-animals work, but that's not about AI safety as it's normally construed.)
Superforecasters tend to believe x-risk isn't a big deal. Regardless of whether they're using reasonable procedures, they're getting the wrong object-level answer in this case. FRI's consulting plausibly made the scaling policies worse. Hard to say without knowing more details.
(I'm thinking particularly of the XPT, which is from 2023, so it may be outdated at this point. But that tournament had superforecasters predicting only a 1% chance of AI extinction, which is ludicrous and should not be used as the basis for decisions.)
if you have recs on things to read, would be useful :)
Since I'm biased on the matter, I will start by linking the posts I've written:
Eric Neyman and Zach Stein-Perlman write recommendations on AI risk advocacy. Most of their work is non-public, but Eric wrote Consider donating to Alex Bores, author of the RAISE Act.
Zvi wrote The Big Nonprofits Post [2024] and 2025. He's a grantmaker for SFF, which is my favorite of the big grantmakers.
Coefficient Giving and Longview Philanthropy often write about their grantmaking decisions (although not in as much detail as I'd like). I tend to have some pretty big disagreements with them, but they're still worth reading.
Most reasoning on grantmaking/donations happens in private, so there's not a whole lot to read. If you broaden the question to writings on general strategy (not just donations), there's a ton of stuff worth reading. I will just link two that align best with my personal views:
I basically agree, although "dangerous approaches are tamped down" is doing most of the work here IMO. By default (i.e., no tamping-down), I expect the situation with a weakly superhuman Scientist AI to be:
(I think Bengio would agree that this is a concern, and would agree that we need global coordination on AI safety to make this work.)