I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.
I have a website: https://mdickens.me/. Most of the content there gets cross-posted to the EA Forum.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
I plan on donating to PauseAI, but I've put considerable thought into reasons not to donate.
I gave some arguments against slowing AI development (plus why I disagree with them) in this section of my recent post, so I won't repeat those.
Yes, that's also fair. Conflicts of interest are a serious concern, and this might partially explain why big funders generally don't support efforts to pause AI development.
I think it's okay to invest a little bit in public AI companies, but not so much that you'd care if those companies took a hit due to stricter regulations, etc.
I think the position I'm arguing for is basically the standard position among AI safety advocates, so I haven't really scrutinized it. But basically, (many) animals evolved to experience happiness because it was evolutionarily useful to do so. AIs are not evolved, so it seems likely that by default they would not be capable of experiencing happiness. This could be wrong: it might be that happiness is a byproduct of some sort of information processing, and that sufficiently complex reinforcement learning agents necessarily experience happiness (or something like that).
Also: according to the standard story in which an unaligned AI has some optimization target and kills all humans in pursuit of that target (e.g. a paperclip maximizer), it seems unlikely that this AI would experience much happiness (granting that it's capable of happiness), because its own happiness is not the optimization target.
(Note: I realize I am ignoring some parts of your comment, I'm intentionally only responding to the central point so my response doesn't get too frayed.)
I had remembered that the pause letter talked about extinction. Reading again, it doesn't use the word extinction; it does say "Should we risk loss of control of our civilization?" which is similar but somewhat ambiguous. CAIS' Statement on AI Risk would have been a better example.
Thanks for the comment! Disagreeing with my proposed donations is the most productive sort of disagreement. I also appreciate hearing your beliefs about a variety of orgs.
A few weeks ago, I read your back-and-forth with Holly Elmore about the "working with the Pentagon" issue. This is what I thought at the time (IIRC):
I re-read your post and its comments just now and I didn't have any new thoughts. I feel like I still don't have great clarity on the implications of the situation, which troubles me, but by my reading, it's just not as big a deal as you think it is.
General comments:
I believe the "consciousness requires having a self-model" view is the only coherent basis for rejecting animals' moral patienthood, but I don't understand the argument for why the view is supposedly true. Why would consciousness (or moral patienthood) require having a self-model? I have never seen Eliezer or anyone else attempt to defend this position.
These numbers are approximately the same. I don't understand how you get that 5/6 of the work comes from volunteering / voluntary underpayment; did I do the calculation wrong?