I do independent research on EA topics. I write about whatever seems important, tractable, and interesting (to me). Lately, I mainly write about EA investing strategy, but my attention span is too short to pick just one topic.
I have a website: https://mdickens.me/. Most of the content on my website gets cross-posted to the EA Forum.
My favorite things that I've written: https://mdickens.me/favorite-posts/
I used to work as a software developer at Affirm.
A time cost of $0.0417/day, given 7.5 seconds/day and a $20/hour value of time.
Nitpick: I just timed myself taking creatine and it took me 42 seconds.
(My process consists of: take creatine and glass out of cabinet; scoop creatine into glass; pour tap water into glass; drink glass; put creatine and glass back into cabinet.)
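For concreteness, here's the arithmetic behind that figure and what it comes to at my measured 42 seconds (a quick sketch; the $20/hour value of time is the assumption from the estimate above):

```python
# Time cost of a daily habit, in dollars per day.
# Assumes the $20/hour value of time from the estimate above.
HOURLY_WAGE = 20.0  # $/hour

def daily_time_cost(seconds_per_day: float) -> float:
    """Dollar cost per day of a habit that takes `seconds_per_day` seconds."""
    return seconds_per_day / 3600 * HOURLY_WAGE

print(round(daily_time_cost(7.5), 4))  # 0.0417 -> the figure quoted above
print(round(daily_time_cost(42), 4))   # 0.2333 -> at my measured 42 seconds
```

Even at 42 seconds/day, the time cost is only about $0.23/day, or roughly $85/year.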
Agreed that creatine passes a cost-benefit analysis.
Those sound like reasonable explanations for why psychotherapy isn't as widespread as it should be. It looks to me like most of these reasons wouldn't apply to AMF. Training new psychotherapists takes years and tens of thousands of dollars (at developing-world wages); getting more malaria nets just requires buying more $5 nets, and distributing nets is much easier than distributing psychotherapists. So #1–3 and #6 don't carry over (or at least not to nearly the same extent). #4 doesn't seem relevant to my original question, so I think #5 is the only one that carries over: recipients might not know that they should be concerned about malaria.
Why does distributing malaria nets work? Why hasn't everyone bought a bednet already?
I don't know why (I thought it was a good post), but I have some guesses:
I plan on donating to PauseAI, but I've put considerable thought into reasons not to donate.
I gave some arguments against slowing AI development (plus why I disagree with them) in this section of my recent post, so I won't repeat those.
Yes, that's also fair. Conflicts of interest are a serious concern, and this might partially explain why big funders generally don't support efforts to pause AI development.
I think it's ok to invest a little bit in public AI companies, but not so much that you'd care if those companies took a hit due to stricter regulations, etc.
I think the position I'm arguing for is basically the standard position among AI safety advocates, so I haven't really scrutinized it. But basically: (many) animals evolved to experience happiness because it was evolutionarily useful to do so. AIs are not evolved, so it seems likely that, by default, they would not be capable of experiencing happiness. This could be wrong: it might be that happiness is a byproduct of some sort of information processing, and that sufficiently complex reinforcement learning agents necessarily experience happiness (or something like that).
Also: According to the standard story, where an unaligned AI has some optimization target and then kills all humans in pursuit of that target (e.g., a paperclip maximizer), it seems unlikely that such an AI would experience much happiness (granting that it's capable of happiness), because its own happiness is not the optimization target.
(Note: I realize I'm ignoring some parts of your comment; I'm intentionally responding only to the central point so my response doesn't get too frayed.)
I think you're right about this; you've changed my mind (toward greater uncertainty).
This seems to depend on a conjunction of several strong assumptions: (1) AI alignment is basically easy; (2) there will be a slow takeoff; (3) the people running AI companies are open to persuasion, and "make AI safety seem cool" is the best kind of persuasion.
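As a toy illustration of how a conjunction like this compounds (the probabilities below are made up for illustration, not my actual estimates):

```python
# Toy illustration: a plan that needs several strong assumptions to all hold
# is much less likely to work than any single assumption suggests.
# These probabilities are invented for illustration, not estimates.
p_alignment_easy = 0.5  # (1) AI alignment is basically easy
p_slow_takeoff = 0.6    # (2) there will be a slow takeoff
p_persuadable = 0.5     # (3) AI company leaders can be persuaded this way

p_plan_works = p_alignment_easy * p_slow_takeoff * p_persuadable
print(p_plan_works)  # 0.15 -> the plan needs all three to hold at once
```

Even when each assumption is individually plausible, the conjunction quickly falls well below 50%.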
But then again, I don't think pause protests are going to work; I'm just trying to pick whichever bad plan seems least bad.