Not sure how I missed this, but great post, and this seems super important and relatively neglected. In case others think it would be worth coining a term for this specifically, I proposed "p-risk" (after "p-zombies") in a tweet a few months back (might find it later).
More substantively, though, I think the greater potential concern is false negatives on consciousness, not false positives. The latter (attributing conscious experience to zombies) would be tragic, but not nearly as tragic as causing astronomical suffering in digital agents that we don't regard as moral patients because they don't act or behave like humans or other animals.
In direct violation of the instruction to put ideas in distinct comments, here's a list of ideas most of which are so underbaked they're basically raw:
From a Twitter thread a few days ago (lightly edited/formatted), with plenty of criticism in the replies there:
Probably batshit crazy but also maybe not-terrible megaproject idea: build a nuclear reactor solely/mainly to supply safety-friendly ML orgs with unlimited-ish free electricity to train models.
Looks like GPT-3 took something like $5M to train, and this recent 80k episode really drives home that energy cost is a big limiting factor for labs and a reason why only OpenAI/DeepMind are on the cutting edge.
In 2017, the smallest active U.S. nuclear re...
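For a rough sense of scale, here's a quick back-of-envelope on the electricity bill of a big training run; every number below is an illustrative assumption, not a sourced figure:

```python
# Back-of-envelope electricity cost of a large training run.
# All four inputs are illustrative assumptions, not sourced figures.
gpus = 10_000        # accelerators running concurrently (assumed)
kw_per_gpu = 0.4     # average draw per accelerator, in kW (assumed)
days = 30            # length of the training run (assumed)
usd_per_kwh = 0.10   # assumed electricity price

energy_kwh = gpus * kw_per_gpu * 24 * days
print(f"{energy_kwh:,.0f} kWh ~= ${energy_kwh * usd_per_kwh:,.0f} in electricity")
# -> 2,880,000 kWh ~= $288,000
```

Plugging in different assumptions (accelerator count, run length, local prices) changes the total a lot, which is part of why cheap dedicated power could matter for who can afford to train at the frontier.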
This was a top-level LW post from a few days ago aptly titled "Half-baked alignment idea: training to generalize" (that didn't get a ton of attention):
Thanks to Peter Barnett and Justis Mills for feedback on a draft of this post. It was inspired by Eliezer's Lethalities post and Zvi's response.
Central idea: can we train AI to generalize out of distribution? I'm thinking, for example, of an algorithm like the following:
Train a GPT-like ML system to predict the next word given a string of text only using, say, grade school-level writ...
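To make the idea concrete, here's a minimal sketch of the train-then-test-out-of-distribution loop. A toy recurrent character-level model stands in for "a GPT-like ML system," and both corpus files are hypothetical placeholders:

```python
# Minimal sketch: train a toy next-token model on "simple" text only,
# then measure its loss on harder, out-of-distribution text.
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Toy character-level LM standing in for a GPT-like system."""
    def __init__(self, vocab_size, dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.head = nn.Linear(dim, vocab_size)

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.head(h)  # (batch, seq, vocab) next-token logits

def batches(text, stoi, seq_len=32):
    ids = torch.tensor([stoi[c] for c in text])
    for i in range(0, len(ids) - seq_len - 1, seq_len):
        yield ids[i:i+seq_len].unsqueeze(0), ids[i+1:i+seq_len+1].unsqueeze(0)

# Hypothetical corpora: train only on grade school-level text, then
# evaluate on more advanced text the model never saw.
simple_text = open("grade_school.txt").read()   # in-distribution
advanced_text = open("advanced.txt").read()     # out-of-distribution
stoi = {c: i for i, c in enumerate(sorted(set(simple_text + advanced_text)))}

model = TinyLM(len(stoi))
opt = torch.optim.Adam(model.parameters(), lr=3e-4)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(3):
    for x, y in batches(simple_text, stoi):
        loss = loss_fn(model(x).transpose(1, 2), y)  # logits -> (batch, vocab, seq)
        opt.zero_grad()
        loss.backward()
        opt.step()

# Lower loss here = better out-of-distribution generalization.
with torch.no_grad():
    ood = [loss_fn(model(x).transpose(1, 2), y) for x, y in batches(advanced_text, stoi)]
    print("OOD loss:", torch.stack(ood).mean().item())
```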
Wish I could buy some nits from you lol
You're welcome, and likewise!
And just to clarify, there's a huge black box in my mind between "inflammation decreases" and "depressive symptoms decrease." I have no idea what the mechanisms are there!
Thanks, updating slightly in the direction of "not effective" but not a ton, mostly because I have a pretty high prior that anything that causally reduces systemic inflammation is effective for depression. It would be quite surprising to me if none of the following were true:
Thank you! :)
Added, thank you!
If anyone knows of others, let me know!
Thanks for pointing this out. I should (and plan to) look into this more by checking out the individual studies used in the Cochrane review. Worth noting that the review was of antioxidant supplementation (vitamins A, E, and C, plus selenium) in particular, rather than a multivitamin per se.
I wouldn't be shocked if any physiological harm can be traced to vitamin A and E supplementation in excess of, say, 300% of the RDA. It is a little concerning that the one I recommended contains 170% and 130%, respectively.
Also, speaking for myself alone, I think I'd be will...
Honestly, I don't have a great answer here other than that my overall impression/intuition is that it's probably bad to take arbitrarily high doses of these (unlike water-soluble vitamins), at least out of some sort of precautionary principle, and I recall seeing some anecdotes from others saying that they actively prefer taking only one or the other.
I don't think there's anything necessarily wrong with taking both (say, 1g per day of each), though.
I think we may not disagree; I was focusing on their impact on mental health in particular, whereas most studies, including the Cochrane one, look only at physiological outcomes. From Examine:
And I combine this with this meta-analysis suggesting that EPA is more responsible for this antidepressant effect.
I haven't looked into claims around heart health or mortality, so am agnostic there for now
To clarify, are you also interested in proposals concerning animal welfare?
Yes - this fits within our GHW portfolio. From the FAQ page:

Can I write about non-human animals?
Yes. Open Philanthropy is a major funder of work to improve farm animal welfare. If you want to write about a potential new cause area where the primary beneficiaries are non-human animals, please use the open prompt.
I'm not intending to, although it's possible I'm using the term "opportunity cost" incorrectly or in a different way than you. The opportunity cost of giving a dollar to animal welfare is indeed whatever that dollar could have bought in the longtermist space (or whatever else you think is the next best option).
However, it seems to me that at least some parts of longtermist EA, some of the time, to some extent, disregard the animal-suffering opportunity cost almost entirely. Surely the same error is committed in the opposite direction by hardcore animal advocates, but the asymmetry comes from the fact that this latter group controls a much smaller share of the financial pie.
Related to the funding point (note 4):
It seems important to remember that even if high-status (for lack of a more neutrally valenced term) longtermist interventions like AI safety aren't currently "funding constrained," animal welfare at large most definitely is. As just one clear example, an ACE report from a few months ago estimated that Faunalytics has room for more than $1M in funding.
That means there remains a very high (in absolute terms) opportunity cost to longtermist spending, because each dollar spent is one not being donated to an anim...
You're right that I didn't make a full, airtight argument, and severity of infection is indeed a crucial consideration. My extremely unqualified impression is that:
This is what my brain has decided on after being exposed to a bunch of unstructured information, so the error bars are very large, and I should probably update toward your POV.
Taking the Boltzmann brain example, isn't the issue that the premises that would lead to such a conclusion are incorrect, rather than the conclusion being "crazy" per se?
In many cases in philosophy, if we are honest with ourselves, we find that the reason we think the premises are incorrect is that we think the conclusion is crazy. We were perfectly happy to accept those premises until we learned what conclusions could be drawn from them.
Effective Altruism Georgetown will be interviewing Rob Wiblin for our inaugural podcast episode this Friday! What should we ask him?
You're welcome and thanks for the comment. I too want to preserve what is good, but I can't help but think that EAs tend to focus too much on preserving the good instead of reducing the bad, in large part because we tend to be relatively wealthy, privileged humans who rarely if ever undergo terrible suffering.
Yes, I believe things would change a lot. Hopefully we can find some way to induce this kind of cognitive empathy without making people actually suffer to get the firsthand experience.
Yes, this was a bit puzzling for me. Good to see it redeemed a bit. I could see the post being disliked for a few reasons:
Anyway, thanks for the reassuring comment!
Very much agreed. https://algosphere.org/ for those interested.
Thanks for all those references. Don't know how I missed the 80,000 Hours page on the topic, but that's a pretty big strike against it being ignored. Regarding your second point, I largely agree, but there are surely some MB interventions that don't require full-time generalists. For example, message testing and advertising can (I assume) be mostly outsourced with enough money.
Thanks so much for the feedback - just edited with the improved formatting. Regarding your thoughts:
Will the Zoom be recorded for those of us unable to join live? If so, would you be willing to post the link as a comment under this post?
Another type of intervention that could plausibly reduce the influence of malevolent actors is to decrease the intergenerational transfer of wealth and power. If competent malevolence both (i) increases one's capacity to gain wealth and/or power and (ii) is heritable, then we should expect malevolent families to amass increasing wealth and power. This could be one reason the global shift away from hereditary monarchies is associated with global peace (I suspect both of these claims are true, but am not certain).
For example, North Korea's Ki...