
VeryJerry

112 karma · Joined


Hmm, that opens up a lot of interesting conversation threads. I actually think some goals will be easier to align AI towards than others. For example, we've aligned some AIs towards winning at chess, and now they're better than any human. Obviously that kind of goal is much simpler than any values framework worth aligning AGI to, but I think sentientist values would be easier to instill than "human values" (although not in the case of LLMs; I think they're already basically "aligned" with human values, and we now need to shift them towards caring more about all sentient beings). On top of that, I think sentientist values would care enough about us and our values that a sentientist AGI would "go well" for us.

But I'm not even close to an expert, so that's all very tentative speculation.

Have you seen the moral ambition folks?

As far as I know, most current alignment work is going towards aligning AI with human values. If that's successful, then yay for us; but if we worked towards aligning AI with sentientist values (along the lines of "evidence, reason, and compassion for all sentient beings"), then we would also be in the group of valued beings. If people think that would go well for us, then I think it would make sense to think about ways to redirect more research towards aligning AI with all sentient beings, rather than just human values.

Take humans as an example. We are somewhat aligned with ourselves, but not with other animals, and that's been catastrophic for animals (see factory farms and industrial fishing). If we encountered aliens more powerful than us, with alignment like ours, they would not care about wiping us out (maybe a few of them would, but most wouldn't). But if those aliens were aligned with all sentient beings, they would care. Likewise, if very powerful aliens were somehow convinced by elephants to be aligned with elephants, we would still be on the chopping block along with every other species. So it's in everyone's interest to align them with all sentient beings, and in the process we get alignment with us as well.

I would be interested in hearing why people might think AI that went well for animals would not go well for humans; I can imagine scenarios like that, but they seem extremely unlikely to me.

I'm having a hard time putting what I mean into words; it's something like "alignment with all sentient beings gets alignment with humans for free, whereas alignment with humans does not get alignment with other sentient beings for free," plus "alignment with all sentient beings is simpler than alignment with humans in particular." I think the question I posed in my original comment would help determine whether someone agrees with the first part of this paragraph.

I think a second important claim is "if AGI goes well for animals, it'll go well for humans," which I think is extremely likely; I'm much more doubtful about it going well for animals if it goes well for humans.

 

We are animals, so AGI going well for animals lets us instill at least somewhat simpler values in AI. But many people will want humans to be privileged in AI values, which is not only more likely to exclude non-humans, but is also potentially more complicated and more likely to fail.

One good technique for listening to the part of you that's struggling is internal double crux. Note that it has to be able to go both ways; it's not a new way to override the elephant.

Sweet, that would be really helpful! I recently read Gwern's post on modafinil, but found it wasn't that helpful for understanding the benefits of taking it, or how to use it effectively (dose, schedule, etc.) 😅 I tend to get pretty good sleep, and the main takeaway I got from that post was that it mainly lets you need less sleep.

Do you have any insight into using psychedelics for good? I know Gwern's LSD microdosing self-experiment came up negative, and I haven't found microdosing helpful for focusing on a job I don't like even though it pays well (it actually made things worse), so I haven't tried it since, either for that or for something I actually care about (I'm actively looking for a new role that I'll care more about, and that may pay better). But I know full trips sometimes help me focus for the following week or two. I'm also curious about using low-dose DMT while working; I found it slightly helpful the one time I tried it and may try again, but figured I'd ask if you know.

Are there any guides for how to use stimulants effectively for good? Assuming someone could access pretty much whichever ones they wanted, which should they use, and how should they go about dosing? And/or, what strategy should they use to figure out what works best for them? E.g. should they take a blind, randomized dose and drug and take notes for a while, to pick the best one for them? Or start with Adderall and slowly increase the dosage until they hit diminishing returns, then try modafinil and ramp up until diminishing returns, and repeat for various others? Or should they cycle different ones to avoid tolerance?

I know that for me, caffeine amplifies my energy but is still unguided: I find myself distracted by the same things, just with more energy and focus. And I know the "correct" answer is to talk to a medical professional, but I prefer to avoid the medical system as much as possible, it's expensive, and anyway, how would a medical professional approach those questions?

If you're worried about stealth, they make a vape that looks like a pen and can write too: https://www.pulsarshop.com/products/510-dl-scribe-vape-pen

I'll have to think about a better way to phrase my point, since I still think the sheer amount of suffering and death far outweighs human issues. Almost all animals we kill at the very least have a bad death, and ~94% of the ones we farm (~10% of the ones we kill) also have a bad life. We factory farm roughly as many animals per year as the total number of humans who have ever lived (possibly about a third as many, possibly almost twice as many). Multiply that out by the number of years we've been doing this, and I still don't think any human problem comes close to being as bad.

Good point, the way I worded that was wrong, since we kill more animals than we farm. Looking into it more now, it looks like the 99% figure applies to the US; according to Our World in Data (linked later in this comment), the global estimate including farmed fish is more likely 94%. It's also not more animals per year than all humans ever born; apparently it's about on par.

According to https://www.prb.org/articles/how-many-people-have-ever-lived-on-earth/, "About 117 billion members of our species have ever been born on Earth".

According to Our World in Data and the Sentience Institute, we factory farm 111 billion animals per year, though "this has wide uncertainty, ranging from 39 to 216 billion" (https://ourworldindata.org/how-many-animals-are-factory-farmed). In other words, on the low end the farmed total matches all humans ever born roughly every three years, which still dwarfs human issues but not by as much; on the high end it happens almost twice per year and is an even worse problem.
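As a quick sanity check of those ratios (a sketch; the figures are the PRB and Our World in Data estimates cited above, and the variable names are mine):

```python
# How many years of factory farming does it take to match the total number
# of humans who have ever been born (~117 billion, per PRB)?
humans_ever_born = 117e9

# Animals factory farmed per year: central estimate and uncertainty range,
# per Our World in Data / Sentience Institute.
estimates = {"low": 39e9, "central": 111e9, "high": 216e9}

for label, per_year in estimates.items():
    years = humans_ever_born / per_year
    print(f"{label}: {years:.1f} years to match all humans ever born")
```

This reproduces the comparison in the comment: roughly every three years at the low end, about on par annually at the central estimate, and almost twice per year at the high end.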

Once you factor in wild fishing, it's even more clear. And the method of slaughter for sea fish (suffocation, or being crushed to death in a pile) does not seem meaningfully better to me than a factory farm slaughterhouse, so the connotation still applies imo.

I agree that my perspective is likely to turn people away, and I don't lead with it in conversations with the general public, but I do still think it's true. The problem is multiplied by every year we let it continue; it's not just a one-time <torture as many animals as all humans ever> event. Effective messaging to the public is super important, but that's not what I was trying to do with my comment. I was trying to highlight a reality so that people who really care about reality can use it to orient and decide what to focus resources on.
