
Jacob_Watts

Human (a type of hominid) @ Earth (our home)
49 karma · Ephraim, UT 84627, USA

Bio


Pause AI / Veganish / Generally Try to Be an Ally to Other People

Informal monkey man who is excited to help other hominids. 

Let's do a bunch of good stuff and have fun, gang!

I am a fan of anarchy, nonviolent civil disobedience, compassion, charity, and meeting each other's needs.

I love you!

How others can help me

Sometimes I worry about not having enough money to afford stuff like shelter, and I think that makes me less productive. Some reassurance that people will at least try to meet my material needs in times of crisis is something that appeals to me. I know "you don't know me like that", but I figured it was better to be honest than polite on this sort of thing lol.

I am also interested in finding collaborators to make and learn stuff with. I might put a link here to relevant projects and stuff for that, but in the meantime, suffice it to say that I am a creative guy with a mix of skills and interests in many cause areas.

How I can help others

I'm not an expert in anything really, but I know a little about a lot and would be happy to provide input where I can. Currently, I think a lot about how to make AI go well and I am happy to red-team your plans or brainstorm with you.

I can also try to help teach people. 

Comments
12

Note that the cost-effectiveness of epidemic/pandemic preparedness I got of 0.00236 DALY/$ is still quite high.


Point well-taken. 

I appreciate you writing and sharing those posts trying to model and quantify the impact of x-risk work and questioning the common arguments given for astronomical EV.

I hope to take a look at those in more depth sometime and critically assess what I think about them. Honestly, I am very intrigued by engaging with well-informed disagreement around the astronomical EV of x-risk focused approaches. I find your perspective here interesting and I think engaging with it might sharpen my own understanding.

:)

 

Interesting! This is a very surprising result to me because I am mostly used to hearing about how cost-effective pandemic prevention is, and this estimate seems to disagree with that.

Shouldn't this be a relatively major point against prioritizing biorisk as a cause area? (At least without taking into account strong longtermism and the moral catastrophe of extinction.)
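Just to put that figure in more familiar units (a quick inversion on my part, assuming the estimate is meant as a straightforward average):

$$\frac{1}{0.00236\ \text{DALY}/\$} \approx \$424\ \text{per DALY averted}$$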

Fictional Characters:

I would say I agree that fictional characters aren't moral patients. That's because I don't think the suffering/pleasure of fictional characters is actually experienced by anyone.

I take your point that you don't think that the suffering/pleasure portrayed by LLMs is actually experienced by anyone either.

I am not sure how deep I really think the analogy is between what the LLM is doing and what human actors or authors are doing when they portray a character. But I can see some analogy and I think it provides a reasonable intuition pump for times when humans can say stuff like "I'm suffering" without it actually reflecting anything of moral concern.

Trivial Changes to Deepnets:

I am not sure how to evaluate your claim that only trivial changes to the NN are needed to have it negate itself. My sense is that this would probably require more extensive retraining if you really wanted to get it to never role-play that it was suffering under any circumstances. This seems at least as hard as other RLHF "guardrails" tasks unless the approach was particularly fragile/hacky.

Also, I'm just not sure I have super strong intuitions about that mattering a lot because it seems very plausible that just by "shifting a trivial mass of chemicals around" or "rearranging a trivial mass of neurons" somebody could significantly impact the valence of my own experience. I'm just saying, the right small changes to my brain can be very impactful to my mind.

My Remaining Uncertainty:

I would say I broadly agree with the general notion that the text output by LLMs probably doesn't correspond to an underlying mind with anything like the sorts of mental states that I would expect to see in a human mind that was "outputting the same text".

That said, I think I am less confident in that idea than you are, and I maybe don't find the same arguments/intuition pumps as compelling. I think your take is reasonable and all, I just have a lot of general uncertainty about this sort of thing.

Part of that is just that I think it would be brash of me in general to not at least entertain the idea of moral worth when it comes to these strange masses of "brain-tissue inspired computational stuff" which are totally capable of all sorts of intelligent tasks. Like, my prior on such things being in some sense sentient or morally valuable is far from 0 to begin with just because that really seems like the sort of thing that would be a plausible candidate for moral worth in my ontology.

And also I just don't feel confident at all in my own understanding of how phenomenal consciousness arises / what the hell it even is. Especially with these novel sorts of computational pseudo-brains.

So, idk, I do tend to agree that the text outputs shouldn't just be taken at face value or treated as equivalent in nature to human speech, but I am not really confident that there is "nothing going on" inside the big deepnets.

There are other competing factors at this meta-uncertainty level. Maybe I'm too easily impressed by regurgitated human text. I think there are strong social / conformity reasons to be dismissive of the idea that they're conscious. etc.

Usefulness as Moral Patients:

I am more willing to agree with your point that they can't be "usefully" moral patients. Perhaps you are right about the "role-playing" thing, and whatever mind might exist in GPT produces the text stream more as a byproduct of whatever it is concerned about than as a "true monologue about itself". Perhaps the relationship it has to its text outputs is analogous, at some deep level, to the relationship an actor has to a character they are playing. I don't personally find the "simulators" analogy compelling enough to really think this, but I permit the possibility.

We are so ignorant about the nature of GPTs' minds that perhaps there is not much we can really even say about what sorts of things would be "good" or "bad" with respect to them. And all of our uncertainty about whether/what they are experiencing almost certainly makes them less useful as moral patients on the margin.

I don't intuitively feel great about a world full of nothing but servers constantly prompting GPTs with "you are having fun, you feel great" just to have them output "yay" all the time. Still, I would probably rather have that sort of world than an empty universe. And if someone told me they were building a data center where they would explicitly retrain and prompt LLMs to exhibit suffering-like behavior/text outputs all the time, I would be against that.

But I can certainly imagine worlds in which these sorts of things wouldn't really correspond to valenced experience at all. Maybe the relationship between a NN's stream of text and any hypothetical mental processes going on inside them is so opaque and non-human that we could not easily influence the mental processes in ways that we would consider good.

LLMs Might Do Pretty Mind-Like Stuff:

On the object level, I think one of the main lines of reasoning that makes me hesitant to more enthusiastically agree that the text outputs of LLMs do not correspond to any mind is my general uncertainty about what kinds of computation are actually producing those text outputs and my uncertainty about what kinds of things produce mental states.

For one thing, it feels very plausible to me that a "next token predictor" IS all you would need to get a mind that can experience something. Prediction is a perfectly respectable kind of thing for a mind to do. Predictive power is pretty much the basis of how we judge which theories are true scientifically. Also, plausibly it's a lot of what our brains are actually doing and thus potentially pretty core to how our minds are generated (cf. predictive coding).

The fact that modern NNs are "mere next token predictors" on some level doesn't give me clear intuitions that I should rule out the possibility of interesting mental processes being involved.

Plus, I really don't think we have a very good mechanistic understanding of what sorts of "techniques" the models are actually using to be so damn good at predicting. Plausibly none of the algorithms being implemented or "things happening" bear any similarity to the mental processes I know and love, but plausibly there is a lot of "mind-like" stuff going on. Certainly brains have offered design inspiration, so perhaps our default guess should be that "mind-stuff" is relatively likely to emerge.

Can Machines Think:

The Imitation Game proposed by Turing attempts to provide a more rigorous framing for the question of whether machines can "think".

I find it a particularly moving thought experiment if I imagine that the machine is trying to imitate a specific loved one of mine.

If there were a machine that could nail the exact I/O patterns of my girlfriend, then I would be inclined to say that whatever sort of information processing occurs in my girlfriend's brain to create her language capacity must also be happening in the machine somewhere.

I would also say that if all of my girlfriend's language capacity were being computed somewhere, then it is reasonably likely that whatever sorts of mental stuff goes on that generates her experience of the world would also be occurring.

I would still consider this true without having a deep conceptual understanding of how those computations were performed. I'm sure I could even look at how they were performed and not find it obvious in what sense they could possibly lead to phenomenal experience. After all, that is pretty much my current epistemic state in regards to the brain, so I really shouldn't expect reality to "hand it to me on a platter".

If there was a machine that could imitate a plausible human mind in the same way, should I not think that it is perhaps simulating a plausible human in some way? Or perhaps using some combination of more expensive "brain/mind-like" computations in conjunction with lazier linguistic heuristics?

I guess I'm saying that there are probably good philosophical reasons for having a null hypothesis in which a system which is largely indistinguishable from a human mind should be treated as though it is doing computations equivalent to a human mind. That's pretty much the same thing as saying it is "simulating" a human mind. And that very much feels like the sort of thing that might cause consciousness.

I appreciate you taking the time to write out this viewpoint. I have had vaguely similar thoughts in this vein. Tying it into Janus's simulators and the stochastic parrot view of LLMs was helpful. I would intuitively suspect that many people would have an objection similar to this, so thanks for voicing it.

If I am understanding and summarizing your position correctly, it is roughly that:

The text output by LLMs is not reflective of the state of any internal mind in a way that mirrors how human language typically reflects the speaker's mind. You believe this is implied by the fact that the LLM cannot be effectively modeled as a coherent individual with consistent opinions; there is not actually a single "AI assistant" under Claude's hood. Instead, the LLM itself is a difficult-to-comprehend "shoggoth" system, and that system sometimes falls into narrative patterns in the course of next-token prediction which cause it to produce text in which characters/"masks" are portrayed. Because the characters being portrayed are only patterns that the next-token predictor follows in order to predict next tokens, it doesn't seem plausible to model them as reflecting an underlying mind. They are merely "images of people" or something, like a literary character or one portrayed by an actor. Thus, even if one of the "masks" says something about its preferences or experiences, this probably doesn't correspond to the internal states of any real, extant mind in the way that we would normally expect to be true when humans talk about their preferences or experiences.

Is that a fair summation/reword?

Adjacent to this point about how we could improve EA communication, I think it would be cool to have a post that explores how we might effectively use, like, Mastodon or some other method of dynamic, self-governed federation to get around this issue. I think this issue goes well beyond just the EA forum in some ways lol.

Good suggestion! Happy Ramadan! <3

Just for the sake of feedback, I think this makes me personally less inclined to post the ideas and drafts I have been toying with because it makes me feel like they are going to be completely steamrolled by a flurry of posts by people with higher status than me and it wouldn't really matter what I said.

I don't know who your target demo here is, and it sounds like "flurry of posts by high status individuals" might have been your main intention anyway. However, please note that this doesn't necessarily help you very much if you are trying to cultivate more outsider perspectives.

In any case, you're probably right that this will lead to more discussion and I am interested to see how it shakes out. I hope you'll write up a review post or something to summarize how the event went, because it's going to be hard to follow that many posts about different topics and the corresponding discussion they each generate.

Sure thing! I don't think it'll be all that polished or comprehensive since it is mostly intended to help me straighten out my reasoning, but I would be more than happy to share it. 

Thank you for the survey info! I was favorably surprised by some of those results.

Thank you so much! This is exactly the sort of thing I am looking for. I'm glad there is high quality work like this being done to advance strategic clarity surrounding TAI and I appreciate you sharing your draft.
