You’re correct, Fai - Jeff is not a co-author on the paper. The other participants - Patrick Butlin, Yoshua Bengio, and Grace Lindsay - are.
What's something about you that might surprise people who only know your public, "professional EA" persona?
I suggest that “why I don’t trust pseudonymous forecasters” would be a more appropriate title. When I saw the title, I expected an argument that would apply to all/most forecasting, but this worry is only about a particular subset.
Unsurprisingly, I agree with a lot of this! It's nice to see these principles laid out clearly and concisely:
You write
AI welfare is potentially an extremely large-scale issue. In the same way that the invertebrate population is much larger than the vertebrate population at present, the digital population has the potential to be much larger than the biological population in the future.
Do you know of any work that estimates these sizes? There are various places that people have estimated the 'size of the future' including potential digital moral patients in the long run, but do you know of anything that estimates how many AI moral patients there could be by (say) 2030?
No, but this would be useful! Some quick thoughts:
A lot depends on our standard for moral inclusion. If we think that we should include all potential moral patients in the moral circle, then we might include a large number of near-term AI systems. If, in contrast, we think that we should include only beings with at least, say, a 0.1% chance of being moral patients, then we might include a smaller number.
With respect to the AI systems we include, one question is how many there will be. This is partly a question about moral individuation. Insofar as di
Hi Timothy! I agree with your main claim that "assumptions [about sentience] are often dubious as they are based on intuitions that might not necessarily ‘track’ sentience", shaped as they are by potentially unreliable evolutionary and cultural factors. I also think it's a very important point! I commend you for laying it out in a detailed way.
I'd like to offer a piece of constructive criticism if I may. I'd add more to the piece that answers, for the reader:
Hi Brian! Thanks for your reply. I think you're quite right to distinguish between your flavor of panpsychism and the flavor I was saying doesn't entail much about LLMs. I'm going to update my comment above to make that clearer, and sorry for running together your view with those others.
Ah, thanks! Well, even if it wasn't appropriately directed at your claim, I appreciate the opportunity to rant about how panpsychism (and related views) don't entail AI sentience :)
Unlike the version of panpsychism that has become fashionable in philosophy in recent years, my version of panpsychism is based on the fuzziness of the concept of consciousness. My view involves attributing consciousness to all physical systems (including higher-level ones like organisms and AIs) to the degree they show various properties that we think are important for consciousness, such as perhaps a global workspace, higher-order reflection, learning and memory, intelligence, etc. I'm a panpsychist because I think at least some attributes of consciou...
The Brian Tomasik post you link to considers the view that fundamental physical operations may have moral weight (call this view "Physics Sentience").
[Edit: see Tomasik's comment below. What I say below is true of a different sort of Physics Sentience view like constitutive micropsychism, but not necessarily of Brian's own view, which has somewhat different motivations and implications]
But even if true, [many versions of] Physics Sentience [but not necessarily Tomasik's] doesn't have straightforward implications about what high-level systems, like or...
I like it! I think one thing the post itself could have been clearer on is that reports could be indirect evidence for sentience, in that they are evidence of certain capabilities that are themselves evidence of sentience. To give an example (though it’s still abstract), the ability of LLMs to fluently mimic human speech —> evidence for capability C —> evidence for sentience. You can imagine the same thing for parrots: ability to say “I’m in pain” —> evidence of learning and memory —> evidence of sentience. But what they aren’t are reports of sentience.
so maybe at the beginning: aren’t “strong evidence” or “straightforward evidence”
Thanks for the comment. A couple replies:
I want to clarify that these are examples of self-reports about consciousness and not evidence of consciousness in humans.
Self-report is evidence of consciousness in a Bayesian sense (and in common parlance): in a wide range of scenarios, if a human says they are conscious of something, you should have a higher credence than if they do not say they are. And in the scientific sense: it's commonly and appropriately taken as evidence in scientific practice; here is Chalmers's "How Can We Construct a Science of Consci...
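(To spell out the Bayesian sense a little, here is a minimal sketch; the notation is my gloss, not anything from Chalmers. Write E for the self-report and H for the hypothesis that the subject is conscious. Saying E is evidence for H just means that conditioning on E raises the probability of H, which, assuming non-degenerate probabilities, is equivalent to the report being more likely if the subject is conscious than if they are not:)

```latex
% E = the subject self-reports being conscious of something
% H = the subject is conscious of that thing
% "E is evidence for H" in the Bayesian sense:
\[
  P(H \mid E) > P(H)
  \quad\Longleftrightarrow\quad
  P(E \mid H) > P(E \mid \lnot H)
\]
% (The equivalence holds whenever 0 < P(H) < 1 and P(E) > 0.)
```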
Agree, that's a great pointer! For those interested, here is the paper and here is the podcast episode.
[Edited to add a nit-pick: the term 'meta-consciousness' is not used, it's the 'meta-problem of consciousness', which is the problem of explaining why people think and talk the way they do about consciousness]
I enjoyed this excerpt and the pointer to the interview, thanks. It might be helpful to say in the post who Jim Davies is.
That may be right - an alternative would be to taboo the word in the post, and just explain that they are going to use people with an independent, objective track record of being good at reasoning under uncertainty.
Of course, some people might be (wrongly, imo) skeptical of even that notion, but I suppose there's only so much one can do to get everyone on board. It's a tricky balance of making it accessible to outsiders while still just saying what you believe about how the contest should work.
I think that the post should explain briefly, or even just link to, what a “superforecaster” is. And if possible explain how and why this serves an independent check.
The superforecaster panel is imo a credible signal of good faith, but people outside of the community may think “superforecasters” just means something arbitrary and/or weird and/or made up by FTX.
(The post links to Tetlock’s book, but not in the context of explaining the panel)
You write,
Those who do see philosophical zombies as possible don’t have a clear idea of how consciousness relates to the brain, but they do think...that consciousness is something more than just the functions of the brain. In their view, a digital person (an uploaded human mind which runs on software) may act like a conscious human, and even tell you all about its ‘conscious experience’, but it is possible that it is in fact empty of experience.
It's consistent to think that p-zombies are possible but to think that, given the laws of nature, digital peo...
You might be interested in this LessWrong shortform post by Harri Besceli, "The best and worst experiences you had last week probably happened when you were dreaming." Including a comment from gwern.
Thanks for the post! Wanted to flag a typo: “ To easily adapt to performing complex and difficult math problems, Minerva has That's not to say that Minerva is an AGI - it clearly isn't.”
Feedback: I find the logo mildly unsettling. I think it triggers my face detector, and I see sharp teeth. A bit like the Radiohead logo.
On the other hand, maybe this is just a sign of some deep unwellness in my brain. Still, if even a small percentage of people get this feeling from the logo, it could be worth reconsidering.
Since the article is paywalled, it may be helpful to excerpt the key parts or say what you think Searle's argument is. I imagine the trivial inconvenience of having to register will prevent a lot of people from checking it out.
I read that article a while ago, but can't remember exactly what it says. To the extent that it is rehashing Searle's arguments that AIs, no matter how sophisticated their behavior, necessarily lack understanding / intentionality/ something like that, then I think that Searle's arguments are just not that relevant to work on AI align...
Well, I looked it up and found a free pdf, and it turns out that Searle does consider this counterargument.
...Why is it so important that the system be capable of consciousness? Why isn’t appropriate behavior enough? Of course for many purposes it is enough. If the computer can fly airplanes, drive cars, and win at chess, who cares if it is totally nonconscious? But if we are worried about a maliciously motivated superintelligence destroying us, then it is important that the malicious motivation should be real. Without consciousness, there is no possibility
Just wanted to say that I really appreciated this post. As someone who followed the campaign with interest, but not super closely, I found it very informative about the campaign. And it covered all of the key questions I have been vaguely wondering about re: EAs running for office.
opinionated (per its title) and non-comprehensive, but "Key questions about artificial sentience: an opinionated introduction" by me:
I work at Trajan House and I wanted to comment on this:
But a great office gives people the freedom to not worry about what they need for work, a warm environment in which they feel welcome and more productive, and supports them in ways they did not think were necessary.
By these metrics, Trajan House is a really great office! I'm so grateful for the work that Jonathan and the other operations staff do. It definitely makes me happier and more productive.
Trajan House in 2022 is a thriving hub of work, conversation, and fun.
Leverage just released a working paper, "On Intention Research". From the post:
...Starting in 2017, some of Leverage’s psychology researchers stumbled across unusual effects relating to the importance and power of subtle nonverbal communication. Initially, researchers began by attempting to understand and replicate some surprising effects caused by practitioners in traditions like bodywork and energy healing. Over time researchers investigated a wide range of phenomena in subtle nonverbal communication and developed an explanation for these phenomena accord
Thanks for the comment! I agree with the thrust of this comment.
Learning more and thinking more clearly about the implementation of computation in general, and neural computation in particular, is perennially on my intellectual to-do list.
We don't want to allow just any arbitrary gerrymandered states to count as an adequate implementation of consciousness's functional roles
maybe the neurons printed on each page aren't doing enough causal work in generating the next edition
I agree with the way you've formulated the problem, and the possible solution ...
Some past examples that come to mind. Kudos to all of the people mentioned for trying ambitious things, and writing up the retrospectives:
Zvi Mowshowitz's post-mortem: https://thezvi.wordpress.com/2015/06/30/the-thing-and-the-symbolic-representation-of-the-thing/
Sarah Constantin's post-mortem: https://docs.google.com/document/d/1HzZd3jsG9YMU4DqHc62mMqKWtRer_KqFpiaeN-Q1rlI/edit
Michael Plant has a post-mo
Thanks for writing this! Your work sounds super interesting. You write, “ But you could be rewarded by the euphoric sense of revelation. Some of that sense may even be authentic; most of it will be fool’s gold.” What are some times you got that euphoric sense in your research for HLI?
[Replying separately with comments on progress on the pretty hard problem; the hard problem; and the meta-problem of consciousness]
The meta-problem of consciousness is distinct from both a) the hard problem: roughly, the fundamental relationship between the physical and the phenomenal, and b) the pretty hard problem: roughly, knowing which systems are phenomenally conscious.
The meta-problem is c) explaining "why we think consciousness poses a hard problem, or in other terms, the problem of explaining why we think consciousness is hard to explain" (6)
The me...
[Replying separately with comments on progress on the pretty hard problem; the hard problem; and the meta-problem of consciousness]
Progress on the hard problem
I am much less sure of how to think about this than about the pretty hard problem. This is in part because in general, I'm pretty confused about how philosophical methodology works, what it can achieve, and the extent to which there is progress in philosophy. This uncertainty is not in spite of, but probably because of doing a PhD in philosophy! I have considerable uncertainty about these background ...
That's a great question. I'll reply separately with my takes on progress on a) the pretty hard problem, b) the hard problem, and c) something called the meta-problem of consciousness [1].
[1] With apologies for introducing yet another 'problem' to distinguish between, when I've already introduced two! (Perhaps you can put these three problems into Anki?)
Progress on the pretty hard problem
This is my attempt to explain Jonathan Birch's recent proposal for studying invertebrate consciousness. Let me know if it makes rough sense!
The problem with studying anima...
Great question, I'm happy to share.
One thing that makes the reaching out easier in my case is that I do have one specific ask: whether they would be interested in (digitally) visiting the reading group. But I also ask if they'd like to talk with me one-on-one about their work. For this ask, I'll mention a paper of theirs that we have read in the reading group, and how I see it as related to what we are working on. And indicate what broad questions I'm trying to understand better, related to their work.
On the call itself, I am a) trying to get a better unde...
That's a great point. A related point that I hadn't really clocked until someone pointed it out to me recently, though it's obvious in retrospect, is that (EA aside) in an academic department it is structurally unlikely that you will have a colleague who shares your research interests to a large extent. It's rare that a department is big enough to have two people doing the same thing, since departments need coverage of their whole field for teaching and supervision.
"I've learned to motivate myself, create mini-deadlines, etc. This is a constant work in progress - I still have entire days where I don't focus on what I should be doing - but I've gotten way better."
What do you think has led to this improvement, aside from just time and practice? Favorite tips / tricks / resources?
Thanks for this. I was curious about "Pick a niche or undervalued area and become the most knowledgeable person in it." Do you feel comfortable saying what the niche was? Or even if not, can you say a bit more about how you went about doing this?
This is very interesting! I'm excited to see connections drawn between AI safety and the law / philosophy of law. It seems there are a lot of fruitful insights to be had.
You write,
The rules of Evidence have evolved over long experience with high-stakes debates, so their substantive findings on the types of arguments that prove problematic for truth-seeking are relevant to Debate.
Can you elaborate a bit on this?
I don't know anything about the history of these rules about evidence. But why think that over this history, these rules have trended to...
Thanks for the great summary! A few questions about it:
1. You call mesa-optimization "the best current case for AI risk". As Ben noted at the time of the interview, this argument hasn't yet really been fleshed out in detail. And as Rohin subsequently wrote in his opinion of the mesa-optimization paper, "it is not yet clear whether mesa optimizers will actually arise in practice". Do you have thoughts on what exactly the "Argument for AI Risk from Mesa-Optimization" is, and/or a pointer to the places where, in your opinion,...
Small correction: Jonathan Birch is at LSE, not QMUL. Lars Chittka, the co-lead of the project, is at QMUL.