titotal

Computational Physicist
9237 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments (769)

To be clear, I wasn't saying that complexity itself was the cause of consciousness, just that some level of algorithmic complexity may be a requirement for consciousness. This seems like a common position: the prospect of present or future LLM sentience is a subject of debate, but it's rare to see a similar debate about the sentience of a pocket calculator. 

A brain and a digital simulation have some similarities, but they also have a lot of differences. One of those differences is that brains are running on "laws of physics" algorithms that are overwhelmingly faster and more complex than those of digital simulations. They didn't need to evolve these "algorithms": they're inherent to any biological process. Seth identifies several other differences as well: continuous operation, embodiment, etc. His position seems to be that at least one of these differences may result in a lack of consciousness.

Disclaimer: I am not too well-versed in the philosophy here, so I could be saying dumb things; feel free to correct me:

From my computational physics experience I know that it is physically impossible to simulate the exact electrical properties of a system of a couple hundred atoms on a classical digital computer, due to a blowup in computational complexity. 

The laws of physics could be described as an algorithm, but the algorithm in question is on a level of complexity that is impossible for digital simulations to match. I think it's generally agreed that some degree of complexity is required for consciousness: it doesn't seem insane to say that that complexity might lie past what is digitally simulatable in practice. 

The question of digital consciousness seems to depend on whether simulated abstracted approximations to the physical process of thinking are close enough to produce the same effect. 

I am somewhat concerned about data contamination here: are you sure that the original GiveWell write-up has at no point leaked into your model's analysis? I.e., was any of GiveWell's analysis online before the August 2025 knowledge cutoff for GPT, or did your agents look at the GiveWell report as part of their research?

Yeah, the future described in this post isn't particularly "weird" per se; it just assumes that every technology that has been hypothetically proposed for the future will be created by ASI soon after AGI arrives.

I think the future will be a lot more unpredictable than this. Analogously, I can imagine someone from 1965 being very confused by a future where immensely powerful computers fit in your pocket, but human spaceflight has gone no further than the moon. It's very hard to predict in advance the constraints and shortcomings of future technology, or the practical and logistical factors that affect what is achieved.

Have you considered that the reason these policies are not increasing AI usage is that AI usage is not particularly useful for many applications? Particularly when it comes to something like animal advocacy, I'm struggling to think of many things you'd actually need a full model subscription for (rather than just asking the occasional question to a free model). 

I think the original policies are fine: they let people evaluate and decide for themselves how useful AI models are, and adjust strategies accordingly. Trying to pressure people to use AI beyond this level is going to make your team less effective.

You correctly point out that "AI safety leaders" is a group that selects for high concern about AI, which means that the average is skewed towards high concern, relative to experts more generally. 

I would like to add that the same is probably true (to a lesser extent) for AGI timeline estimates: people who think that AGI is very far away are less likely to think that AI safety is a pressing concern, and are thus less motivated to become AI safety leaders. Also, people who are concerned about present-day AI risks but don't think AGI is imminent often call themselves "AI ethicists" rather than AI safety people. These "AI ethicists" are unlikely to show up to a "summit on existential security".

To be clear, I think it's good to write this article, but we should always be mindful of selection effects when interpreting surveys.  

Answer by titotal

Unfortunately, most estimates of LLM energy use are somewhat out of date due to the rise of reasoning models. A small amount of personal usage is probably still not that energy intensive, but I don't think it's negligible anymore. 

The most up-to-date estimates I've seen of AI energy use are in this paper here. I recommend you look at Table 4. For the o3 reasoning model, which is probably the closest analogue to today's reasoning models, a short query costs something like 7 Wh, a medium query 20 Wh, and a long query 30 Wh. Using a non-reasoning model like GPT-4o was much less intensive, at around 0.4 Wh for a small query; however, in my experience the results tend to be a lot worse.

So if you end up using something like 10 medium queries to a reasoning model over the course of a project, that would add up to 0.2 kWh; if you use 100 queries, that would be 2 kWh. Typical household energy use is something like 30 kWh per day. So the impact is small but non-negligible: there are probably other things you can do that will have a bigger impact on energy use.
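To make the arithmetic explicit, here's a minimal back-of-the-envelope sketch in Python; the per-query figures are the rough values quoted above from the paper's Table 4, and the query counts are just illustrative:

```python
# Back-of-the-envelope estimate of AI query energy use.
# Per-query figures (Wh) are rough values from the paper's Table 4.
WH_PER_QUERY = {
    "reasoning_short": 7.0,      # e.g. o3, short query
    "reasoning_medium": 20.0,    # e.g. o3, medium query
    "reasoning_long": 30.0,      # e.g. o3, long query
    "non_reasoning_short": 0.4,  # e.g. GPT-4o, short query
}
HOUSEHOLD_KWH_PER_DAY = 30.0     # typical household electricity use

def project_kwh(n_queries: int, query_type: str) -> float:
    """Total energy in kWh for n queries of the given type."""
    return n_queries * WH_PER_QUERY[query_type] / 1000.0

for n in (10, 100):
    kwh = project_kwh(n, "reasoning_medium")
    print(f"{n} medium queries ≈ {kwh:.1f} kWh "
          f"(~{kwh / HOUSEHOLD_KWH_PER_DAY:.1%} of a day's household use)")
```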

Personally, I would be worried about cognitive offloading: I think that an overreliance on AI can hamper your ability to learn things, if you offload mentally difficult tasks to the AI. 

This interpretation is not true. Thiel was talking specifically about money going to Gates in the event of Musk dying:

That's how Thiel said he persuaded Musk. He said he looked up actuarial tables and found the probability of Musk's death in the coming year equated to giving $1.4 billion to Gates, who has long sparred with the Tesla CEO.

"What am I supposed to do—give it to my children?" Musk responded, in Thiel's telling. "You know, it would be much worse to give it to Bill Gates."

I think this would only make sense if Musk had specifically willed his pledge money to the Gates Foundation?

I think there is a good reason to focus more on novice uplift than expert uplift: there are significantly more novices out there than experts.

To use a dumb simple model, say that only 1 in a million people is insane enough to want to kill millions of people if given the opportunity. If there are 300 million Americans but only 200,000 biology PhDs, that means we expect there to be 300 crazy novices out there, but only 0.2 crazy biology PhDs. The numerical superiority of the former group may outweigh the greater chance of success of the latter group.
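Spelled out as a toy calculation (the 1-in-a-million base rate and the relative-success factor in the comment below are illustrative assumptions, not estimates):

```python
# Toy model: expected number of would-be mass-casualty actors in each group.
BASE_RATE = 1e-6        # assumed fraction of people willing to kill millions
NOVICES = 300_000_000   # rough US population
EXPERTS = 200_000       # rough number of biology PhDs

expected_crazy_novices = NOVICES * BASE_RATE   # ~300
expected_crazy_experts = EXPERTS * BASE_RATE   # ~0.2

# Even if an expert is far likelier to succeed (say 100x, purely illustrative),
# the novice pool can still dominate: 300 novices vs ~20 "expert-equivalents".
print(expected_crazy_novices, expected_crazy_experts)
```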

I fully agree with this post. 

I think this type of belief comes from a flattened understanding of the difficulty of doing things: it's assumed that because doing well on a math olympiad is hard and curing death is hard, an AI that can do the former will soon be able to do the latter. But in fact, curing death is so much more difficult than a math olympiad that it breaks the scale.

You can also see this in the casual conflation of things like "cure cancer" and "cure death". The latter is many, many, many orders of magnitude more difficult than the former: claiming that the latter would occur at the same time as the former is an incredibly extraordinary claim, and it requires commensurate evidence to back it up. 

The chief argument in favour of this is "recursive self-improvement", but intelligence is not a magic spell that you can just dial up to infinity. There are limits in the form of empirical knowledge, real-world resources, and computational complexity. Certainly, current-day AI trends seem to be limited by scaling laws that would become impractical a pretty fucking long way from god-like intelligence.
