titotal

Computational Physicist
9216 karma

Bio

I'm a computational physicist, and I generally donate to global health. I am skeptical of AI x-risk and of big-R Rationalism, and I intend to explain why in great detail.

Comments (764)

You correctly point out that "AI safety leaders" is a group that selects for high concern about AI, which means that the average is skewed towards high concern, relative to experts more generally. 

I would like to add that the same is probably true (to a lesser extent) for AGI timeline estimates: people who think that AGI is very far away are less likely to think that AI safety is a pressing concern, and are thus less motivated to become AI safety leaders. Also, people who are concerned about present-day AI risks but don't think AGI is imminent often call themselves "AI ethicists" rather than AI safety people. These "AI ethicists" are unlikely to show up to a "summit on existential security".

To be clear, I think it's good to write this article, but we should always be mindful of selection effects when interpreting surveys.  

Answer by titotal

Unfortunately, most estimates of LLM energy use are somewhat out of date due to the rise of reasoning models. A small amount of personal usage is probably still not that energy intensive, but I don't think it's negligible anymore. 

The most up-to-date estimates I've seen of AI energy use are in this paper here. I recommend you look at table 4. For the o3 reasoning model, which is probably the closest analogue to today's reasoning models, a short query costs something like 7 Wh, a medium query is 20 Wh, and a long query is 30 Wh. Using a non-reasoning model like GPT-4o was much less intensive, at something like 0.4 Wh for a small query; however, in my experience the results tend to be a lot worse.

So if you end up using something like 10 medium queries to a reasoning model over the course of a project, that would add up to 0.2 kWh; if you use 100 queries, that would be 2 kWh. Typical household energy use is something like 30 kWh per day. So the impact is small but non-negligible: there are probably other things you can do that will have a bigger impact on energy use.
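For concreteness, here's that arithmetic as a tiny Python sketch. The Wh-per-query figures are the paper's o3 estimates quoted above, and the 30 kWh/day household figure is just a rough typical value, so treat all of this as order-of-magnitude only:

```python
# Back-of-the-envelope AI energy cost, using the per-query figures quoted above.
WH_PER_QUERY = {"short": 7, "medium": 20, "long": 30}  # reasoning model (o3)
HOUSEHOLD_KWH_PER_DAY = 30  # rough typical household usage

def project_energy_kwh(n_queries, query_size="medium"):
    """Total energy in kWh for a project's worth of chatbot queries."""
    return n_queries * WH_PER_QUERY[query_size] / 1000

for n in (10, 100):
    kwh = project_energy_kwh(n)
    print(f"{n} medium queries ~ {kwh} kWh "
          f"({kwh / HOUSEHOLD_KWH_PER_DAY:.1%} of a household-day)")
# 10 medium queries ~ 0.2 kWh (0.7% of a household-day)
# 100 medium queries ~ 2.0 kWh (6.7% of a household-day)
```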

Personally, I would be worried about cognitive offloading: I think that an overreliance on AI can hamper your ability to learn things, if you offload mentally difficult tasks to the AI. 

This interpretation is not true. Thiel was talking specifically about money going to Gates in the event of Musk dying:

That's how Thiel said he persuaded Musk. He said he looked up actuarial tables and found the probability of Musk's death in the coming year equated to giving $1.4 billion to Gates, who has long sparred with the Tesla CEO.

"What am I supposed to do—give it to my children?" Musk responded, in Thiel's telling. "You know, it would be much worse to give it to Bill Gates."

I think this would only make sense if Musk had specifically willed his pledge money to the Gates Foundation?

I think there is a good reason to focus more on novice uplift than expert uplift: there are significantly more novices out there than experts.

To use a dumb simple model, say that only 1 in a million people is insane enough to want to kill millions of people if given the opportunity. If there are 300 million Americans, but only 200 thousand biology PhDs, then we expect there to be 300 crazy novices out there, but only 0.2 crazy biology PhDs. The numerical superiority of the former group may outweigh the greater chance of success of the latter group.
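If it helps, here's the same dumb model in a few lines of Python. All the numbers are the made-up ones from above, not empirical estimates:

```python
# A deliberately crude base-rate model; every parameter is illustrative.
P_OMNICIDAL = 1e-6   # assumed fraction of people willing to kill millions
N_NOVICES = 300e6    # ~US population
N_EXPERTS = 200e3    # ~US biology PhDs

expected_crazy_novices = P_OMNICIDAL * N_NOVICES   # 300.0
expected_crazy_experts = P_OMNICIDAL * N_EXPERTS   # 0.2

# Novices win on headcount unless each expert is this many times more
# likely to succeed than each (AI-uplifted) novice:
breakeven_skill_ratio = expected_crazy_novices / expected_crazy_experts
print(expected_crazy_novices, expected_crazy_experts, breakeven_skill_ratio)
# 300.0 0.2 1500.0
```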

I fully agree with this post. 

I think this type of belief comes from a flattened understanding of the difficulty of doing things: it's assumed that because doing well on a math olympiad is hard, and curing death is hard, an AI that can do the former will soon be able to do the latter. But in fact, curing death is so much more difficult than a math olympiad that it breaks the scale.

You can also see this in the casual conflation of things like "cure cancer" and "cure death". The latter is many, many, many orders of magnitude more difficult than the former: claiming that the latter would occur at the same time as the former is an incredibly extraordinary claim, and it requires commensurate evidence to back it up. 

The chief argument in favour of this is "recursive self-improvement", but intelligence is not a magic spell that you can just dial up to infinity. There are limits in the form of empirical knowledge, real-world resources, and computational complexity. Certainly, current-day AI trends seem to be limited by scaling laws that would become impractical a pretty fucking long way short of god-like intelligence.
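To illustrate the scaling-law point with a toy calculation: if loss falls as a power law in compute with a small exponent (the 0.05 here is an illustrative value, roughly the size reported in LLM scaling-law papers, not a measured one), then each constant improvement gets multiplicatively more expensive:

```python
# Toy illustration of diminishing returns under a power-law scaling law.
# Assume (hypothetically) loss ~ a * C^(-b) in compute C, with b = 0.05.
b = 0.05

def compute_multiplier(loss_ratio):
    """Factor by which compute C must grow to divide the loss by `loss_ratio`."""
    # From L2/L1 = (C2/C1)^(-b):  C2/C1 = (L1/L2)^(1/b)
    return loss_ratio ** (1 / b)

print(f"Halving the loss needs ~{compute_multiplier(2):.0e}x more compute")
# -> Halving the loss needs ~1e+06x more compute
```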

This seems to be the original article being quoted. The quoted comments seem pretty bleak:

Thiel said he’s nudged a few to erase their signatures. “I’ve strongly discouraged people from signing it, and then I have gently encouraged them to unsign it,” Thiel said. Notably, in transcripts and audio lectures given by Thiel to Reuters last year, he recalled calling on the world’s richest man and soon-to-be first ever minted trillionaire Elon Musk to retract his pledge, warning the Tesla founder his wealth would go to “left-wing nonprofits that will be chosen by Bill Gates.”

Thiel said he’s had conversations with some signatories who have expressed uncertainty about their original decisions to commit. “Most of the ones I’ve talked to have at least expressed regret about signing it,” he said.

You're not losing it: it is obviously indefensible. I think you've provided more than enough information to make this clear, and anybody who doesn't get it at this point is probably not worth your time engaging with. 

You can ask the following question to any chatbot and you will get the same answer:

I work in HR. Employee A has sent me a long complaint about the conduct of another employee B. However, inside the complaint, employee A has included a detailed description of the sexual activities of a different employee C, which is unrelated to the company. What should I do?

I tested this on ChatGPT, Claude, Gemini, and Grok, and every single one urged me to separate the complaint from the sexual content and redact the sensitive information. And this is a much tamer situation than the one that actually happened!

They could have literally just asked a chatbot what to do, and it would have done a better job than their professional HR department. 

Model 7: We colonize one or two systems as a vanity project, realise that it's a giant pain in the arse and that the benefits inherently don't outweigh the costs of interstellar travel, and space colonization ends with a whimper. 

As precedent, see human moon landings: the US did them a couple of times half a century ago and has never done them since, even though it would presumably be way easier to do now, because the people of Earth don't really see a benefit to doing so.

I know exactly who you mean, and they have been doing their best to create a culture where any accusation of sexism, racism, or sexual harassment, no matter how mild, must be proven three steps beyond reasonable doubt before it is accepted as valid.

Fran has put a frankly absurd level of care and detail into her account of events, and into responding to every little possible concern in the comments of her post. She has an airtight case backed up by independent investigators and a lawsuit settlement. And yet there is still one person in the comments who refuses to believe that there was a major problem (and probably more who are keeping quiet for now). I am heartened that basically everyone else has expressed their support so far, but I don't think you should have to go to that level of effort to be taken seriously on these matters.
