I have children, and I would precommit to enduring the pain without hesitation, but I don’t know what I would do in the middle of experiencing the pain. If pain is sufficiently intense, “I” am not in charge any more, and I don’t know very well how whatever part of me is in charge would act.
I have the complete opposite intuition: equal levels of pain are harder to endure for equal time if you have the option to make them stop. Obviously I don’t disagree that pain for a long time is worse than pain for a short time.
This intuition is driven by experiences like: the same level of exercise fatigue is a lot easier to endure if giving up would cause me to lose face. In general, exercise fatigue is more distracting than pain from injuries (my reference points being a broken finger and a cup of boiling water in my crotch - the latter being about as distractingly painful as a whole bunch of not especially notable bike races etc).
Thinking a bit more: the boiling water actually was more intense for a few seconds, but after that it was comparable to bike racing. But also, all I wanted to do was run around shouting obscenities, and given that I was doing exactly that, I don’t recall the sense of being in conflict with myself, which is one of the things I find hard to deal with about pain.
I don’t know that this scales to very intense pain. The only pain experience I’ve had notable enough to recall years later was when I ran 70km without having done very much running to train for it - it hurt a lot, and I don’t have any involuntary pain experiences that compare to it (running + lack of preparation was important here - I’ve done 400km bike rides with no especially notable pain). This was voluntary in the sense that I could have stopped and called someone to pick me up, but that would have disqualified my team.
One prediction I’d make is that holding my hand in an ice bucket with only myself for company would be much harder than doing it with other people where I’d be ashamed to be the first to pull it out. I don’t just mean I’d act differently - I mean I think I would actually experience substantially less psychological tension with other people around.
Conditional on AGI being developed by 2070, what is the probability that humanity will suffer an existential catastrophe due to loss of control over an AGI system?
Requesting a few clarifications:
I think journalists are often imprecise and I wouldn't read too much into the particular synonym of "said" that was chosen.
Does it make more sense to think about the set of all probability distributions that offer a probability of 50% for rain tomorrow? If we say this set represents our epistemic state, then we're saying something like "the probability of rain tomorrow is 50%, and we withhold judgement about rain on any other day".
I think this question - whether it's better to take 1/n probabilities (or maximum entropy distributions or whatever) or to adopt some "deep uncertainty" strategy - does not have an obvious answer.
Perhaps I’m just unclear on what it would even mean to be in a situation where you “can’t” put a probability estimate on things that does as well as or better than pure 1/n ignorance.
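To make the “1/n or maximum entropy” option concrete (this is my own toy sketch in Python, assuming a finite hypothesis space purely for illustration): over n mutually exclusive hypotheses, the uniform 1/n assignment is exactly the maximum-entropy distribution, i.e. the one that commits to the least beyond the partition itself.

```python
# Toy illustration: over a finite set of hypotheses, the uniform (1/n) prior
# has the highest Shannon entropy of any distribution, which is why
# "1/n ignorance" and "maximum entropy" coincide here.
import math

def entropy_bits(p):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(x * math.log2(x) for x in p if x > 0)

n = 4
uniform = [1 / n] * n
skewed = [0.7, 0.1, 0.1, 0.1]

print(entropy_bits(uniform))  # 2.0 bits -- the maximum possible for 4 outcomes
print(entropy_bits(skewed))   # ~1.36 bits
```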
Suppose you think you might come up with new hypotheses in the future which will cause you to reevaluate how the existing evidence supports your current hypotheses. In this case, probabilistically modelling the phenomenon doesn’t necessarily get you the right “value of further investigation” (because you’re not modelling the hypothesis you haven’t yet thought of), but you might still be well advised to hold off acting and investigate further - collecting more data might even be what leads to you thinking of the new hypothesis, leading to a “non-Bayesian update”. That said, I think you could separately estimate the probability of a revision of this type.
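A toy numerical version of that point (again my own sketch, with made-up likelihoods): once a new hypothesis enters the space, the posteriors over the old hypotheses shift even though no new data arrived, which is the sense in which the revision isn’t an ordinary conditionalisation.

```python
# Toy illustration (made-up numbers): adding a previously unconsidered
# hypothesis shifts the posteriors over the old ones without any new data.

def posterior(priors, likelihoods):
    """Bayes' rule over a discrete hypothesis space."""
    joint = {h: priors[h] * likelihoods[h] for h in priors}
    total = sum(joint.values())
    return {h: p / total for h, p in joint.items()}

likelihood = {"H1": 0.8, "H2": 0.2}  # P(existing data | hypothesis)
print(posterior({"H1": 0.5, "H2": 0.5}, likelihood))
# {'H1': 0.8, 'H2': 0.2}

# Later we think of H3, which also explains the existing data well.
likelihood["H3"] = 0.9
print(posterior({"H1": 1/3, "H2": 1/3, "H3": 1/3}, likelihood))
# {'H1': ~0.42, 'H2': ~0.11, 'H3': ~0.47} -- the old hypotheses lose mass
```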
Similarly, you might discover an important outcome that you’d previously neglected to include in your models.
One more thing: because probability is difficult to work with, even if it is in principle compatible with adaptive plans, in practice it might tend to steer you away from them.
Fair enough - she mentioned Yudkowsky before making this claim and I had him in mind when evaluating it. (Incidentally, I wouldn't mind picking a better name for the group of people who do a lot of advocacy about AI X-risk, if you have any suggestions.)
I skimmed from 37:00 to the end. It wasn't anything groundbreaking. There was one incorrect claim ("AI safetyists encourage work at AGI companies"), I think her apparent moral framework, which puts disproportionate weight on negative impacts on marginalised groups, is not good, and overall she comes across as someone who has only just begun thinking about AGI x-risk and so seems a bit naive on some issues. However, "bad on purpose to make you click" is very unfair.
But also: she says that hyping AGI encourages races to build AGI. I think this is true! Large language models at today's level of capability - or even somewhat higher than this - are clearly not a "winner takes all" game; it's easy to switch to a different model that suits your needs better, and I expect the most widely used systems to be the ones that work best for what people want them to do. While it makes sense that companies will compete to bring better products to market faster, it would be unusual to call this activity an "arms race". Talking about arms races makes more sense if you expect that AI systems of the future will offer advantages much more decisive than typical "first mover" advantages, and this expectation is driven by somewhat speculative AGI discourse.
She also questions whether AI safetyists should be trusted to improve the circumstances of everyone vs. their own (perhaps idiosyncratic) priorities. I think this is also a legitimate concern! MIRI were at some point apparently aiming to 1) build an AGI and 2) use this AGI to stop anyone else building an AGI (Section A, point 6). If they were successful, that would put them in a position of extraordinary power. Are they well qualified to do that? I'm doubtful (though I don't worry about it too much because I don't think they'll succeed).
Now I want to see how much I like honey-drenched fat