KYP

Kinoshita Yoshikazu (pseudonym)

31 karma · Joined Nov 2022 · Pursuing a doctoral degree (e.g. PhD)

Comments (15)

I think the second reason "against" is probably the only real argument that my hypothesis is wrong.

The first reason still doesn't prevent the disease burden after covid (or any new pathogen) spills over into humans from being appreciably larger than the disease burden before the event.

The third reason is of course about interventions, which can go...many ways.

I didn't raise it in my original question, but I wondered whether this hypothesis applies to non-human species, which would still be a pretty interesting problem since it might impose a limit on the propagation of any species (as it collects more different kinds of pathogens over time and needs to contend with them).

That is true! Maybe the disease burden increases when new pathogens are introduced but eventually reaches a balance.

But it doesn't seem to completely remove the effect of accumulating multiple pathogens in the community: a population with covid + flu (or any other combination of pathogens with no cross-immunity) in circulation, and adapted to living with them, will probably still have a higher disease burden than a population with just the flu in circulation and adapted to it.
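As a toy illustration of that additive effect, here is a minimal sketch using the textbook SIS endemic equilibrium (fraction infected = 1 − 1/R0), with made-up R0 values and the simplifying assumption that the two pathogens spread completely independently:

```python
# Toy sketch: endemic burden of independent pathogens with no cross-immunity.
# The R0 values below are made-up placeholders, not real estimates.

def endemic_prevalence(r0: float) -> float:
    """Equilibrium fraction of the population infected for a simple SIS pathogen."""
    return max(0.0, 1.0 - 1.0 / r0)

flu_only = endemic_prevalence(1.3)
flu_plus_new_virus = endemic_prevalence(1.3) + endemic_prevalence(2.0)

print(f"Burden with flu only:             {flu_only:.1%} infected at any time")
print(f"Burden with flu + a new pathogen: {flu_plus_new_virus:.1%} infected at any time")
```

Under these (very crude) assumptions, each extra endemic pathogen simply stacks its own equilibrium prevalence on top of the existing burden.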

It's quite likely that the 19th century was worse than earlier periods due to increased population density and global connectivity, and I shudder to imagine what would have happened if AIDS had become established in humans in the 19th century instead of the 20th.

I think it removes the "Hostile AGI is the Great Filter" scenario, which I recall seeing a few times, though it doesn't make much sense to begin with.

"Food supply collapse" isn't a simple binary switch, though. 

It's possible that whatever food is left over will be distributed in the most militarily efficient way possible, and a large number of civilians will be left to starve so that the remnants of conventional military forces can continue their fight to the death.

 

Of course, I think this scenario will not lead to outright human extinction. But it does make the post-nuclear war situation a lot more difficult, despite civilisation nominally surviving the ordeal.

Before going too deep into the "should we air strike data centres" issue, I wonder if anyone out there has good numbers on the current availability of hardware for LLM training.

Assuming that the US/NATO is committed to shutting down AI development, how much impact does a serious restriction on chip production/distribution have on the ability of a foreign actor to train advanced LLMs? 

I suspect there are enough old GPUs out there that could be repurposed into training centres, but how much more difficult would it be if little or no new hardware were coming in?
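For a rough sense of scale, a back-of-envelope sketch (every number below is a rough assumption for illustration, not a measured figure):

```python
# Back-of-envelope: how many repurposed older GPUs would a frontier-scale run need?
# All numbers are rough assumptions for illustration only.

TARGET_COMPUTE_FLOP = 2e25    # assumed total compute for a frontier-scale training run
OLD_GPU_FLOP_PER_S  = 70e12   # assumed FP16 tensor throughput of an older consumer card
UTILIZATION         = 0.3     # assumed fraction of peak achieved in a sprawling distributed setup
RUN_DAYS            = 90

flop_per_gpu = OLD_GPU_FLOP_PER_S * UTILIZATION * RUN_DAYS * 24 * 3600
gpus_needed = TARGET_COMPUTE_FLOP / flop_per_gpu

print(f"Older GPUs needed for a {RUN_DAYS}-day run: ~{gpus_needed:,.0f}")
```

On these shaky assumptions it comes out to something like a hundred thousand older cards kept busy for months, which at least bounds how "easy" repurposing old stock would be.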

And for those old GPUs inside consumer machines or crypto farms, is it possible to cripple their LLM training capability through software modifications?

Assuming that Microsoft and Nvidia/AMD are on board, I think it should be possible to push a modification to the firmware of almost every GPU installed in Windows machines that are connected to the internet (which...should be almost all of them). If software modification can prevent GPUs from being used effectively in LLM training runs, this would hopefully take most existing GPU stocks (and all newly manufactured GPUs) out of the equation, at least for some time.

I agree with your post in principle: we should take currently unknown, non-human moral agents into account when calculating X-risks.

On the other hand, I personally think leaving behind an AGI (which, after all, is still an "agent" influenced by our thoughts and values and carries them on in some manner) is a preferable endgame for human civilisation compared to a lot of other scenarios, even if the impact of an AGI catastrophe is probably going to span the entire galaxy and beyond.

Grey goo and other "AS-catastrophes" are definitely very bad.

From "worst" to "less bad", I think the scenarios would line up something like this:

1: False vacuum decay, obliterates our light-cone.

2: High-velocity (relativistic) grey goo with no AGI. Potentially obliterates our entire light-cone, although advanced alien civilisations might survive.

3: Low-velocity grey goo with no AGI. Sterilises the Earth with ease and potentially spreads to other solar systems or the entire galaxy, but probably not beyond (the intergalactic travel time would probably be too long for the goo to maintain its function). Technological alien civilisations might survive.

4: End of all life on Earth from other disasters.

5: AGI catastrophe with spillover into our light cone. I think an AGI's encounter with intelligent alien life is not guaranteed to follow the same calculus as its relationship with humans, so even if an AGI destroys humanity, it is not necessarily going to destroy (or even be hostile to) some alien civilisation it encounters.

 

For a world without humans, I am a bit uncertain whether the Earth has enough "time left" (about ~500 million years before the sun's increasing luminosity makes the Earth significantly less habitable) for another intelligent species to emerge after a major extinction event that included humanity (say, large mammals taking the same hit the dinosaurs did), and whether the Earth would have enough accessible fossil fuel for them to develop a technological civilisation.

This is true; I do wonder what could be done to get around the fact that we really can't handle remembering complex passwords (without using some memory aid that could be compromised).

Biometrics makes sense for worker/admin access, but I'm not sure about the merits of deploying it en masse to the users of a service. 

Despite all the controversies surrounding that (in?)famous XKCD comic, I would still agree with Randall that passphrases (I'm guilty of using them) are okay if we make them long enough. And the memory aids that one might need for passphrases are probably less easy to compromise (e.g.

I imagine it's not too hard for an average human to handle a few passphrases of 10 words each, so maybe bumping the allowed password length from 16-30 characters to 100 would solve some problems for security-minded users.

Another tool I imagine might be good is allowing Unicode characters in passwords; maybe mixing Chinese into passwords could let us have "memorable" high-entropy passwords.
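As a rough illustration of the entropy involved (assuming a Diceware-style list of 7,776 words and uniformly random choices, which real human-picked passwords won't match):

```python
import math

# Rough entropy estimates for the schemes discussed above.
# Assumes every symbol is chosen independently and uniformly at random.

def bits(pool_size: int, length: int) -> float:
    """Entropy in bits of `length` uniform, independent picks from `pool_size` symbols."""
    return length * math.log2(pool_size)

print(f"16 random printable ASCII characters:         {bits(95, 16):.0f} bits")
print(f"10-word passphrase (7776-word Diceware list):  {bits(7776, 10):.0f} bits")
print(f"8 random common Chinese characters (~3500):    {bits(3500, 8):.0f} bits")
```

On these assumptions a 10-word passphrase lands around 130 bits, comfortably above a 16-character random ASCII password, and a handful of random Chinese characters gets surprisingly far as well.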

I suspect that all three political groups you mentioned (well, maybe not the libertarians) could be convinced to turn collectively against AI research. After all, governmental capacity is probably the first thing that will benefit significantly from more powerful AIs, and that could be scary enough for ordinary people or even socialists.

Perhaps the only guaranteed opposition to pausing AI research would come from the relevant corporations themselves (they are, of course, immensely powerful, but maybe they'll accept an end to this arms race anyway), their dependents, and maybe some sections of libertarians and progressives (though I doubt there are that many of them committed to supporting AI research).

Public opinion is probably not very positive about AI research, but also perhaps a bit apathetic about what's happening. Maybe the information in this survey, properly presented in a news article or something, could rally some public support for AI restrictions.

Do you think it's a serious enough issue to warrant some...not very polite responses? 

Maybe it would be better if policy makers just went and shut AI research down immediately instead of trying to make reforms and regulations to soften its impact?

Maybe this information (that AI researchers themselves are increasingly pessimistic about the outcome) could sway public opinion to that point?

I do think what Plague Inc is doing... is far from a simulation of an infectious disease...

The pathogen in PI receives "updates" from a handler, and cannot be cleared from a host without intervention (nobody in PI recovers from the pathogen unless a cure is distributed). This reminds me more of computer malware than of any biological agent...
