Prometheus

28 · Joined May 2022

Posts: 2

Comments: 9

Are there plans from any organizations to support former FTX grantees? I'm not one of them, but I know of many people who received funding through the FTX regranting program and are now suddenly without funding and might face clawbacks.

Why is forecasting TAI with bio anchors useful? Many argue that the compute power of the human brain won't be needed for TAI, since AI researchers are not designing systems that mimic it.

Thanks! I'll update to correct this.

Is it actually more cost-effective, though? Someone in suspended animation does not eat or consume resources. Unless you mean sometime in the future; but in that future we don't know what resource constraints will actually be, or what we will value most. Preventing the irrecoverable loss of sentient minds still seems like the wiser thing to do, given this uncertainty. As for AI Safety, I think we're facing a talent deficit much more than a financial deficit right now. I'm not sure how much adding, say, $5 million more to the cause would really change at this time.

My guess is that a big reason is there doesn't really seem to be any framework for working on it, except perhaps on the policy side. Testing out various forms of nanotechnology to see if they're dangerous might be very bad, and even hypothetically doing that might create information hazards. I imagine we would have to see a few daring EAs blaze the trail for others to follow. There's also the obvious skill and knowledge gap: you can't easily jump into something like nanotech the way you could for something like animal welfare.

"Also, the limiting factor for cryonics seems to be more it's weirdness rather than research?"


Not really. The perfusion techniques haven't really been updated in decades, and the standby teams that actually perform preservation in the event of an accident are extremely spread out and limited. I think some new organizations need to breathe life back into cryonics, with clear benchmarks for the standards they hope to achieve over a certain timeline. I think Tomorrow Biostasis is doing the kind of thing I'm speaking of, but I would love to see more organizations like them.

I think you make some good points about the assumption that an AGI will be a goal-directed agent, but I wouldn't be so certain that this makes doom scenarios less probable; it only opens new doors that currently aren't being researched enough.

In terms of AGI that is just beyond human level not being much of a threat, I think there are a lot of key assumptions that misunderstand the radical scope of change this would cause.

One is speed. Such an intelligence would probably be several orders of magnitude faster than any human intelligence.

A second is the ability to replicate. Such a breakthrough would spark a radical economic incentive to ramp up computational capacity. Even if the first AGI takes a huge amount of hardware, I think the radical amount of investment to scale it would quickly change this within a few years, enabling a vast number of copies of the AGI to be created.

The third is coherence. These AGI copies could all work together in a far more coherent way than any corporation. Corporations are not unified entities: there is a huge amount of disorder within each, and the key decisions are still normally made by just a few individuals, radically slowing the progress they can make in company-wide direction and planning.

The fourth change that seems very likely is the one you credited for humanity's power: communication. These copies could share and communicate with each other with extremely high bandwidth. Humans have to talk, write, read, and listen to share information, which is very low-bandwidth. AGIs could just share their weights with each other. Imagine if every person working on the Manhattan Project had access to all of von Neumann's insights, skills, and knowledge. And Einstein's. And that of the most experienced mechanical engineers, chemists, etc. How long do you think it would have taken them to develop the atom bomb?

And given this large new scale of mental power, I don't see why no one would try to tweak it so that the AGIs start working on self-optimization. The incentive to outcompete other AGIs and mere humans seems far, far too strong for this not to be attempted, and I don't see any reason why it would be impossible, or even extremely difficult, once you have already created AGIs. Most of the progress in current AI capabilities has come from a few basic insights from a small number of individuals; in the scope of all of humanity's available mental power, this was unbelievably low-hanging fruit. If anything, creating more efficient and effective copies seems too easy for an AGI to do. I suspect this will be achievable before we create AGIs that can even do everything a human can do. In other words, I expect we'll cross into the danger/weird zone of AI before we even realize it.

But this wouldn't be global domination in any conventional sense. When humans implement such things, their methods are extremely harsh and inhibit freedoms at all levels of society. A human-run domination would need to enforce its measures with harsh prison time, executions, fear and intimidation, etc. But this is mostly because humans are not very smart, so they don't know any other way to stop person Y from doing X. A powerful AGI wouldn't have this problem. I don't think it would even have to be as crude as "burn all GPUs". It could probably monitor and enforce things so efficiently that trying to create another AGI would be like trying to fight gravity: for a human, it would simply be unachievable, no matter how many times you try, almost like a new rule interwoven into the fabric of reality. This could probably be made less severe with an implementation such as "can't achieve AGI above intelligence threshold X" or "can't achieve AGI that poses more than X risk to the population". In this less severe form, humans would still be free to develop AIs that could solve aging, cancer, space travel, etc., but couldn't develop anything too powerful or dangerous.

If someone manages to create a powerful AGI, and the only cost for most humans is that it burns their GPUs, that seems like an easy tradeoff to me. It's not great, but it's mostly a negligible problem for our species. But I do agree that using governance and monitoring is a possible option. I'm normally a hardline libertarian/anarchist, but I'm fine going full Orwellian in this domain.