kpurens

38 karma · Joined Apr 2023 · Working (15+ years)

Bio

My career has focused on deploying AI and data science in large corporations to solve difficult problems, with a focus on Earth-science-related fields including energy, agriculture, and mining.

 

How I can help others

My PhD developed new machine learning methods to measure organisms and their life histories, understand patterns of origination and extinction in the fossil record, and identify species. I am a breadth-focused scientist who finds the similarities between different fields and learns how to get them to work together.

Please reach out if you have questions on Earth systems, deployment of AI in natural science, or paleobiology. 

Comments (12)

Answer by kpurens · Apr 15, 2023

Great question! If AI kills us in the next few years, it seems likely it would be via a pathway to power that is currently accessible to humans, with the AI acting as a helper/accelerator for human actions.

The top two existential risks that meet that criterion are an engineered bioweapon and a nuclear exchange.

Currently, there is a great deal of research into how LLMs can assist work in a broad set of fields, with good results: performance similar to a human specialist, and the ability to identify new possibilities in creativity tasks. Nothing that human researchers can't do, but the speed and low cost of these models are already surprising and likely to accelerate many fields.

For bioweapon risk, the path I see would be direct development where the AI is an assistant to human-led efforts. The specific bioengineering skills needed to create an AI-designed pathogen are scarce, but the equipment isn't.

How could an AI accelerate nuclear risk? One path I could see is again AI-assisted and human-led, this time by shaping social media content and attitudes to increase global tensions. This seems less likely than the bioweapon option.

What others are there?

>Bostrom says that if everyone could make nuclear weapons in their own home, civilization would be destroyed by default because terrorists, malcontent people and "folk who just want to see what would happen" would blow up most cities.

Yes, and what would the world have to look like to change this?

It's terrible to think that the reason we are safe is that others are powerless. If EA seeks to maximize human potential, I think it's really telling that we are confident many people would destroy the world just because they can. And I think focusing on people's real well-being is a way we can confront this.

Let's do the thought experiment: what would the world look like where anyone had the power to destroy the world at any time--and chose not to? Where no one made that choice?

What kind of care and support systems would exist? How would we respect each other? How would society be organized?

I think this is a good line of thinking because it helps us understand how the world is vulnerable, and how we can make it less so.

-Kristopher

This is a really good piece of input for predictions of how the supply-demand curve for coding will change in the future. 

A 50% reduction in the time a task takes effectively cuts the cost of coding in half. Depending on the shape of the supply-demand curve for coding, this could lead to high unemployment, or to a boom for coders that generates even higher demand.

Note:  coding productivity tools developed over the past 40 years have led to ever-increasing demand since so much value is generated :) 
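To make the elasticity point concrete, here is a toy back-of-the-envelope model (my own sketch, not from the post above), assuming demand for code follows a constant-elasticity curve. Whether elasticity is below or above 1 determines whether cheaper coding shrinks or grows total coder-hours.

```python
# Toy model: how a coding speedup changes total coder-hours, under an
# assumed constant-elasticity demand curve for code (illustrative only).

def coder_hours_after_speedup(speedup: float, demand_elasticity: float) -> float:
    """Return the ratio of new coder-hours to old coder-hours.

    speedup: factor by which output per coder-hour increases (e.g. 1.5).
    demand_elasticity: price elasticity of demand for code (positive);
        quantity demanded scales as price ** (-elasticity).
    """
    price_ratio = 1.0 / speedup                            # cost per unit of code falls
    quantity_ratio = price_ratio ** (-demand_elasticity)   # demand responds to lower cost
    hours_ratio = quantity_ratio / speedup                 # hours needed per unit also fall
    return hours_ratio

for elasticity in (0.5, 1.0, 2.0):
    print(elasticity, round(coder_hours_after_speedup(1.5, elasticity), 2))
# elasticity < 1 -> fewer total coder-hours (pressure toward unemployment)
# elasticity > 1 -> more total coder-hours (a boom for coders)
```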
 

It seems to me that consciousness is a different concept from intelligence, and one that isn't well understood or communicated, because it's tough for us to differentiate the two from inside our little meat-boxes!

We need better definitions of intelligence and consciousness; I'm sure someone is working on it, and so perhaps just finding those people and communicating their findings is an easy way to help? 

I 100% agree that these things aren't obvious--which is a great indicator that we should talk about them more!

I'm referring to the 2014 event, which was a 'weak' version of the Turing test; since then, the people who were running the yearly events have lost interest, and there are claims that the Turing test is a 'poor test of intelligence'--highlighting the way the goalposts seem to have shifted.

https://gizmodo.com/why-the-turing-test-is-bullshit-1588051412

Is GPT-4 an AGI?

One thing I have noticed is goalpost shifting on what AGI is--it used to be the Turing test, until that was passed. Then a bunch of other criteria were developed and passed, and now the definition of 'AGI' seems to default to what previously would have been called 'strong AI'.

GPT-4 seems able to solve problems it wasn't trained on and to reason and argue as well as many professionals, and we are only just starting to learn its capabilities.

Of course, it also isn't a conscious entity--its style of intelligence is strange and foreign to us! Does this mean the goalposts will continue to shift as long as human intelligence differs in any way from the artificial version?

Wow, this is much higher support than I would have ever imagined for the topic. I guess Terminator is pretty convincing as a documentary!

Great post! It is so easy to get focused on the bad that we forget to look for the path toward the good, and I want to see more of this kind of thinking.

One little note about AGI:
"cars have not been able to drive autonomously in big cities "...
I think autonomous car driving is a very bad metric for AGI, because humans are hyper-specialized at the traits that allow it--and an organism's hyper-specialized traits shouldn't be expected to be easily matched by a 'general' intelligence without specialized training!

In order to drive a car, you need to:
1. Understand complex visual information as you are moving through a very complex environment, in wildly varying conditions, and respond almost instantly to changes to keep safe
2. Know the right path to move an object through a complex environment to avoid dangers, infer the intentions of other objects based on their movement, and calculate this incredibly fast
3. Coordinate with other actors on the road in a way that allows harmonious, low-risk movement to meet a common objective

It turns out these are all hard problems--and ones that Homo sapiens was shaped by evolution to solve in order to survive as persistence hunters, working in a group, following prey through forest and savannah, and sharing the proceeds when the gazelle collapsed from exhaustion! Our brain's circuits are built for these tasks and excel at them, so smoothly that we don't even realize how hard driving is. (You know how you are completely exhausted after a long drive? It's hard!)

It's easy to not notice how hard something is when your unconscious is designed to do the hard work effortlessly :) 

Best,
Kristopher

Really great point about a curious trend!

Human cultural evolution has replaced genetic evolution as the main way humans are advancing themselves, and you certainly point at the trend that ties them together.

One reason I didn't dig into the anthropological record is that it is so fragmented, and I am not an expert in it--there is very little cross-communication between the fields, except in a few sub-disciplines such as taphonomy.

This is a good proposal to have out there, but it needs more discussion of its weaknesses. A couple of examples:


How would this be enforced? Global carbon taxes are a good analogue, and they have never gotten global traction. This is tied to the cooperation problem between countries: the hardware can simply move to an AWS server in a permissive country.

From a technical side, I can break a large model down into sub-components and then ensemble them. It will be tough to write definitions that block this kind of work-around while not affecting legitimate use cases.
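As a toy illustration of that work-around (my own sketch, not part of the proposal; the model sizes and any regulatory threshold here are hypothetical), several small sub-threshold models can be trained separately and combined so the ensemble behaves like a single larger model:

```python
# Toy example of the ensembling work-around: replace one "large" model with
# several small models whose combined (soft-voted) predictions act like it.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)

# Stand-in for a single over-threshold model: one wide network.
large = MLPClassifier(hidden_layer_sizes=(256,), max_iter=500, random_state=0)

# Work-around: several smaller networks, each individually "under threshold",
# ensembled by averaging their predicted probabilities.
small_models = [
    (f"small_{i}", MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=i))
    for i in range(4)
]
ensemble = VotingClassifier(estimators=small_models, voting="soft")

large.fit(X, y)
ensemble.fit(X, y)
print("large model accuracy:   ", large.score(X, y))
print("ensembled small models: ", ensemble.score(X, y))
```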
