Riccardo

COO @ The Singularity Group
26 karma · Joined Aug 2022 · Working (6-15 years)
singularitygroup.net/

Bio

I work together with a group of activists in Germany to make a difference in the world. You can find more details on our website: https://singularitygroup.net/

Starting in 2023, with the release of new AI technologies like GPT-4, we have somewhat shifted our focus towards these developments, trying to raise awareness about the capabilities of the new tech. We do this mainly through livestreams that implement and combine the latest available APIs with entertainment, to reach a larger audience. A bit more info on what we've worked on is here: https://customaisolutions.io/

We have tried many other projects in the years since I joined the group (2015), from fundraising for charity and spreading awareness to working on a mobile game.
The reason we decided to work on the game "Mobile Minigames" is that the mobile games industry is one of the biggest industries in the world in terms of profits and audience. We want to use our experience in the industry to build a platform we can use for good, as well as make money we can put towards good causes.

How others can help me

If you're interested in what we're doing you can always apply to work together with us:
https://singularitygroup.net/

I'm the person doing the interviews, so I'm looking forward to speaking with you :D

I'm also interested in different perspectives, especially on how to have the biggest impact, as long as they have a practical use and aren't just arguments for argument's sake.

How I can help others

I have some life experience in figuring out what the best thing I can do is, and I've thought a lot about how I can have the biggest impact. I've also talked to a lot of people about this topic. I think everyone has their own path to reach the same conclusion in the end, so if you're on that path and have questions you could reach out, though there are also other great resources out there.

I'm also quite interested in productivity and optimizing workflows, so when it comes to organizing I could also give some advice, or at least share what I've found works really well for me.

Comments

Thank you for the references, I'll be sure to check them out!

Since these developments are really bleeding edge, I don't know who is really an "expert" I would trust to evaluate it.

The closest thing to answering your question is maybe this recent article I came across on Hacker News, where the comments are often more interesting than the article itself:
https://news.ycombinator.com/item?id=35603756

If you read through the comments, which mostly come from people who have followed the field for a while, they seem to agree that it's not just about "scaling up the existing model we have now", mainly for cost reasons, but about doing things more efficiently than now. I don't have enough knowledge to say how difficult this is, whether those different methods will need to be something entirely new or whether it's just a matter of trying what is already there and combining it with what we have.

The article itself can be viewed skeptically, because there are tons of reasons OpenAI's CEO might issue a public statement, and I wouldn't take anything in there at face value. But the comments are maybe a bit more trustworthy / perspective-giving.

Thanks a lot for transcribing this, was a great read!

Small nitpick: I think there is a word missing here:
> "which seems perhaps in itself" (bad?)

Yeah, big companies wouldn't really use the website service; I was thinking more of non-technical one-man shops, things like restaurants and similar.

I agree that governments will definitely try to counter it, but it's a cat-and-mouse game I don't really like to explore: sometimes the government wins and catches the terrorists before any damage gets done, but sometimes the terrorists manage to get through. Right now, getting through often means several people dead, because a terrorist can only do so much damage today, but with more powerful tools they could do a lot more.

I'd argue that the implementation of the solution is work and a customer would be inclined to pay for this extra work.

For example, right now GPT-4 can write the code for a website, but you still need to deploy the server, buy a domain, and put the code on the server. I can very well see an "end-to-end" solution provided by a company that does all these steps for you directly.

In the same way, I can very well see a commercial incentive to provide customers with an AI where they can, e.g., upload their codebase and then say: based on our codebase, please write us a new feature with the following specs.

Of course the company offering this doesn't intend for their tool, where a company can upload their codebase to develop a feature, to get used by some terrorist organisation. That terrorist organisation uploads a ton of virus code to the model and says: please develop something similar that's new and bypasses current malware detection.

I can even see there being no oversight, because of course companies would be hesitant to upload their codebase if anyone could just view what they're uploading; probably the data you upload is encrypted, and therefore there is no oversight.

I can see there being regulation for it, but at least currently regulators are really far behind the tech. Also, this is just one example I can think of, and it's related to a field I'm familiar with; there might be a lot of other, even more plausible / scarier examples in fields I'm not as familiar with, like biology, nanotechnology, pharmaceuticals, you name it.

Maybe to explain in a bit more detail what I meant with the example of hallucinating: rather than showcasing its limitations, it's showcasing its lack of understanding.

For example, if you ask a human something and they're honest about it, they won't make something up when they don't know; they'll just tell you the information they have and say that beyond that they don't know.

In the hallucinating case, the AI doesn't say that it doesn't know something (which it often does, btw); it doesn't understand that it doesn't know, and just comes up with something "random".

So I meant to say that its hallucinating is showcasing its lack of understanding.

I have to say, though, that I can't really be sure why it hallucinates; it's just my best guess. Also, for creativity there is something you can do with prompt engineering, but indeed in the end you're limited by the training data + the max tokens you can input for it to learn context from.

Loved the language in the post! To the point without having to use unnecessary jargon.

There are two things I'd like you to elaborate on if possible:

> "the challenge is getting AIs to do what it says on the tin—to reliably do whatever a human operator tells them to do."

If I understand correctly, you imply that there is still a human operator to a superhuman AGI. Do you think this is the way alignment will work out? What I see is that humans have flaws; do we really want to give a "genie" / extremely powerful tool to humans who already struggle with the powerful tools they have? At least right now these powerful tools are in the hands of the more responsible few, but if they become more widely accessible, that's very different.

What do you think of going in the direction of developing a "Guardian AI", which would still solve the alignment problem using the tools of ML, but would involve humans giving up control of the alignment?

The second one is more practical: which actions do you think one should take? I've of course read the recommendations that other people have put out there so far, but I'd be curious to hear your take on this.
 

From my current understanding of LLMs, they do not have the capability to reason or have a will as of now. I know there are plans to see if this can be made possible with specific built-in prompts, but the way the models are built at the moment, they do not have an understanding of what they are writing.

Aside from my understanding of the underlying workings of GPT-4, an example that illustrates this is that sometimes, if you ask GPT-4 questions it doesn't know the precise answer to, it will "hallucinate", meaning it will give a confident answer that is factually incorrect / not based on its training data. It doesn't "understand" your question; it is trained on a lot of text, and based on the text you give it, it generates some other text that is likely a good response, to put it really simply.
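
To make that "likely next text" idea concrete, here's a minimal sketch. It uses the small open GPT-2 model via the Hugging Face `transformers` library rather than GPT-4 (whose weights aren't public), and the Atlantis prompt is just a made-up illustration I chose: the model only scores which tokens are probable after the prompt, with no notion of whether the continuation is true.

```python
# Minimal illustration of "predict a likely next token" (GPT-2 as a stand-in for GPT-4).
# Requires the `transformers` and `torch` packages.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

# A question with no factual answer -- the model will still propose confident continuations.
prompt = "The capital of Atlantis is"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits[0, -1]  # scores for the token right after the prompt

probs = torch.softmax(logits, dim=-1)
top = torch.topk(probs, k=5)
for p, idx in zip(top.values, top.indices):
    # Each line means "this token is likely here", not "this is true".
    print(f"{tokenizer.decode(idx)!r}: {p:.3f}")
```

Nothing in that loop checks facts; it only ranks plausible continuations, which is why a confident-sounding but made-up answer ("hallucination") falls out naturally.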

You could make an argument that even the people at OpenAI don't truly know why GPT-4 gives the answers it does, since it's pretty much a black box that is trained on a pre-set collection of data, after which OpenAI adds some human feedback. To quote from their website:

> So when prompted with a question, the base model can respond in a wide variety of ways that might be far from a user’s intent. To align it with the user’s intent within guardrails, we fine-tune the model’s behavior using reinforcement learning with human feedback (RLHF). 


So as of now, if I get your question right, there is no evidence I'm aware of that would point towards these LLMs "applying" anything; they are totally reliant on the input they are given and don't learn significantly beyond their training data.

The reasons you provide would already be sufficient for me to think that AI safety will not be an easy problem to solve. To add one more example to your list:

We don't know yet if LLMs will be the technology that reaches AGI; it could also be one of a number of other technologies that, just like LLMs, make a certain breakthrough and then suddenly become very capable. So just looking at what we see developing now and extrapolating from the currently most advanced model is quite risky.

For the second part, about your concern for the welfare of AIs themselves, I think this is something very hard for us to imagine. We anthropomorphize AI, so words like 'exploit' or 'abuse' make sense in a human context where beings experience pain and emotions, but in the context of AI they might just not apply. That said, I still know very little in this area, so I'm mainly repeating what I've read is a common mistake to make when judging morality in regards to AI.

The FAQ response from Stampy is quite good here:
https://ui.stampy.ai?state=6568_
