Dear friends, 

Since December, there has been quite a lot of noise about ChatGPT. Even here there may have been comments hailing it as a great new step in the field of intelligence, or even AGI. 

However, in my own experience, I've managed to force (for lack of a better word) ChatGPT (the December 15th, 2022 update, I believe) into consistently producing errors in our interaction. I wrote a post documenting my experience and thoughts here: 

January ResearchGate post

I have screenshots of three or four more attempts in which I successfully produced the errors, which I can provide upon request (or if I figure out how to do so here). 

Yesterday, after a significantly longer conversation (around an hour), I also managed to produce an error on the February 13th update of ChatGPT (the one currently in use). I have a video of the whole conversation, which, again, I can share here. 


There were no threats or weird behaviour by ChatGPT. All I noticed was the ease with which it contradicted itself or got into tangles, at times taking a while to get back to me, as if 'it was thinking', before finally giving up and producing an error. 

However, after discussing with a friend, I was presented with the worrying possibility that something else may be going on under the bonnet: that the last error, at least, which appeared after I had waited 20 minutes for ChatGPT to finish its sentence, was produced by mechanisms similar to those used in the Great Firewall of China, at least in the way it filters and blocks conversations containing certain keywords by slowing things down and eventually producing errors. I would really appreciate it if any OpenAI people reading this could reassure me that this is not the case.

Does anybody have similar observations? Have you noticed (with evidence, preferably screenshots or videos of conversations) ChatGPT faltering in conversations and eventually producing errors? Are there any programmers out there who could explain why it would 'get itself in tangles' over topics such as what constitutes opinion and information, or what a discussion is? Can anybody convince me that it is really 'intelligent' in a way that would justify trusting it with (even parts of) significant decisions or citations, or even useful conversations, as it claims?

Best Wishes,
Thanks for the space,
Haris Shekeris


Being intelligent and being error-prone are not mutually exclusive. Humans are highly intelligent, and yet they make mistakes constantly. I believe AGI will have mental flaws and make errors as well. 

ChatGPT is very far from human-level intelligence. All it's trying to do is predict text based on gargantuan amounts of training data. So if there are lots of examples online of the thing you're trying to do, such as writing a cover letter, it can learn to do it very well, but if your task is highly specific, it will be more likely to make errors. 
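For what it's worth, the "predict the next token" loop described above can be illustrated in a few lines of code. This is only a minimal sketch: it assumes the Hugging Face transformers library and uses the small, open GPT-2 model as a stand-in, since ChatGPT's actual model and serving stack are not public.

```python
# Minimal sketch of greedy next-token prediction, the core loop behind
# models like ChatGPT. Assumes `pip install transformers torch`; GPT-2
# here is a stand-in, not ChatGPT's actual model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The capital of France is"
input_ids = tokenizer(prompt, return_tensors="pt").input_ids

# Generate 10 tokens: at each step the model scores every token in its
# vocabulary, and we append the single most likely one. (Real systems
# sample from the distribution and cache past computation; this re-runs
# the whole sequence each step for clarity.)
for _ in range(10):
    with torch.no_grad():
        logits = model(input_ids).logits
    next_id = logits[0, -1].argmax()
    input_ids = torch.cat([input_ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(input_ids[0]))
```

The point of the sketch is that there is no lookup of facts or checking of consistency anywhere in the loop; the model only ever picks a plausible next token, which is why it can contradict itself so easily.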

It's still highly impressive, though: speaking natural language used to be highly difficult for AI, and ChatGPT nails it on this front. It can do things in terms of adapting to prompts and basic reasoning that surprised me. 

Dear friend @titotal, 

Many, many thanks for your measured response, as well as for the link to your article, which I found very enlightening. I think I agree with your assessment that the transition to AGI, or something close to it, will not take place overnight, and that it may even never arrive, or at least that there won't be the kind of AGI existential threat that many prominent commentators, even in this community, assume. 

However, as you may see from my own (admittedly a bit polemical) linked post (which, I see now, I haven't managed to turn into a hyperlink), I'm a bit worried by us humans making AI (or computability, anyway) the yardstick of our intelligence, and then being surprised when we fail by that measure or find something better at it, rather than naming the thing as something different from intelligence. A sort of negative performativity in action there. 

So, in summary: fine, ChatGPT nails responding to linguistic prompts, good, excellent. But let's not reduce what we humans believe makes us lords of the universe (intelligence; this is a bit tongue-in-cheek, as I also believe that animals have civilisations and intelligences of their own) to responding to prompts, when we can do so much more. I believe that intelligence also entails emotion, artistic behaviour, cooking, empathy, and other behaviour not reducible to 'responding to prompts'. 

Best Wishes
Apologies if I was waffling a bit above, I'd be delighted to hear your thoughts!
Haris

 

PS: The edit is just changing the link to the article into a hyperlink :)
