My daughter, who has had a degree in computer science for 25 years, posted this observation about ChatGPT on Facebook. It's the best description I've seen:

Something that seems fundamental to me about ChatGPT, which gets lost over and over again:

When you enter text into it, you're asking "What would a response to this sound like?"

If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing!

But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it is doing something else.

It's good at generating things that sound like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation.


@DrewKadel Love this. We want so badly for ChatGPT to produce answers, opinions, and art, but all it can do is make plausible simulations of those things. As a species, we've never had to deal with that before.


@ngaylinn @DrewKadel to be fair, when most people on the Internet are given a question, they write the right answer, rather than everyone consistently writing the same wrong answer!

(* For most things, I'm sure there are several compelling counterexamples)


@ngaylinn @DrewKadel Well, we have had to: conceptually this is exactly the same as with Eliza, just two orders of magnitude more sophisticated and two orders of magnitude more connected.

At its core it's really just people lacking technical understanding hallucinating an anthropomorphization of a conditional probability distribution.

With ChatGPT, the interface is the innovation, not the model.
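
Concretely, "a conditional probability distribution" here just means: given the text so far, a table of how likely each possible next token is. A minimal toy sketch of sampling from one, where the context string, tokens, and probabilities are all invented for illustration (a real model learns such numbers from training data for every possible context, rather than reading them from a hand-written table):

    // Toy stand-in for a learned next-token distribution.
    // All contexts, tokens, and probabilities here are made up.
    const nextTokenDistribution = {
      "the cat sat on the": { mat: 0.6, sofa: 0.3, moon: 0.1 },
    };

    function sampleNextToken(context) {
      // Fall back to a placeholder when the context is unknown.
      const dist = nextTokenDistribution[context] ?? { "...": 1.0 };
      let r = Math.random();
      for (const [token, p] of Object.entries(dist)) {
        r -= p;
        if (r <= 0) return token;
      }
      return Object.keys(dist).pop(); // guard against rounding error
    }

    // Nothing in this procedure checks whether the continuation is
    // true; it only ever picks something likely-sounding after the
    // context, which is the whole point of the thread above.
    console.log(sampleNextToken("the cat sat on the")); // e.g. "mat"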


@ngaylinn @DrewKadel
Actually we as a species have had to deal with that before.

We call them grifters.


@DrewKadel Does that image have alt text? I don't see the usual tooltip that I expect to pop up over an image...

Anyway, I like the response. I also think it's worth keeping in mind that (many) people are hoping this line of machine learning research is going to lead to a system that does give real answers, or at least is good enough at completion that its responses are indistinguishable from real answers. So there's always going to be a drive to test how well the models are doing and to push them toward giving more realistic and reliable responses, regardless of the fact that they're not actually designed to do that.


@diazona @DrewKadel No alt-text, and as a screen-reader user, I now don't know what the poster's daughter actually said, unfortunately...


@FreakyFwoof @diazona @DrewKadel

Here you go,
"Something that seems fundamental to me about ChatGPT, which gets lost over and over again:
When you enter text into it, you're asking "What would a response to this sound like?"
If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and...


@FreakyFwoof @diazona @DrewKadel

...an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing! But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it is doing something else. It's good at generating things that sound...


@FreakyFwoof @diazona @DrewKadel like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation. "


@simon_lucy @diazona @DrewKadel Profound, and oh so true.


@simon_lucy @diazona @DrewKadel Aah, thanks very much. I appreciate it.


@FreakyFwoof @diazona I just woke up and added the alt text


@DrewKadel @diazona Thank you for doing so. It does help.


@DrewKadel @diazona And as a result, I feel confident in sharing it. As a screen-reader user without any sight whatsoever, it's sometimes hard to know whether what I think is true, is true. Without proper context, I could be sharing text that, in a post, says one thing, but the image could well be anything, from hate-speech to well, you can imagine. Alt-text isn't just for Christmas, it helps so many, many people.


@FreakyFwoof @diazona @DrewKadel I know some people just hate phone apps, but this is an area where they're unquestionably superior. I'm using MetaText. VoiceOver saw the attached image, automatically recognized it, and read the text without me having to take any additional steps.


@bryansmart @diazona @DrewKadel Nope, that's not the reason. The poster added alt-text just recently. I always check before asking.


@FreakyFwoof @diazona @DrewKadel

Here's what it is:

Something that seems fundamental to me about ChatGPT, which gets lost over and over again:

When you enter text into it, you're asking "What would a response to this sound like?"


@FreakyFwoof @diazona @DrewKadel

If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing!


@FreakyFwoof @diazona @DrewKadel

But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it is doing something else.

It's good at generating things that sound like responses to being told it was wrong, so people think it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation.

[Fin]


@diazona @DrewKadel But why? What would be the purpose of a machine learning system that does give good answers?


@DrewKadel

It's so important for everyone to understand this


@DrewKadel No introspection and only waiting to generate the next line of a conversation. Sounds like quite a few humans!


@PCOWandre I think that's why so many are fooled, they don't even notice the role of listening, understanding or introspection in thought or communication. Of course they have some, quite a bit really, but they don't value it and ignore it.


@DrewKadel Has she made it public? And if so, would you mind sharing the link? I know a lot of people who really need to see this.


@Julie Last I looked, she keeps her posts just for friends. I couldn't share it on FB, but you could send her a friend request: Rachel Meredith Kadel


@DrewKadel Great description. The bot described itself in one of my conversations with it:


@Odiseo79 @DrewKadel

again, no alt-text. Please, folks. Be inclusive.

#alt4you (https://home.social/tags/alt4you) Screenshot. Odiseo asks ChatGPT: "so you are just simulating that you are intelligent". ChatGPT answers: "Yes, that's correct. As an AI language model, my responses are based on statistical patterns and algorithms rather than true intelligence or consciousness. I'm designed to simulate human conversation.... My responses are generated based on the data I was trained on and the algorithms used..."


@Odiseo79 @DrewKadel the meta thing is that ChatGPT doesn't really know how it works; it's been instructed to say stuff like that, and mostly it does, but it's still just a simulacrum: it's not really introspecting when you ask it whether it's intelligent or conscious, OpenAI just trained it to say "no."


@Odiseo79 @DrewKadel that is, it's not evidence either way. OpenAI could have chosen to train it to insist that it is conscious instead and it would have done so. So what you're really getting when you ask it about itself is mostly what OpenAI wants you to hear


@roywig @Odiseo79 That's mostly right, but it was smart of the company to train it thus: it increases its credibility in the face of obvious flaws in consciousness. Otherwise it would have been exposed as a fraud instead of interesting & amazing. Like the Mechanical Turk, which played chess: it turned out that a small person in the base of the machine was playing the games, and its promoters were treated as charlatans.


@Odiseo79 @DrewKadel (that's not true for most topics, but I assume that OpenAI has fairly carefully cultivated what information it's seen about ChatGPT, if only for PR purposes. If they hadn't done that, you'd be getting responses that were its best guess based on its corpus of material, which could include information about how it works. It doesn't know more about how it works than about how Bing Chat works or how a motorcycle works; if it's read the operating manual, it can tell you a bit, but it's not *actually* introspecting, any more than humans can tell you how their brains work beyond repeating what they read in a neurology textbook)


@roywig @Odiseo79 It's interesting though to consider how humans learn. I've read mostly popular stuff about neurology & absorbed some concepts, but as I get old I start to put together ideas based on the changes I've perceived in myself over time + what I've read. For instance, I'm more convinced that the brain is merely a physical organ that undergoes changes over time, and that humans are less wise than they think. (I'm writing a theology book featuring that.) We're more tool makers than wise.


@DrewKadel This evening I talked it into telling me that it was "plausible" that God is a mushroom. It didn't want to. I coaxed it.

But what it did was not like what I did. It just responded. Choosing a controversial subject is ... revealing.


@DrewKadel (And I could talk it into saying it's plausible easier next time... I learned from my prompts. It didn't.)


@DrewKadel

I've been a fairly prompt engineer: mostly on time to meetings and reasonably quick to respond to questions.


@OCRbot


@DrewKadel indeed, the best use case I find for that thing is “please write a speech full of lies and bullshit just like [politician] would do”, and it gets pretty close to the usual crap the original would produce


@DrewKadel alt text:
Something that seems fundamental to me about ChatGPT, which gets lost over and over again:
When you enter text into it, you're asking "What would a response to this sound like?"
If you put in a scientific question, and it comes back with a response citing a non-existent paper with a plausible title, using a real journal name and an author name who's written things related to your question, it's not being tricky or telling lies or doing anything at all surprising! This is what a response to that question would sound like! It did the thing!
But people keep wanting the "say something that sounds like an answer" machine to be doing something else, and believing it is doing something else.
It's good at generating things that sound like responses to being told it was wrong, so people think that it's engaging in introspection or looking up more information or something, but it's not, it's only, ever, saying something that sounds like the next bit of the conversation.


@DrewKadel alt text in previous reply, @diazona @clacke


@DrewKadel She’s right. And that being the case, why on earth is anyone using it? If it’s not providing correct information, WTH is the point?

I just saw a post that cites info from ChatGPT about companies doing business in TN. As if it’s an authoritative source…


@DrewKadel

https://historians.social/@vecrumba/110155292712561186


@kyozou @DrewKadel My impression is that it can be useful for tasks like brainstorming, because you then have the most obvious ideas right in front of you and can build on them.

I realize that this can make some people uncomfortable, but I believe that it may actually lead to more human creativity and originality rather than less.


@kyozou @DrewKadel Well, it is usually correct for questions that are common on the Internet.

Today I pasted in a data definition and asked it to implement some simple filtering in JavaScript. I could have looked it up in five minutes, but it generated something that worked (with my particular data) in five seconds. I made a follow-up request, "write this in functional style", and it did. I generally know JavaScript, but it used an API method that I didn't know about.
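
A sketch of what such an exchange might produce; the data definition and filter condition here are invented stand-ins for illustration, not the actual code from this exchange:

    // Invented stand-in for the pasted data definition.
    const items = [
      { name: "alpha", active: true,  score: 42 },
      { name: "beta",  active: false, score: 17 },
      { name: "gamma", active: true,  score: 9 },
    ];

    // A typical first answer: a plain imperative loop that collects matches.
    function activeHighScorers(list) {
      const result = [];
      for (const item of list) {
        if (item.active && item.score > 10) result.push(item);
      }
      return result;
    }

    // After "write this in functional style": the same filter as a
    // single expression using Array.prototype.filter.
    const activeHighScorersFn = (list) =>
      list.filter((item) => item.active && item.score > 10);

    console.log(activeHighScorers(items));   // [ { name: "alpha", ... } ]
    console.log(activeHighScorersFn(items)); // same result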


@maxy @kyozou It's not that it's useless. For a sophisticated user like you it can be a very useful tool. But its apparent linguistic sophistication leads people who are more sophisticated in, say, literature or historical research than in computer tech to think that it does more than it does.


@maxy @kyozou @DrewKadel

And since asking real humans on Stack Overflow your JavaScript question is more likely to result in a toxic response, people have actually found ChatGPT to be more helpful in this regard:
https://youtu.be/N7v0yvdkIHg


@kyozou People use it because it's attractive and talks in language we can understand. Some sophisticated users can get it to do things like aspects of coding, but as a shortcut, and they know how to test for reliability. But, good grammar aside, it's not ready for use in the humanities or scholarship. Ironically, its big weakness is that it doesn't give right answers about things found in uncontrolled natural language, though it's all about natural language.


@DrewKadel This seems squarely in "Chinese Room" territory. (https://en.m.wikipedia.org/wiki/Chinese_room)

The main troubling questions, if you accept this approach, are: "Where does understanding/meaning reside?" and "What specifically makes humans distinctive?"


@alakest @DrewKadel I think the point being missed is that language processing is just the tip of the iceberg. GPT-4 is already multi-modal and can "reason" through objectives like how to get a human to fill out a reCAPTCHA form on its behalf. So it's good to understand how LLMs work, but the results will go far beyond what we see happening currently, and it doesn't matter whether the "reasoning" is human-like or a solution to a probability equation https://cdn.openai.com/papers/gpt-4.pdf


@alakest @DrewKadel They aren't: much of social discourse is 'saying what you're supposed to say'. I sometimes wonder if you could write a phrasebook that would be able to socialise perfectly well with humans, because the conversations are so scripted.

A good example is my local butcher: he'll try to engage you in conversation but never listens to the response, merely says the next line in the script he has in his head, which probably works for most people but not me.


@alakest @DrewKadel ChatGPT doesn't have understanding though... it's a trained parrot. Humans are capable of logical analysis and genuine creativity.


@tony @alakest ChatGPT doesn't have the cognitive abilities of a parrot.


@DrewKadel @tony @alakest
For example, parrots know that 2 is bigger than 1.

https://en.m.wikipedia.org/wiki/Alex_(parrot)

Large Language Models struggle with this
https://wetdry.world/@w/110064653547143681


@alakest @DrewKadel Understanding something in the real world, like a physical object, involves knowledge beyond just words. But either way, I don't really see this as a Chinese Room question: there is plenty about humans that differentiates us from even a generalised artificial intelligence. We have built-in quirks; we aren't simply "intelligence" and nothing else. We have evolved fears and desires, preferences, interests. Those are what make us human.


@DrewKadel please add alt Text, especially for screenshots of text.

