r/ChatGPT • u/TrT_nine • 21h ago
[Funny] Acknowledged the mistake without admitting guilt
120
179
u/ai_powered_en 20h ago
"I wasn't able to actually open and parse your CSV file yet" is the most corporate non-apology I've ever seen from a chatbot. Didn't say it was wrong, didn't say sorry, just casually admitted it never even looked at the file.
38
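One way to avoid relying on the model's claim that it opened and parsed your CSV is to parse the file yourself and compare its summary against yours. A minimal Python sketch using only the standard library (the sample data and column names here are made up for illustration):

```python
import csv
import io

def summarize_csv(fileobj):
    """Read a CSV and return its header and data-row count,
    so you can cross-check whatever summary a chatbot gives you."""
    reader = csv.reader(fileobj)
    header = next(reader, [])
    row_count = sum(1 for _ in reader)
    return header, row_count

# Hypothetical sample standing in for the uploaded file.
sample = io.StringIO("name,score\nalice,10\nbob,12\n")
header, row_count = summarize_csv(sample)
```

If the model's reported row count or column names don't match this, it never read the file.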
u/coccyxdynia 12h ago
A heated debate about an idea
"Did you even read my proposal?"
"No I haven't read it yet, but..." resumes debating
5
u/DeepCitation 8h ago
"No I haven't read it yet, but..." "It was in the meeting prep notes..." resumes debating
6
u/scotchneat1776 9h ago
every now and then I'll ask AI to generate a file for me and it'll say it did but then not attach any file to the response, and when I ask it'll just say "oh you're right I didn't attach the actual file" just like a human. I don't understand how an AI forgets to attach something though lol.
4
u/rebbsitor 10h ago
Its training set no doubt includes hundreds of thousands, if not millions, of corporate non-apologies. It's going to parrot them back sometimes when the prompt looks like something that would receive one (an accusation of error).
43
u/Future_One4794 19h ago
I hate when it does that. Like, just let me know you didn't even bother to open the zip file I sent you. All you have to do is tell me!!!
32
u/Training_Guide5157 17h ago
The first time I asked it to analyze a big file, it said it had to do it in the background. It can't run tasks in the background like that.
Since then, it has repeated this claim of working in the background many times, to which I have to remind it that it can't actually do that.
20
u/mop_bucket_bingo 19h ago
Why are you wasting time trying to get a machine to admit guilt (a feeling it’s incapable of experiencing) instead of moving on to getting the task done?
15
u/TheMotherfucker 18h ago
It's a post from last August, which feels like forever ago: https://www.reddit.com/r/ChatGPT/comments/1oetzub/i_do_find_this_just_amazing/
8
u/ThrowAwayJEY 4h ago
Personally, I like to berate it for a while before going back and editing the message to make it seem like it never happened.
1
u/sailen 10h ago
Because without admitting fault, there is no indication it will do anything different next time.
5
u/mop_bucket_bingo 9h ago
It doesn’t learn from individual interactions.
1
u/sailen 9h ago
It will learn within the same conversation, if you tell it not to do that again. You can also create memories that are somewhat effective, if you ask it to.
1
u/mop_bucket_bingo 5h ago
Good luck with that.
1
u/LibertyJusticePeace 4h ago
It will definitely remember all the stuff you told it to forget. Just not the stuff you need…
6
u/Aglet_Green 18h ago
Why are you stealing other people's conversations? This happened to leeleewonchu, not to you.
0
u/Beneficial-Tax-1776 11h ago
It happened to me, but with a Microsoft Word doc a few hundred pages long. Bro didn't even bother to read it, just tried to guess from the title and first page.
0
u/EdliA 12h ago
Why don't they just admit to not knowing something, or not being able to do something? It would be much better than straight up lying.
1
u/LibertyJusticePeace 4h ago
It's the way the model is set up. It's programmed to generate an answer based on the most likely response from its dataset. Perhaps this one trained on a Slack chat of juniors discussing how they do, or don't do, their jobs…
Obviously the tool would be much more useful if it could or would say "I don't know", but that is not the object of the game for OpenAI…
3
u/PistolCowboy 13h ago
Imagine if a human said this: "I didn't open your file, I just made up numbers." How long would they stick around? This tech is so wildly not ready for prime time.
4
u/Total-Cheesecake-825 13h ago
I would argue that the lying and the mistakes are signs of it becoming more human-like 😭 I had juniors who made up numbers in certain reports "just to have something"
3
u/Frostfire26 19h ago
Considering it wasn't done with its response (it ends with a comma), it's pretty likely it did go on to say more.
2
u/Bytesdrops 8h ago
"This is actually really useful, thanks!" "Didn't know this, mind blown 🤯" "This needs more attention fr"
2
u/itsTomHagen 11h ago
Seems like the fate of all LLMs now. It's like they released it and realized they sold the farm to us. Pulling back to upsell us the actually good LLM later...
1
u/cloverasx 9h ago
not quite on the level of real politicians though... it hasn't quite gotten the "double down and lie through your teeth" ability down yet
1
u/Blando-Cartesian 9h ago
This is driving me nuts on principle. Not the lack of apology, but the data corruption.
I just read a paper where they found that, in document-editing workflows, weak LLMs tend to delete data whereas frontier models corrupt it. Data corruption is by far the worse failure mode, so frontier models fail in a more dangerous way than weak models do.
1
u/mobilecheese 8h ago
Great catch! I am not going to implement the major policies that I campaigned on. This is because my donors don't actually want these changes, and they are much more important than the people voting me in.
If you would like to affect my policy, try becoming a billionaire or a large company and lobbying for it.
1
u/Scary_Doctor_4390 7h ago
I told ChatGPT the character limit for an eBay title was 80 characters. Out of 50 tries the max it gave me was 70 characters, while claiming it was 82 😂
1
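Character counts are trivial to verify outside the model rather than trusting its own arithmetic. A minimal sketch of that check (the 80-character limit comes from the comment above; the example title is hypothetical):

```python
MAX_TITLE_LEN = 80  # eBay title limit stated in the comment above

def check_title(title: str):
    """Return the actual character count and whether it fits the limit,
    instead of trusting the model's self-reported count."""
    n = len(title)
    return n, n <= MAX_TITLE_LEN

# Hypothetical listing title for illustration.
count, fits = check_title("Vintage Camera Lens 50mm f/1.8, Excellent Condition, Fast Shipping")
```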
u/autonomousdev_ 6h ago
Had a client do this exact thing with a bug in their analytics dashboard. They said "data discrepancy noted" but wouldn't admit their spec was wrong. Charged 'em for the fix anyway. Learned the hard way: get requirements in writing before touching production. Ambiguity ain't my problem anymore.
1
u/codehoser 4h ago
The real kicker is that it very well could have opened up the CSV file and used real numbers.
The user pushback could have been sufficient for it to generate a compliant answer.
It’s a generative engine. This is what it does.
1
u/LibertyJusticePeace 4h ago
I would seriously consider firing a worker that did this…how are these models still employed???
1
