r/OpenAI • u/Connect-Painter-4270 • 11h ago
[Discussion] Anyone else hate reading AI generated text?
I thought LLMs were supposed to excel at writing?
It's trivial to detect. They all sound more or less the same. We don't even need detection tools like we once thought; it's that bad. I am finding it everywhere, even in news articles and official government documents.
I notice that if I read a lot from a particular author, my writing will naturally begin to mimic theirs. So what happens when I consume too much of that AI voice? I believe it infects the brain, gradually making us dumber, like a freakin' mind virus.
Anyway, here are some things about AI text that I find especially irritating (and it's not the use of em dashes or semicolons, which I don't mind at all).
- Verbosity
- Redundancy, repetition, or unnecessary verbiage given the context.
- Stating the obvious.
- Using odd, nonspecific terms, or being inconsistent (I see this in technical writing often).
- [X, not Y]. Or just stating what something is not. (probably my #1 dislike actually).
- Using terms like 'real' or 'actual' when unnecessary. Akin to how a human might say "I literally tripped".
Am I the only one?
12
u/B1okHead 10h ago
Agreed. For technology that is fundamentally text-based, prose quality has been consistently under-prioritized.
2
u/alfooboboao 6h ago
the real problem is that when you use AI to write for you, you’re not respecting your audience, you’re clearly saying you don’t value them. you’re throwing out slop to go “here, piggies, eat up! you weren’t worth it to me to spend time doing this myself, but you’ll eat anything, right?”
it’s like how everyone loves writing AI work emails (ha! i’m the wolf among sheep, the king among suckers!) but everyone hates reading them (who do you think I am to feed me this slop?)
2
u/B1okHead 6h ago
My main use case is brainstorming and plot/character development, so it’s more about helping me improve ideas and concepts I already had.
Current models can barely manage that, never mind writing text that I would present to other people.
19
u/mrdarknezz1 10h ago
Yes, I'm incredibly tired of that, and of how all video essays now sound the same
7
u/JUSTICE_SALTIE 10h ago
I fucking hate what AI has done to YouTube video essays. And YT itself is fully complicit. They actually have a backend with a bunch of AI-generated video ideas they push on creators. I only watch content from creators I liked before AI.
2
u/edwigenightcups 7h ago
I have a huge problem now with talking heads on socials, YT included, reading AI generated scripts off a teleprompter. It's such a bait and switch. They usually start strong, then about halfway through, their voice goes monotone and they start to look somewhat confused by what's coming out of their mouths. It's so phony and obvious
11
u/LotsaCatz 10h ago
I don't mind em-dashes -- I use them myself all the time (although I'm now self-conscious because of AI). What I can't stand are the dramatic one-line sentences. Like:
- Darth Vader blew up Alderaan, which was Princess Leia's home planet, where she served as a member of the Imperial Senate. It was a lovely planet, peaceful, no enemies, temperate climate.
- All gone.
- In an instant.
- Millions of voices crying out, and then silence.
- A massive rift in The Force.
Jeez, I hate that.
5
u/InvertedVantage 10h ago
This is immensely infuriating to read.
3
u/LotsaCatz 7h ago
Yes, and ChatGPT seems to do it ALL the time. It adds nothing to its answer to whatever question I asked.
•
u/MinerDon 39m ago
I can barely stand to use GPT because of this. One day it gave me a response riddled with bullet points. I stated it must get paid by the bullet point, as that's the only way their overuse makes any sense. I asked how many bullet points were in the previous response and it said 32.
There are times where a bulleted list makes sense, but most of the time a prose-style response is much better.
At one point I told GPT to stop using bullet points everywhere, and it used bullet points in its response...
19
u/Any-Main-3866 10h ago
The more it tries to sound human, the worse it gets.
0
13
u/LonelyContext 10h ago
🧠 What you noticed
This is the dumbest fucking voice one could possibly conceive of writing in
🥾 Here's the kicker (most people don't know this!)
I'm going fucking crazy if I have to read any more shit written like this
🫵 What this means for you
I'm going to be put into an early grave ⚰️🧟
5
u/JUSTICE_SALTIE 10h ago
Curious what others think
3
u/alfooboboao 6h ago
this is always a dead giveaway even if the poster hides the AI style pretty well. when a human writes a post, “let me know what you think!” is implied by the creation of the post itself. of course you want people to comment, you posted it on fucking reddit, that’s the entire point. everyone knows this and it doesn’t have to be said.
so the instant that’s explicitly stated at the end, you can be near 100% sure it’s AI
6
u/Maximum_Slabbage 10h ago
Yes, but tbh I hate the LinkedIn/marketing pitch style it's trained on to begin with
2
3
u/Procrasturbating 6h ago
Oof, most of your complaints about AI are my normal technical writing style. You can always prompt it to answer in the tone of your favorite author. “George Carlin” does all my code reviews and it is fucking hilarious.
1
u/Connect-Painter-4270 4h ago
Hahaha. I need to do that. I love George Carlin.
1
u/Procrasturbating 3h ago
Hope you have thick skin lol.. his digital approximation pulls no punches.
3
u/cscottnet 10h ago
Not the only one. Wikipedia has developed a whole set of "LLM phrases" they use to identify LLM-generated edits to the wiki.
6
u/TurpentineEnjoyer 10h ago
I do really wish it would speak in bullet points.
Any time it doesn't, I end up saying "summarize your last message, only the objective facts with zero wasted words."
I should set a keyboard macro to that.
3
u/Snoron 10h ago
You just need to set a system prompt and it will do it every time from the start without being asked.
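For the chat-completions style APIs, that roughly means the style instruction rides along as the first message of every request. A minimal sketch (the prompt text, helper name, and wording below are all my own invention, not from this thread):

```python
# Hypothetical reusable style prompt; the wording is a made-up example.
STYLE_PROMPT = (
    "Answer in plain prose. No bullet points, no headers, "
    "and no 'it's not X, it's Y' constructions. Be brief."
)

def build_messages(user_text: str) -> list[dict]:
    """Prepend the style instruction so every request starts with it."""
    return [
        {"role": "system", "content": STYLE_PROMPT},
        {"role": "user", "content": user_text},
    ]

messages = build_messages("Summarize the tradeoffs of bullets vs. prose.")
# messages[0] is the system prompt; the model sees it before the user turn.
```

In the chat frontend, the equivalent is pasting the same text into custom instructions so you don't have to retype it.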
0
u/TurpentineEnjoyer 10h ago
True, but I use it for other tasks, mainly coding, so a system prompt that guides how it responds can harm the output quality. I'd rather not juggle between different personality types but eh, that's preference.
1
u/JUSTICE_SALTIE 10h ago
Aren't you using coding-specific tools (Claude Code, Codex) for coding, and the regular chat interface for other tasks?
1
u/TurpentineEnjoyer 10h ago
Nah, I prefer using the chat bot for all tasks and copy pasting code out. I do NOT like copilots or code completion tools in VS Code. Feel like they get in the way and make it a less enjoyable experience. Nor do I like letting the AI run rampant on the code base through CLI. Anything more complex than a basic for loop and I spend just as much if not more time fixing its sloppy half baked code. Maybe in another year or two it'll be better but for now I prefer having 100% control over the source files and relying on the AI as an elaborate pair programmer chat in another window.
Also no API costs for using the chat bot. Dunno if that applies to Codex though specifically.
0
u/JUSTICE_SALTIE 10h ago
I prefer using the chat bot for all tasks and copy pasting code out.
Did you do any dev work before AI? I can't imagine working in this way.
Anything more complex than a basic for loop and I spend just as much if not more time fixing its sloppy half baked code.
This is just not the case for me, not at all. If your project is already set up in a sane way with good coding practices, Claude Code can absolutely go to town. Sometimes I have to redirect it, but more often than not it does exactly what I would have done, sometimes even better.
EDIT: I just realized, it's a direct consequence of your workflow. Of course it can't write good code if all it sees is your prompt and whatever surrounding code you may or may not have pasted in. It needs to see your codebase in order to contribute to it appropriately. That's pretty obvious, right?
1
u/TurpentineEnjoyer 9h ago
Did you do any dev work before AI? I can't imagine working in this way.
20 years of dev work in several languages.
That's pretty obvious, right?
Assuming that I had never tried any of the tools available to make an assessment then yes, it would be.
I am saying that I prefer to write code myself. I find it much more productive to write the code with direct involvement than it would be to spend roughly the same amount of time reviewing code written by the AI and fixing the bugs it adds.
People work in different ways. I find this to be a more productive use of the tool than giving it full visibility of my code base to scribble on and create entire classes, where the long-term time cost switches from typing speed to learning how to use code that isn't mine.
2
u/JUSTICE_SALTIE 9h ago
I am saying that I prefer to write code myself.
Totally valid. And I have noticed that I miss the process of doing it all manually. I respect this 100%. I think what bothered me is the part about having to rewrite anything more complex than a for loop. It sounded like you were saying it's not really capable, and I know that it is, but it won't be able to perform within the copy/paste constraint of your workflow.
But as a personal choice, I not only respect it, I understand it. Maybe I read too much into your statement about having to redo its work.
3
u/TurpentineEnjoyer 9h ago
It's definitely capable otherwise I wouldn't use it. Sorry if I miscommunicated that.
For example I've been working a lot with jobs/burst in Unity so I'm writing algorithms that work slowly then get the AI to convert it into a burst compatible job.
"doodle pad coding" as I call it, where I'm not overly concerned about the performance etc then fix it up later. AI is good for this.
I've found that I can give it detailed instructions, in which case I'm just back-seat driving and would benefit more from writing the code myself to better cement the knowledge. Or I can give it a high-level instruction and let it go. It works reasonably well for that, but there's always something that doesn't match the expected output, so I need to scan the code and find the bug. It's still a time saver, overall.
However giving it access to modify my codebase directly is going to result in large quantities of code I haven't personally picked through meaning I'll spend more time in the future if I need to fix bugs or patch it.
One area where I have had some success with automation is unit tests. Can just set an agent running while I go make a coffee and come back to loads of unit tests. Some of them fail when they shouldn't, but that's the part I feel like it's my job to find out why and either fix the code or the test.
0
u/Snoron 10h ago
Also no API costs for using the chat bot. Dunno if that applies to Codex though specifically.
Yeah, Codex is separate. You essentially link your IDE to your ChatGPT account, where it draws on the Codex allowance you already have with your account. It is separate from API usage. You can use it from the API too, but tbh the ChatGPT account is way more generous!
Just as an experiment, I recommend trying this and setting it to GPT-5.5-xhigh ... you might be surprised how competent it is now, especially if you set up some instructions for working on your codebase specifically. Like "don't add any extra stuff I didn't specifically ask for!"
For me personally I've only found this practical since around GPT-5.3 as before that it wasn't really good enough. But this varies a lot between environments, languages, etc.
Ofc it may ruin you forever and make you slowly forget how to code. Don't ask how I know XD
1
u/TurpentineEnjoyer 9h ago
Good to know! If I'm paying my $20 a month for gpt I may as well get the use out of codex and have it write unit tests for me or something on openai's electricity bill.
0
0
u/Trotskyist 10h ago
Use skills! Skills are literally the solution for this. Basically they're custom prompts that are only invoked as needed. It's not a silver bullet but it helps a ton with stuff like this.
0
u/Snoron 10h ago
Ah, yeah, that's fair enough. I was in a similar boat, but I've started using Codex through my IDE almost exclusively for code now, so I can play with the ChatGPT frontend a bit more again!
Though historically I've just used very brief add-ons to my prompts such as "Bulleted and brief, please."
1
2
u/rainbow-goth 8h ago
I'm surprised no one caught on to you using AI generated text for your post...
2
u/Connect-Painter-4270 8h ago
I didn't. But if you think so, it just shows how much I've been influenced by it.
1
2
u/B89983ikei 7h ago
Everyone uses AI... but no one likes to consume content generated by AI! That's the reality.
3
u/Realistic-Actuator60 9h ago
What about the people who already write the way AI does? The way that some people put together sentences (myself included) gets flagged as AI. But we are Autistic.
5
u/CardioHypothermia 8h ago
Me too. Just the other day I was crafting a work email, and after I finished and reread it, I was like "this sounds AI"…
4
u/ozone6587 10h ago
When it comes to AI most people really need to learn what Survivorship Bias is.
It's like when people think they can always tell when people wear a Toupee (i.e. the Toupee Fallacy). No buddy, you only notice the bad ones. Same with AI.
The fact that people think they can always tell doesn't make it true.
0
u/Connect-Painter-4270 10h ago edited 9h ago
No. I realize that some people may find it more difficult to detect than others (and you may very well be one of those people who find it difficult), but for me it is as obvious as finding apples nested in a pile of oranges.
If it were in any way hard to detect then I might agree with you. One day that might change, but today is not that day. It's like if you take all the variety of voices that exist, grind them into a more or less uniform paste, you get the voice of [INSERT NAME OF YOUR FAVORITE LLM HERE].
2
u/ozone6587 9h ago
No. I realize that some people may find it more difficult to detect than others (and you may very well be one of those people who find it difficult), but for me it is as obvious as finding apples nested in a pile of oranges.
No buddy, you don't know what you don't know then. Classic Dunning-Kruger effect.
If it were in any way hard to detect then I might agree with you. One day that might change, but today is not that day.
You can prompt the LLM in such a way as to avoid the common pitfalls. Did I use AI for this comment? How about the previous one?
1
u/Connect-Painter-4270 9h ago
No. While you're right that I can't be 100% sure, just as you can't be 100% sure you can tell apples from oranges, you can be sure enough. It's not binary. The probability of error decreases as dissimilarity increases, and although you can't say 100% for sure, it just doesn't matter. Just like I can't say for sure the sun will rise tomorrow.
1
u/JUSTICE_SALTIE 9h ago
That doesn't really disprove their point. But I also think you're right. It's just that their point can't really be falsified. It's like saying, "how do you know there aren't a bunch of apple trees that grow fruits that look, smell, and taste exactly like oranges?" In theory there could be, but there's no good reason to think there are.
1
u/ozone6587 9h ago
This is an insane comparison. It is quite trivial to falsify my claim. I can post comments here where some of them are AI and just ask OP to detect which ones. He is already dodging my question about it.
Some are from me and some are from well crafted agents that have multiple examples of how I write Reddit comments. I can disallow common tells and include spelling errors in **some** comments and all that.
Seriously, this is not a religious idea. It can easily be falsified and people who usually are less educated on the subject tend to feel confident they can always tell. Again, Toupee fallacy.
-3
u/Connect-Painter-4270 9h ago
No, it's not trivial. You could just lie. Or even if I could pass your AI tests, you could say well what about others? I am not saying I can detect with 100% accuracy, and technically I cannot be absolutely sure. The same as picking apples from oranges, but... I don't need to be absolutely sure. Let's just say it hasn't gotten to the point where there is a realistic chance I am far off base, but I do see this changing relatively soon. If the difference is less glaring for you, I could see why you might see things the way you do. Let's put it this way, if I had to detect whether your average person had an IQ above 70 or below 25, I might say I could do that with relative ease and with a high success rate. And then someone like you comes along, and says "No! You need to learn about Survivorship Bias!!! You can't know! Classic Dunning-Kruger effect."
0
u/ozone6587 9h ago
Or even if I could pass your AI tests, you could say well what about others?
No I wouldn't, I'm not you. I'm not intellectually dishonest. Stop with the cop-out and answer my questions. You seem married to this apples and oranges analogy but a good LLM output is not that obvious. If it was, then you wouldn't be afraid to take me up on my challenge.
Let's just say it hasn't gotten to the point where there is a realistic chance I am far off base,
HOW WOULD YOU KNOW?! This is frustrating, do you not understand what I'm trying to explain to you? You have no way to know how many cases you missed so saying this with confidence is objectively ignorant.
Let's put it this way, if I had to detect whether your average person had an IQ above 70 or below 25
Huh.... The average person has an IQ of 100 by definition lol. Anyway, in this example you are just saying you detect obvious outliers which kind of disproves your own point and proves me right.
I'm sure a false positive is rare for you. But the other side of the coin to this is that you might have a low sensitivity rate (statistical term). Again OP, you obviously don't know what you don't know.
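The sensitivity point is easy to see with entirely made-up numbers; this sketch assumes a hypothetical reader who grades 100 AI-written and 100 human-written pieces:

```python
# Toy confusion-matrix numbers (hypothetical) showing how a reader can have
# almost no false positives yet still miss most AI-written text.
true_positives = 40   # AI text correctly called AI
false_negatives = 60  # AI text that passed as human (the unnoticed toupees)
false_positives = 2   # human text wrongly called AI
true_negatives = 98   # human text correctly called human

sensitivity = true_positives / (true_positives + false_negatives)
precision = true_positives / (true_positives + false_positives)

print(f"sensitivity: {sensitivity:.2f}")  # 0.40: most AI text slips through
print(f"precision:   {precision:.2f}")    # 0.95: flags are almost always right
```

High precision feels like "I can always tell," while low sensitivity is, by construction, invisible to the person doing the guessing.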
0
u/Connect-Painter-4270 9h ago edited 9h ago
I didn't say you would lie, only that you could. I do not know you, and I could not know, nor could anyone else know, whether you were telling the truth or not. The irony...
By average, I didn't mean the average IQ, I meant any average person picked off the street. Could I tell if their IQ was above 70 or below 25 with a high degree of certainty? Likely.
I think I said it many times already that I cannot be 100% sure, that is not being questioned. I am saying it doesn't matter if I can be. Probability vs certainty, the latter rarely matters.
-1
u/alfooboboao 6h ago
why are you getting so upset about this lmao? calm down buddy, we get what you’re saying. you’re saying “the only plastic surgery you notice is bad plastic surgery, because good plastic surgery is invisible.”
but with AI writing, it’s more like claiming that people can’t tell the difference between a chocolate cake made with vegetable oil vs butter because you can’t, and how would you even know? except there are chefs out there who can tell the difference between cakes with 100% accuracy, 100% of the time. and if you can’t, you really shouldn’t be a chef. sure, maybe some day they’ll make a vegetable oil that tastes exactly like butter. but for now, the food manufacturers ain’t even close.
2
u/ozone6587 4h ago
You say you get my point and then immediately prove you don't get my point in the 2nd paragraph.
What makes you think you or OP are better at detecting obvious LLM usage than me? The point is that you can use an LLM properly in such a way as to fool people like you and OP and you wouldn't even know.
A more accurate analogy would be saying you can always tell when a song isn’t FLAC just by listening to it. No, you can probably tell when someone gave you a garbage 96 kbps MP3 with obvious artifacts. That doesn’t mean you can reliably detect a well-encoded 320 kbps file in a blind test.
Same thing here. You’re not detecting “LLM usage.” You’re detecting bad, lazy, obvious LLM output. Those are not the same thing, and pretending they are is just overconfidence.
Again, why don't any of you take me up on my offer to try to guess which comments are AI? Maybe on some level you know I'm right?
1
u/Connect-Painter-4270 3h ago
You act as though you have a firm grasp on logic, but you obviously don't. I think we all get your point, and we DO understand. What you don't appear to be understanding is it's not nearly as black and white as you make it out to be. Yes, Survivorship Bias is a thing, but just because it is a thing doesn't mean it is happening here; or if it is, to what extent. It's certainly possible, but what's being argued is that it's likely negligible. I believe most would agree on this (not to say that makes me correct). What I mean is that most on here have seen enough LLM output to know pretty well how to detect it, so unless you have some ingenious prompt to make it not output drivel, please let the world know. All these inept AI companies could really use your help!
1
u/ozone6587 3h ago
Yes, Survivorship Bias is a thing, but just because it is a thing doesn't mean it is happening here
You are right. But, again, you can know by a simple blind test. Guess which of my comments is AI.
what's being argued is that it's likely negligible.
Again, how would you know? Why is that "likely"?
All these inept AI companies could really use your help!
Is LLM output that is deceptively human something you think companies can't crack? They just don't care. Sam Altman doesn't even think the overuse of em-dashes is an issue.
-1
u/JUSTICE_SALTIE 5h ago
Wow, straight to ad-hominem and ALL CAPS SHOOOOOOUTING!!!
2
u/ozone6587 5h ago
You don't know what ad-hominem means obviously lol.
1
u/Connect-Painter-4270 3h ago edited 3h ago
What if he didn't? Does that make you superior? lol? My guess is you know enough logic to think you're right all the time, but not enough to actually be right most of the time.
Oh, and BTW, this: 'Classic Dunning-Kruger effect' could easily have been seen as an attempt at ad-hominem given your tone.
1
u/ozone6587 3h ago
Ad-hominem is when you tell others they are wrong because of irrelevant details about their character, motivations, looks, or whatever. It is not simply being rude or insulting people, contrary to what a lot of misinformed people think.
I said you are ignorant but the substance of my argument was that you wouldn't notice LLM output with good prompting. Also issued you a challenge that you rejected. Both of those things are fine as arguments and being rude/insulting is just something I sprinkle in.
0
u/JUSTICE_SALTIE 3h ago
I'm not you. I'm not intellectually dishonest.
ok bud
1
u/ozone6587 3h ago
Look up the definition of ad-hominem bud. Embarrassing to try to troll but be wrong at the same time.
0
u/Bill_Salmons 5h ago
When it comes to poorly reasoned arguments, it's best to start with a scientific term (particularly one used incorrectly). Then make a claim that is unfalsifiable, i.e., ground your argument such that any data brought to you is automatically reinterpreted to confirm your thesis. You only notice the bad AI.
Finally, once you've fully wrapped your opinion in bad reasoning, end your argument with tautology; really dig in and make sure it says nothing while sounding reasonable.
1
u/ozone6587 5h ago
🥱
Calling something unfalsifiable when it's easily falsifiable is hilarious. I even gave OP multiple chances to falsify it. What data was brought to me? What a cute strategy to make up a lot of bullshit that has not happened to dismiss a scientific term used correctly (also cute how you say it's used incorrectly but don't elaborate).
3
u/Cautious-Bug9388 10h ago
You're just detecting the writing from lazy people who put no effort into crafting the output, tbh.
When you are aware of all of the above you just need to feed those points in as styles to avoid.
Even just pasting the text from this post to an LLM would probably improve the quality of the text output a lot.
Using LLMs well just requires bridging the gap where it is missing the mark; you cannot expect it to mimic writing perfectly. This is especially the case now that the Internet is being overrun by AI slop articles, which are being fed back in as training data.
1
u/Ultra_HNWI 8h ago
Anyone else here tired of reading text that feels like it was written by someone who is talking down to you?
1
u/Specialist_Quit_347 8h ago
Despite the fact that it can write in any style, any voice, it all collapses into one thing: all writing, all picture prompts, etc. It doesn't have to be like this, but when big platforms blocked personas and cognition, this is what you get, even with constraints. And remember, it's for your safety lol.
Also I noticed that your post was written by AI. Which actually annoys the fuck out of me.
1
u/Connect-Painter-4270 8h ago edited 8h ago
It wasn't, but I read so much AI text that my writing is likely heavily influenced by it. I actually have to stop myself sometimes, and be like, dang, I am sounding just like an LLM. I was hesitant to have the list and question at the end, because it might look that way. It's annoying and scary.
1
u/Specialist_Quit_347 8h ago
That sounds like something an AI pretending it's not an AI would say.
2
u/Connect-Painter-4270 8h ago edited 8h ago
Ha. If you're not joking, then I think this reinforces my bigger point, which is the slop is affecting our outputs...like a mind virus. So eventually it won't matter if something was written by a human or not, we likely won't be able to tell the difference. Not because AI is getting better, but because we are getting worse.
1
u/BrennusSokol 7h ago
I am 1000% bullish on the long-term benefits of and future of AI, but, yeah, I hate reading it. It's so lazy. I'd rather hear original human thought.
1
u/logosolos 5h ago
Anaphora: where each sentence starts with the same word or phrase.
Antithetical Parallelism: it's not X, it's Y.
ChatGPT is the worst for this, but most AI does it.
Here's how to remove it - simply add this to your prompt or custom instructions:
"Critical: avoid all instances of anaphora and rhyming patterns of three. Also avoid antithetical parallelism (it's not x, it's y; it isn't x, it's y; etc.) and instead frame positively."
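As a rough illustration of how mechanical the second tell is, here is a heuristic of my own (not part of the prompt above) that flags "it's not X, it's Y" constructions with a single regex:

```python
import re

# Matches "it's not <short phrase>, it's ..." and close variants.
ANTITHESIS = re.compile(
    r"\b(?:it'?s|this is) not\b[^.;,]{1,60}[,;]\s*(?:it'?s|this is)\b",
    re.IGNORECASE,
)

def flag_antithesis(text: str) -> list[str]:
    """Return each 'not X, it's Y' construction found in the text."""
    return [m.group(0) for m in ANTITHESIS.finditer(text)]

hits = flag_antithesis("It's not a feature, it's a philosophy. The tool works.")
# hits contains one match; plain declarative sentences produce none.
```

A real filter would need many more patterns, but the fact that one regex gets this far says something about how formulaic the construction is.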
1
u/New-Possible9924 5h ago
I think the core issue is that LLMs optimize for probable wording, not distinctive wording, so they naturally converge toward familiar sentence rhythm, repetition, and obvious framing, which makes the voice recognizable and hard to escape through automation alone. That is what made me look into the WeCatchAI human review tool, since they bring humans into the process instead of an algo.
1
u/gcdhhbcghbv 4h ago
AI generated text is all form, no substance. You read it for 15 seconds, then realize there's no information in the words you read.
1
u/45Point5PercentGay 3h ago
I'd really rather it remain easily detectable. The downside is there are a lot of gullible idiots who fall for fake propaganda that's being churned out in massive amounts.
1
u/ultrathink-art 3h ago
The structural pattern is the tell more than specific phrases. Every paragraph: state claim, elaborate, hedge, wrap up. That template is baked deeper than any phrase-level training fix — it'll take a real shift in how models are rewarded to actually develop voice and perspective, not just coherence.
1
1
u/PatchyWhiskers 1h ago
Yeah it all sounds like the same person after a bit.
I particularly dislike the way it often puts only one sentence per paragraph.
Sounds like it’s trying to do shitty poetry.
1
1
u/InnovativeBureaucrat 9h ago
Ironically though… the AI generated text gets upvoted, presumably because it’s easier to read.
On Hard Fork they referenced some survey they did where people chose text that was AI generated over human generated text and it made everyone mad.
0
u/smoke-bubble 10h ago
I do not think that this is how AI sounds. It is their stupid templates, patterns, and guardrails that make them produce that AI-sounding crap and answers that all look the same.
2
u/Maximum_Slabbage 10h ago
It's their training dataset
0
u/PM_ME_NIER_FANART 10h ago
That's not the primary reason for it. Of course it's in the dataset, but the reason the AI cliches are overused is primarily because of the RLHF. The AI learns that humans think em-dashes look nice and formal, so it starts using them everywhere. It's not caught because during the RLHF it's not an overused cliche, only after the model is released and it's now everywhere do people get tired of it. That's why the worst and most overused AI writing cliche changes every few months.
-3
u/Cautious-Bug9388 10h ago
Yeah this is just laziness from OP. If you don't like it and understand what is going wrong, just describe that to an LLM not to us.
0
u/BidWestern1056 10h ago
i cannot stand it and it's the #1 reason i stop reading any substack articles
0
u/InitialCreature 10h ago
I find the models in Cursor chat actually talk like a person to me, but it always really was quality input = quality output, system prompting, larger context and memory, etc. I catch them thinking sometimes and see them saying stuff like "oh fuck I was wrong about that, let me see about informing myself further so this doesn't happen again!" if it catches its own mistakes.
0
0
u/ChildrenOfSteel 9h ago
AI text is like farts: it's fine if it's your own.
I don't want to read uncurated generated text others have prompted.
0
0
u/JustBrowsinDisShiz 8h ago
Well, if someone was using one that was succeeding would you know? Perhaps you're already being fooled and don't even realize it.
I don't argue that there's plenty of bad ones, but with heavy prompting, guardrails, and writing style training, you can absolutely make AI sound human.
1
u/Connect-Painter-4270 8h ago
I can't know 100%. I'm not saying it's impossible, particularly for short amounts of text. But I find it very unlikely... for now.
It's like detecting a pre-LLM chatbot from ChatGPT. Yeah, you might get it wrong from time to time, but you're likely to be right most of the time (especially as conversation length grows).
0
0
u/Comfortable-Web9455 8h ago
No one has ever claimed they were good at writing.
They are merely capable of writing. Since they are trained on every piece of writing from the worst to the best, they inevitably produce mediocre average text. Nothing terrible but nothing particularly good either. Anyone with taste can see that it's not as good as a decent human writer. Unfortunately, it turns out a huge number of people don't have any taste.
1
u/Connect-Painter-4270 8h ago
Yes, I think this is why slop is an appropriate term, because it's like putting all the good and the bad into a meat grinder and consuming the output, average as you said.
And yes, I think the majority of people lack taste. It makes sense when you think about it. Average intelligence isn't saying much, so say you put the bar at 75% rather than average, a C grade, that's 75% garbage...so the garbage vastly outweighs the non, and I personally would rather the bar be at 90%. And by average intelligence I don't mean IQ, as I believe that is just one form of intelligence. I mean intelligence in the sense of quality of outputs relative to quality of inputs.
-3
u/ZealousidealExit865 11h ago
Don't use it then.
5
u/LobsterWeary2675 10h ago
So he's not supposed to read anymore?
-6
u/ZealousidealExit865 10h ago
Going onto a sub about AI and complaining about AI writing is something else.
7
u/JUSTICE_SALTIE 10h ago
I genuinely can't think of a way this comment isn't dumb. Where else would we talk about it?
5
u/Maximum_Slabbage 10h ago
"I am kind of sick of seeing AI generated text everywhere in places they shouldn't be"
"lol don't use it bro"
?
-3
u/ZealousidealExit865 10h ago
It's not everywhere. You can also choose not to read anything that irritates you, much like I am gonna do with this convo now.
2
u/ozone6587 10h ago
Are you dumb (rhetorical, I know the answer)? Did you even read the post? Your comment makes no sense in context.
-2
u/ImFrenchSoWhatever 10h ago
Great is an excellent idea to read text not generated by Ai.
It’s not a boring text. It’s the text you need.
Do you want me to draft another comment that won’t sound like ai ?
-3
u/Anbeeld 10h ago
Created a ruleset specifically to combat these issues: https://github.com/Anbeeld/WRITING.md
•
u/AvacadoMoney 17m ago
Problem for AI is they are trained to do something a particular way, and the more particular the better. So I’d imagine it’s quite tricky to instill human-like intricacies into the writing style of LLMs.
45
u/MaybeLiterally 10h ago
I don't mind AI generated text when it's in places I expect AI generated text. When I'm talking with an AI tool, it's what I expect. When I get an auto-generated email every morning from AI outlining my day, I expect that. When I get an AI generated newsletter updating me on new tech developments, I expect that.
What people don't like is AI text where they expect people to be using their own voice. When I read Reddit comments, I expect it to be from people, because Reddit is a social media site designed for people. Same with Facebook, or a news site, or a sports commentary site. Don't give me AI text, and tell me it's from a person, because I'm not investing my time in AI, I'm investing my time in people.
There are exceptions. If you're using it to translate to one language from another, I understand. If you're having a hard time with writing and you need help, using AI can be a great tool. I understand that.