It's the curse of writing well. ChatGPT is designed to write well, and so everyone who does that is accused of being AI.
I just saw someone today that multiple people accused of using ChatGPT, but their post was one solid block of text and had multiple grammar errors. But they used something similar to the way ChatGPT speaks, so they got accused of it and the accusers got massive upvotes.
They said nigerian but there may be a common way English is taught in the entire area. Maybe the article author will chip in.
> ChatGPT is designed to write well
If you define well as overly verbose, avoiding anything that could be considered controversial, and generally sycophantic but bland soulless corporate speak, yes.
> They said nigerian but there may be a common way English is taught in the entire area.
Nigeria and Kenya are two very different regions with different spheres of business. I don't know, but I wouldn't expect the English to overlap that much.
There are a lot of very distinctive versions of English floating around after the British Empire, Indian newspapers are particularly delightful that way - but there is as the author says, an inherited common educational system dating back to the colonial period, which has probably created a fairly common "educated dialect" abroad, just as it has between all the local accents and dialects back in the motherland.
That's not a very good argument, because then you could say the same for American, Canada, South Africa, Australia and so on. If recency is an issue, then here's a list of colonies that got their freedom around the same time:
Cyprus, Somalia, Sierra Leone, Kuwait, Tanzania, Jamaica, Trinidad and Tobago, Uganda, Kenya, Malawi, Zambia, Malta, Gambia, Guyana, Botswana, Lesotho, Barbados, Yemen, Mauritius, Eswatini (Swaziland).
If what you're saying is right then you'd have to admit Jamaican and Barbados English are just the same as Kenyan or Nigerian... but they're not. They're radically different because they're radically different regions. Uganda and Kenya being similar is what I would expect, but not necessarily Nigeria...
>They're radically different because they're radically different regions.
They're radically different predominantly at the street level and everyday usage, but the kind of professional English of journalists, academics and writers that the author of the article was surrounded by is very recognizable.
You can tell an American from an Australian on the beach but in a journal or article in a paper of record that's much more difficult. Higher ed English with its roots in a classical British education you can find all over the globe.
ChatGPT does not “write well” unless your standard is some set of statistical distributions for vocabulary, sentence length, phrase structure, …
Writing well is about communicating ideas effectively to other humans. To be fair, throughout linguistic history it was easier to appeal to an audience’s innate sense of authority by “sounding smart”. Actually being smart in using the written word to hone the sharpness of a penetrating idea is not particularly evident in LLM’s to date.
If you're using it to write in programming language, you often actually get something that runs (provided your specifications are good - or your instructions for writing the specifications are specific enough!) .
If you're asking for natural language output ... yeah... you need to watch it like a hawk by hand - sure. It'd be nice if there was some way to test-suite natural language writing.
The last time I asked it to write something in a programming language, it put together a class that seemed reasonable at first blush, but after review found it did not do what it was supposed to do.
The tests were even worse. They exercised the code, tossed the result, then essentially asserted that true was equal to true.
When I told it what was wrong and how to fix it, it instead introduced some superfluous public properties and a few new defects without correcting the original mistake.
The only code I would trust today's agents with is so simple I don't want or need an agent to write it.
I think it depends on what models you are using and what you're asking them to do, and whether that's actually inside their technical abilities. There are not always good manuals for this.
My last experience: I asked claude to code-read for me, and it dug out some really obscure bugs in old Siemens Structured Text source code .
A friend's last experience: they had an agent write an entire Christmas-themed adventure game from scratch (that ran perfectly).
Like most other tools, it can take some experience to become good at using them. What you’re describing suggests a lack of that, assuming you used a good coding model or reasonably recent frontier model.
Add "Always use dash instead of em dash" to the developer/system prompt, and that's never an "issue" anymore. Seems people forget LLMs are really just programmable (sometimes inaccurate) computers. Whatever you can come up with a signal, someone can come up with an instruction to remove.
> That doesn't work, they beat it so hard into ChatGPT
I don't think you're able to set either the developer or system prompt on ChatGPT, you're gonna have to use the OpenAPI (or something else) to be able to set that. Once you have access to setting text in those, you can better steer how the responses are.
ChatGPT has personalization settings that you can use to set part of the system prompt. Other chatbots usually have this too.
How much they follow it depends. Sometimes they know you wrote it and sometimes they don't. Claude in particular likes to complain to me its system prompt is poorly written, which it is.
> ChatGPT has personalization settings that you can use to set part of the system prompt
That's not true, which field do you believe this to be? Because all of the fields I currently see in ChatGPT do have an effect on your conversations, but they're not just raw injections into system/developer prompts, it's something else.
Try using the API with proper system/developer prompts, then copy-paste that exact same thing into ChatGPT's "personalization settings" and try to have the same conversation, and you'll get direct evidence that it isn't actually the system prompts, but they're injected somewhere into the conversation.
Except for your poor editor who then has to manually replace your hyphens with proper em dashes. Still, if you're already disrespecting your editor enough to feed them AI slop...
Huge assumption on their side then, isn't the context "humans writing for other humans"? Not sure how "publication editors" entered the conversation nor from where.
I was referring to a human editor, which I thought was obvious enough from context. I assumed the reply was in jest. My original comment was light-hearted, so I don't think it needs to be rigorously analysed, but plenty of humans write for other humans but still have an editor involved in the process.
Obviously not, computers are the true programmable computers. But I'd still think it's accurate to say they're like programmable computers that are sometimes inaccurate, for most intents and purposes it's a fine mental model unless you really wanna get into the weeds.
I would use an actual em dash if there were a keyboard key for it. On my macbook, I have an an action script set up on the touchbar for emdash and a few other unicodey glyphs, but the (virtual) buttons are like 2 inches wide each so I can't fit more than 5 or 6 across it. Sucks.
LLMs don't even write as well as people do. If you talk to them long enough, you'll notice they produce the same errors careless people do. Sometimes they wrongly elide the article 'a'. They occasionally mess up 'a/an' vowel agreement. The most grating thing of all is that the fully-elided 'because' (as in 'because traffic') lives on in LLM output, even though you rarely see it anymore because people rightly got the sense it was unfair for a writer to offload semantic reconstruction to the reader.
I have a confession to make: I didn't think lulcat speak was funny, even at the time.
It's pretty annoying and once you catch them doing it, you can't stop.
Depends on your definition of "well". I hate that writing style. It's the same writing style that people who want to sell you something use and it seems to be really good at tiring the reader out - or at least me.
It gives a vibe like a car salesman and I really dislike it and personally I consider it a very bad writing style for this very reason.
I do very much prefer LLMs that don't appear to be trained on such data or try to word questions a lot more to have more sane writing styles.
That being said it also reminds me of journalistic articles that feel like the person just tried to reach some quota using up a lot of grand words to say nothing. In my country of residence the biggest medium (a public one) has certain sections that are written exactly like that. Luckily these are labeled. It's the section that is a bit more general, not just news and a bit more "artsy" and I know that their content is largely meaningless and untrue. Usually it's enough to click on the source link or find the source yourself to see it says something completely different. Or it's a topic that one knows about. So there even are multiple layers to being "like LLMs".
The fact that people are taught to write that way outside of marketing or something surprises me.
That being said, this is just my general genuine dislike of this writing style. How an LLM writes is up to a lot of things, also how you engage with it. To some degree they copy your own style, because of how they work. But for generic things there is always that "marketing talk" which I always assumed is simply because the internet/social media is littered with ads.
I’m highly skeptical. At one point the author tries to argue this local pedagogy is downstream of “The Queen’s English” & British imperial tradition, but modern LLM-speak is a couple orders of magnitude closer in the vector space to LinkedIn clout-chasing than anything from that world.
Yes they are, or rather, we were when I was in primary school. My essays (we called them composition) were filled with these these red check marks for every esoteric word, proverb, metaphor or simile you used. The more you had the higher you'd score. So I did my homework with a dictionary open. I remember writing some document at work in the US and everyone commenting on how Queen's English it was. This was before ChatGPT. I know know it was all silly, and I've spent a bunch of time learning to write simply. But then I've listen to too many tech podcasts, and now I find silicon valley tech-speak creeping in, and I hate it. The one that I hear everywhere now that I swear not to ever use is let's double-click on that point. Just why?
You write a record to disk before applying it to your in-memory state. If you crash, you replay the log and recover. Done. Except your disk is lying to you.
This is why people who've lost data in production are paranoid about durability. And rightfully so.
Why this matters: Hardware bit flips happen. Disk firmware corrupts data. Memory busses misbehave. And here's the kicker: None of these trigger an error flag.
Together, they mean: "I know this is slower. I also know I actually care about durability."
This creates an ordering guarantee without context switches. Both writes complete before we return control to the application. No race conditions. No reordering.
... I only got about halfway through. This is just phrasing, forget about the clickbaity noun-phrase subheads or random boldface.
None of these are representative (I hope!) of the kind of "sophisticated" writing meant to reinforce class distinctions or whatever. It's just blech LinkedIn-speak.
I agree. I think the point here was the self-appointed AI detectives, who will declare any writing style unfamiliar to them a product of ChatGPT. You might remember the Paul Graham "delve-gate" controversy on twitter last year. It was exactly this.
Yeah. But I will die on the hill that ChatGPT (today, at least) is a bad writer, and makes prompted writing worse in a way that isn't anything like the way schematic style or vocabulary rules might for an over-eager student.
For whatever combination of prompt and context, ChatGPT 5.2 did some writing for me the other day that didn't have any of the surface style I find so abrasive. But it could still only express its purported insights in the same "A & ~B" structure and other GPT-isms beneath the surface. Truly effective writers are adept with a much broader set of rhetorical and structural tools.
And good students are getting in trouble (meaning "have to explain themselves") to lousy teachers just because they write well, articulate ideas and can summarize information from documents where other regular people would make mistakes.
> they used something similar to the way ChatGPT speaks, so they got accused of it and the accusers got massive upvotes
Outrage mills mill outrage. If it wasn't this, it would be something else. The fact that the charge resonated is notable. But the fact that it exists is not.
ChatGPT writes a particular dialect of good writing. Always insisting on cliffhangers towards the summary, or "strong enumerations", like "the candidate turned out to be a bot. Using ChatGPT. Every. Single. Time." And so on.
I saw this described as LLMs writing "punched up" paragraphs, and every paragraph must be maximally impacting. Where a human would acknowledge some paragraphs are simply filler, a way to reach some point, to "default" LLMs every paragraph must have maximum effect, like a mic drop.
I just saw someone today that multiple people accused of using ChatGPT, but their post was one solid block of text and had multiple grammar errors. But they used something similar to the way ChatGPT speaks, so they got accused of it and the accusers got massive upvotes.