Yes, I'm getting increasingly confused as to why some people are broadening the use of "vibe" coding to just mean any AI coding, no matter how thorough/thoughtful.
It's because the term itself got overapplied by people critical of LLMs -- they dismissed all LLM-assisted coding as "vibe coding" because they were prejudiced against LLMs.
Then lots of people were introduced to the term "vibe coding" in these conversations, and so naturally took it as a synonym for using LLMs for coding assistance even when reading the code and writing tests and such.
> "vibe coding" only sounds cool if you don't know how to code but horrific if you do.
Disagree. Vibe coding is even more powerful if you know what you're doing. Because if you know what you're doing, and you keep up with the trends, you also know when to use it, and when not to. When to look at the code or when to just "vibe" test it and move on.
If you know how to program, vibe coding is useless. It can only ever produce worse results than you could've made yourself, or the same results with more effort (because reviewing code is harder than writing it).
Depends on what you're doing. I've found it extremely useful for creating the boilerplatey scaffolding I'm going to be copying from somewhere else anyway. When I actually get into the important logic and tests I'll definitely write those by hand because the AI doesn't understand what I'm trying to do anyway (since it's usually too novel).
I stick by the OG definition, in that when vibe coding I don't look at the code. I don't care about the code. When I said "vibe test it" I meant test the result of the vibe coding session.
Here's a recent example where I used this pattern: I was working on a (micro) service that implements a chat based assistant. I designed it a bit differently than the traditional "chat bot" that's prevalent right now. I used a "chat room" approach, where everyone (user, search, LLM, etc) writes in a queue, and different processes trigger on different message types. After I finished, I had tested it with both unit tests and scripted integration tests, with some "happy path" scenarios.
But I also wanted to see it work "live" in a browser. So, instead of waiting for the frontend team to implement it, I started a new session and used a prompt along the lines of "Based on this repo, create a one-page frontend that uses all the relevant endpoints and interfaces". The "agent" read through all the relevant files and produced, zero-shot, an interface where everything was wired correctly, and I could test it and watch the logs in real time on my machine. I never looked at the code, because the artifact wasn't important to me; the important thing was that I had it, 5 minutes later.
Fun fact: it did let me find a timing bug. I had implemented message merging, so the LLM gets several messages at once when a user types\nlike\nthis\nand basically adds new messages while the others are processing. But I had a weird timing bug where a message would be marked as "processing", the user would type another message, and the compacting algo would run, all "at the same time", and some messages would be "lost" (unprocessed by the correct entity). I didn't catch that in the integration tests; sometimes just playing around with a thing reveals these weird interactions. For me, being able to play around with the service ~5 minutes later was worth it, and I couldn't care less about the frontend artifact. A dedicated team will handle that, eventually.
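(For anyone curious, here's a minimal sketch of that "chat room" shape in Swift; all the names are illustrative assumptions, not taken from the actual service: every participant writes into one queue, and handlers trigger on the message types they care about.)

```swift
enum Sender { case user, search, llm }

struct Message {
    let sender: Sender
    let text: String
}

final class ChatRoom {
    private(set) var queue: [Message] = []
    private var handlers: [(Message) -> Void] = []

    // Each process registers a handler and decides which message types it reacts to.
    func onMessage(_ handler: @escaping (Message) -> Void) {
        handlers.append(handler)
    }

    // Everyone (user, search, LLM, ...) writes into the same queue.
    func post(_ message: Message) {
        queue.append(message)
        handlers.forEach { $0(message) }
    }
}

let room = ChatRoom()
// An "LLM" process that only triggers on user messages.
room.onMessage { msg in
    if msg.sender == .user {
        room.post(Message(sender: .llm, text: "reply to: \(msg.text)"))
    }
}
room.post(Message(sender: .user, text: "hello"))
```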
This is one of the things I've seen it be very useful for: putting together one-off tools or visualizations. I'm not going to maintain these, although I might check them into version control for historical reference.
I recently ran across a package in my team's codebase that has a bunch of interrelated DB tables, and we didn't already have a nice doc explaining how everything fits together - so I asked the AI to make me a detailed README.md for the package. I'm currently reviewing that, removing a bunch of nonsense I didn't ask for, and I'm going to run it by my team. It's actually pretty good to start with because the code and DB models are well documented, just piecemeal all over the place, and half of what the AI is doing is just collating all that info and putting it in one doc.
Right but there are tons of examples of things that started out as insults or negative only to be claimed as the proper or positive name. Impressionism in painting, for a start. The Quakers. Queer. Punk. Even "hacker", which started out meaning only breaking into computer systems -- and now we have "Hacker News." So vibe coding fits in perfectly.
> Even "hacker", which started out meaning only breaking into computer systems
No. The etymology of "hacker" in the technical scene starts at MIT's Tech Model Railroad Club in the late 1950s/early 1960s, where "hack" described clever, intricate solutions, pranks, or experiments with technology.
A hacker was one who made those clever solutions, pranks, and technology experiments. "Hacker News" is trying to take the word back from criminal activity.
TIL, thanks! Growing up I was only aware of the criminal version -- I didn't realize it grew out of an earlier meaning. I just saw the shift in the tech scene in the 1990s and more broadly in mainstream culture in the 2000s with "life hacks" and hackathons. What's old is new again...
Like people using “bricked” to signal recoverable situations. “Oh the latest update bricked my phone and I had to factory-reset it, but it’s ok now”. Bricked used to mean it turned into something as useful as a brick, permanently.
I'm not sure how common this is in other countries, but Americans would rather add another definition to the dictionary for the misuse before they'd ever tolerate being corrected or (god forbid) learning the real meaning of a word. I got dogpiled for saying this about "factoid" the other day here, but sometimes when people misuse words like "bricked" or "electrocuted", the ambiguity does actually make a difference, meaning you have to follow up with "actually bricked permanently?" or "did the shock kill him?", meaning that semantic information has been lost.
Yes but before 2005 we didn't have Reddit, so we didn't have people who learned about prescriptivism from there and think it means all discussion about taste and style is immoral.
The two examples the grandparent post mentioned are not really evolution, but rather making everything sound bombastic and sensationalist. The end game for that trend is the cigarette billboard in Idiocracy, where a half-naked muscular man glares at you angrily, going "If you don't smoke our brand, f* you!"
There is a world of difference between natural semantic drift and blatant disregard for accuracy by someone seeking to create drama for attention.
Not to mention all the attempts we see nowadays at deliberate redefinition of words, or the motte-and-bailey games played with jargon vs. lay understandings of a concept.
Typically it is defined by the collation. For the default collation, where all the weights are as in the file, it's none/accent/accent+case. But if you go to e.g. Japanese, you can have a fourth level of “kana-sensitive” (which distinguishes between e.g. katakana and hiragana).
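As a rough illustration of those levels (this uses Foundation's comparison options rather than the collation file described above, and kana-sensitivity has no direct equivalent here):

```swift
import Foundation

// Primary level ("none"): accents and case are both ignored.
let primary   = "café".compare("CAFE", options: [.caseInsensitive, .diacriticInsensitive]) // .orderedSame
// Secondary level ("accent"): accents distinguish, case is still ignored.
let secondary = "café".compare("CAFE", options: [.caseInsensitive])                        // not .orderedSame (é vs e)
// Tertiary level ("accent+case"): case distinguishes too.
let tertiary  = "café".compare("Café")                                                     // not .orderedSame (c vs C)
```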
Usually the implication of this (very common) analogy is that people in the past were somehow behaving wrongly, despite the fact that anybody is right to fight savagely against dramatic disruption to the life they've built, regardless of what the best solution is theoretically. Though even beyond that, the comparison is thin: with AI disruption, both the share of affected jobs relative to the entire economy and the speed of the change are much greater.
I think they were behaving wrongly yes because the one constant in life is change whatever you do and whatever species you are. Adapt or die surely? The universe isn't a museum.
> anybody is right to fight savagely against dramatic disruption to the life they've built
Yeah, I'd built a whole lifestyle around armed robbery, and the cops had the gall to arrest me. It was dramatically disruptive!
Seriously, you do not have a "right" to keep doing whatever you've been doing, even if it wasn't destructive. Nobody owes you that. People aren't your serfs.
The problem is that in most cases businesses can afford you, but they choose to be "unable to". It's called budgeting, and the ceiling only represents existential limits for small or dying businesses. The rest of the time, it is defined only to maximize profit, which means using their power to shift the negative part of economic changes onto individuals as much as mathematically possible, rather than the business suffering proportionately.
Engineers (both HW and SW) are often fantastically bad at understanding how business works, including where their salary comes from, how much value they are producing, and how small a percentage of that value gets returned to them as salary.
This problem is acute with older hardware and manufacturing engineers who drank all the corporate propaganda they've been fed for decades. I once worked with a senior manufacturing engineer who didn't clock his overtime because he didn't want the huge, multinational corporation we worked for to go bankrupt.
> How do the wealthy get so wealthy? Mostly by some form of cheating. One way that's relevant to one current case is depicted in Philip Roth's 2004 novel 'The Plot Against America':
> "Every subcontractor when he comes into the office on Friday to collect money for the lumber, the glass, the brick, Abe says, 'Look, we're out of money, this is the best I can do,' and he pays them a half, a third -- if he can get away with it, a quarter -- and these people need the money to survive, but this is the method that Abe learned from his father. He's doing so much building that he gets away with it..."
I have to agree strongly with my sibling commenter. Every other language gets it horribly wrong.
In app dev (Swift's primary use case), strings are most often semantically sequences of graphemes. And, if you at all care about computer science, array subscripting must be O(1).
Swift does the right thing for both requirements. Beautiful.
OK, yes, maybe they should add a native `nthCharacter(n:)`, but that's nitpicking. It's a one-liner to add yourself.
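For reference, the one-liner looks roughly like this (the name `nthCharacter(n:)` is just the one suggested above, not a standard library API):

```swift
extension String {
    /// O(n) by design, which is why it's a method rather than a subscript.
    func nthCharacter(n: Int) -> Character? {
        guard n >= 0, n < count else { return nil }
        return self[index(startIndex, offsetBy: n)]
    }
}

// "héllo 🇫🇷".nthCharacter(n: 6)  // "🇫🇷"
```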
I don't think Rust gets this horribly wrong. &str is some bytes which we've agreed are UTF-8 encoded text. So, it's not a sequence of graphemes, though it does promise that it could be interpreted that way, and it is a sequence of bytes but not just any bytes.
In Rust "AbcdeF"[1] isn't a thing, it won't compile, but "AbcdeF"[1..=1] says we want the UTF-8 substring starting from byte 1 through to byte 1 and that compiles, and it'll work because that string does have a valid UTF-8 substring there, it's "b" -- However it'll panic if we try to "€300"[1..=1] because that's no longer a valid UTF-8 substring, that's nonsense.
For app dev this is too low-level, but it's nice to have a string abstraction that's at home on a small embedded device, where it doesn't matter whether I can interpret flags, or an emoji with skin-tone modifiers, or whatever else as a single grapheme in Unicode, but where we would still like to do a bit better than "only ASCII works on this device" in 2025.
> In Rust "AbcdeF"[1] isn't a thing, it won't compile, but "AbcdeF"[1..=1] says we want the UTF-8 substring starting from byte 1 through to byte 1 and that compiles, and it'll work because that string does have a valid UTF-8 substring there, it's "b" -- However it'll panic if we try to "€300"[1..=1]
I disagree. IMO, an API that uses byte offsets to take substrings of Unicode code points (or even larger units?) is already a bad idea, but then having it panic when the byte offsets don't happen to fall on code point/(extended) grapheme cluster boundaries?
How are you supposed to use that when, as you say, "we would like to do a bit better than 'Only ASCII works in this device' in 2025"?
It's often the case that we know where a substring we want starts and ends, so this operation makes sense - because we know there's a valid substring this won't panic. For example if we know there's a literal colon at bytes 17 and 39 in our string foo, foo[18..39] is the UTF-8 text from bytes 18 to 38 inclusive, representing the string between those colons.
One source of confusion here is not realising that UTF-8 is a self-synchronising encoding. There are a lot of tricks that are correct and fast with UTF-8 but would be a disaster in other multi-byte encodings, or if (which is never the case in Rust) this weren't actually a UTF-8 string.
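The self-synchronising property is easy to see at the byte level (sketched in Swift here to match the rest of the thread, even though the comment above is about Rust): continuation bytes always look like 10xxxxxx, so from any byte offset you can back up to a code point boundary without decoding anything else.

```swift
// Continuation bytes in UTF-8 always match the bit pattern 10xxxxxx.
func isContinuationByte(_ byte: UInt8) -> Bool {
    byte & 0b1100_0000 == 0b1000_0000
}

let bytes = Array("€300".utf8)   // [0xE2, 0x82, 0xAC, 0x33, 0x30, 0x30]
var i = 1                        // start in the middle of '€'
while i > 0 && isContinuationByte(bytes[i]) { i -= 1 }
// i == 0: we've resynchronised to the start of the code point
```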
You can do better than "only ASCII works in this device", and making the default string type Unicode is the wrong way to do it. For some applications you might not need to interpret text at all, or you might only need to interpret the ASCII subset even if the text isn't purely ASCII; other times you will want to do other things. Unicode is not a very good character set (there are others, but what's appropriate depends heavily on the specific application; sometimes none are appropriate). And even if you are using Unicode, you still don't need a Unicode string type, and you don't need it to check for valid UTF-8 on every string operation by default, because that results in inefficiency.
In 1995 what you describe isn't crazy. Who knows if this "Unicode" will go anywhere.
In 2005 it's rather old-fashioned. There's lots of 8859-1 and cp1252 out there but people aren't making so much of it, and Unicode aka 10646 is clearly the future.
In 2015 it's a done deal.
Here we are in 2025. Stop treating non-Unicode text as anything other than an aberration.
You don't need checks "for every string operation". You need a properly designed string type.
I think using "extended grapheme clusters" (EGCs) (rather than code points or bytes) is a good idea. But why not let you do "x[:2]" (or "x[0..<2]") for a String to get the first two EGCs? (Maybe better yet, make that return "String?".)
That's what I meant by "must be O(1)". I.e. constant time. String's Index type puts the non-constant cost of identifying a grapheme's location into the index creation functions (`index(_:offsetBy:)`, etc). Once you have an Index, then you can use it to subscript the string in constant time.
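Concretely, with nothing but the standard API:

```swift
let s = "café 🇫🇷"
let i = s.index(s.startIndex, offsetBy: 5)  // O(n): walks grapheme boundaries to the 6th one
let flag = s[i]                             // O(1) once you already hold the Index: "🇫🇷"
```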
Like I said, you can easily extend String to look up graphemes by integer index, but you should define it as a function, not a subscript, to honor the convention of using subscripts only for constant-time access.
It's also just not a normal use case. In ten years of exclusive Swift usage, I've never had to get a string's nth grapheme, except for toy problems like Advent of Code.
Because that implies that String is a random access collection. You cannot constant-time index into a String, so the API doesn't allow you to use array indexing.
If you know it's safe to do, you can get a representation as a list of UInt8 and then index into that.
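e.g. something along these lines:

```swift
let line = "key: value"        // known (or validated) to be ASCII here
let bytes = Array(line.utf8)   // [UInt8], constant-time integer indexing
let colon = bytes[3]           // 0x3A, the ':'
```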
I disagree. I think it should be indexed by bytes. One reason is what the other comment explains about not being constant-time (which is a significant reason); the other is that this ties the type to Unicode (which has its own problems), and to specific versions of Unicode, which can cause problems when a different version of Unicode is in play. A separate library can deal with code points and/or EGCs if that matters for a specific application; these features should not be inherent to the string type.
In practice, that is tiring as hell: verbose, awkward, unintuitive, requiring index types tied to a specific string instance just to do numeric indexing, and a whole bunch of other unnecessary ceremony not required in other languages.
We don't care that it takes longer; we all know that. We still need to do a bunch of string operations anyway, and doing the equivalent thing in Swift is way worse than in pretty much any other language.
In Swift (and in other programming languages) the string type does use Unicode, but I think it probably would be better if it didn't. And even when there is a Unicode string type, I still think it should not be indexed by grapheme clusters; I explained some of the reasons for this above.
This trope is being worn to the point of absurdity. Yes, people don't like things. All throughout history. Sometimes reasonably, sometimes unreasonably.
It's not about "like" or "dislike". It's that people are unsettled by new technology that they can't immediately get their heads around. But today, it sounds kind of silly to be unsettled by the concept of a database.
That's like saying "the problem isn't the unmaintainable cost of healthcare, it's that we haven't eliminated all diseases and aging". I.e. the latter is a long way off, and might not ever be 100% feasible, so it's horrifying and inhumane to imply we should allow the suffering caused by the former in the meantime.
I think it's a stretch to call having to make a living in a career other than your preferred job "suffering". Even before AI, there were surely millions of people who grew up wanting to be an artist, or an astronaut, or an architect, or any number of things that they never had the skills or the work ethic or the resources to achieve. I'm sure before cars there were people who loved maintaining stables of horses and carriages, and lamented the decline of the stable master profession. It's no different now.
No, we shouldn't allow the suffering. Nor should we force people to work bullshit jobs. That's my point. Treating humans with dignity isn't even that hard, but people need to believe it's important or it won't happen.
So what? Mandate that AI can't be used to do jobs? That will increase the cost of everything (relative to the world where we are allowed to use AI), and that cost will be borne by everyone in society.
Compare with something like unemployment benefits. The cost of benefits can be covered by taxes, which (unlike the example above) can be progressively targeted and redistribute wealth to those most in need.
A social safety net is progressive, feasible (countries all around the world have them), and does not hinder technological or economic progress. What are the alternatives?
It's just a semantic disagreement. In my experience, "vibe coding" means "software made with genAI, casually iterated until it passes tests and appears to work, without exhaustive or experienced review of the output, and is therefore often bad." It doesn't have to mean that, but in practice that seems to be the dominant definition currently.
I watched YouTube how-tos and read 1000 Stack Overflow results around 2016 and made my own SaaS for the construction industry in PHP/jQuery. Was a car salesman before that.
I have 34 companies using it today.
I promise you, a vibe coded app would be an improvement.
So what's the problem really?