> That turned into about 10 hours of conversation with Claude to pull it all together.
Did the author write an actual parser, or does this mean they spent 10 hours coaxing Claude into writing this blog post?
There's not a lot of depth here, and this doesn't really feel like it says much.
The blog post mostly compares Postgres, MySQL, SQL Server... and then flips between comparisons to BigQuery occasionally, and Snowflake other times. Is that intentional (and is it accurate?), or did the LLM get confused?
Yeah, I'm disappointed by how shallow it is. Lexing, parsing, and ASTs apply to nearly every programming language, not SQL alone. There's no mention of how the parsers actually work at the code level, which would have made it an interesting read.
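For what it's worth, "how the parsers actually work at the code level" means things like the hand-rolled tokenizer loops and keyword tables these engines carry around. A minimal sketch in C (purely hypothetical, not taken from the post or from any real database):

```c
#include <ctype.h>
#include <stdio.h>
#include <string.h>
#include <strings.h>  /* strcasecmp (POSIX) */

/* Hypothetical token kinds; a real SQL lexer has many more. */
typedef enum { TOK_KEYWORD, TOK_IDENT, TOK_NUMBER, TOK_SYMBOL, TOK_EOF } TokKind;

typedef struct {
    TokKind kind;
    char text[64];
} Token;

/* Scan one token starting at *p and advance the cursor past it. */
static Token next_token(const char **p)
{
    Token t = { TOK_EOF, "" };
    size_t n = 0;

    while (isspace((unsigned char)**p))
        (*p)++;
    if (**p == '\0')
        return t;

    if (isalpha((unsigned char)**p) || **p == '_') {
        while (n < sizeof t.text - 1 &&
               (isalnum((unsigned char)**p) || **p == '_'))
            t.text[n++] = *(*p)++;
        t.text[n] = '\0';
        /* Real engines use generated keyword lookup tables, not strcasecmp chains. */
        t.kind = (!strcasecmp(t.text, "SELECT") || !strcasecmp(t.text, "FROM") ||
                  !strcasecmp(t.text, "WHERE")) ? TOK_KEYWORD : TOK_IDENT;
    } else if (isdigit((unsigned char)**p)) {
        while (n < sizeof t.text - 1 && isdigit((unsigned char)**p))
            t.text[n++] = *(*p)++;
        t.text[n] = '\0';
        t.kind = TOK_NUMBER;
    } else {
        t.text[0] = *(*p)++;
        t.text[1] = '\0';
        t.kind = TOK_SYMBOL;
    }
    return t;
}

int main(void)
{
    const char *sql = "SELECT id FROM users WHERE age >= 21;";
    for (Token t = next_token(&sql); t.kind != TOK_EOF; t = next_token(&sql))
        printf("%-3d %s\n", (int)t.kind, t.text);
    return 0;
}
```

The interesting dialect differences live in the grammar layered on top of a tokenizer like this (Postgres and MySQL both use bison/yacc grammars, for instance), which is exactly the level of detail the post skips.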
I want to believe people who feel they are 10x more productive with agentic tools, but I can't help noticing how much of that output is doing things that don't need to be done at all. Either that, or doing them superficially, as the article shows.
You can assume that already-published open weights models are available at $0, regardless of how much money was sunk into their original development. These models will look increasingly stale over time but most software development doesn't change quickly. If a model can generate capable and up-to-date Python, C++, Java, or Javascript code in 2025 then you can expect it to still be a useful model in 2035 (based on the observation that then-modern code in these languages from 2015 works fine today, even if styles have shifted).
Depending on other people to maintain backward compatibility so that you can keep coding like it’s 2025 is its own problematic dependency.
You could certainly do it but it would be limiting. Imagine that you had a model trained on examples from before 2013 and your boss wants you to take over maintenance for a React app.
You're all reasoning from the strange premise of a world where no open-weight coding models would ever be trained again. Even if VC spending vanished completely, coding models are such a valuable utility that, at the very least, companies and individuals would crowdsource them on a recurring basis and keep them up to date.
The value of this technology has been established, it's not leaving anytime soon.
I think FAANG and the like would probably crowdsource it, given that, according to the hypothesis presented, they would only have to do it every few years, and they are ostensibly realizing improved developer productivity from these models.
I don’t think the incentive to open source is there for $200 million LLM models the same way it is for frameworks like React.
And for closed source LLMs, I’ve yet to see any verifiable metrics that indicate that “productivity” increases are having any external impact—looking at new products released, new games on Steam, new startups founded etc…
Certainly not enough to justify bearing the full cost of training and infrastructure.
2013 was pre-LLM. If devs keep relying on LLMs and their training were to stop (which I find unlikely), the tooling around the LLMs would still continue to evolve, new language features would get less attention, and those features would only be used by people who don't like to use LLMs. Then it would be a race of popularity between new languages (and language features) and LLMs steering 'old' programming languages and APIs. It's not always the best technology that wins; often it's the most popular one. You know what happened during the browser wars.
Your media consumption may be particularly biased if you didn't hear of this! I recommend following outlets from "both sides" even if you find the "other side" offensive. I hate to shill for Ground News, but it's great for this.
You are spreading typical misinformation/propaganda. Temporarily freezing accounts until the law is played out is not the same as debanking someone globally and permanently.
As far as I'm concerned, it's on the same order of badness. "Temporarily freezing" until when? The whim of some government official? No practical difference from using debanking as a political weapon.
> Yet software developed in C, with all of the foibles of its string routines, has been sold and running for years with trillions of USD in total sales.
This doesn't seem very relevant. The same can be said of countless other bad APIs: see years of bad PHP and tons of memory safety bugs in C, both of which have surely led to significant sums of money lost.
> It's also very easy to get this wrong, I almost wrote `hostname[20]=0;` first time round.
Why would you do this separately every single time, then?
The problem with bad APIs is that even the best programmers will occasionally make a mistake, and you should use interfaces (or...languages!) that prevent it from happening in the first place.
The fact we've gotten as far as we have with C does not mean this is a defensible API.
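For anyone who hasn't been bitten by this, the pattern in question looks roughly like the following (a hypothetical sketch reconstructed from the quote, not the article's actual code). strncpy() silently drops the terminating NUL when the source doesn't fit, so every call site has to remember the extra line, and the index has to be exactly `sizeof(buf) - 1`; writing `hostname[20] = 0;` against a 20-byte buffer is the off-by-one mentioned above:

```c
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* Hypothetical example; the buffer name and size mirror the quoted
     * `hostname[20]=0;` slip, not the article's actual code. */
    char hostname[20];
    const char *src = "a-hostname-rather-longer-than-nineteen-characters.example.com";

    /* strncpy() copies at most sizeof(hostname) bytes and does NOT
     * NUL-terminate the destination when the source is too long... */
    strncpy(hostname, src, sizeof(hostname));

    /* ...so every call site must remember this line, with the index at
     * sizeof(hostname) - 1.  hostname[20] = 0 would write one byte past
     * the end of the array. */
    hostname[sizeof(hostname) - 1] = '\0';

    printf("%s\n", hostname);
    return 0;
}
```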
Sure, the post I was replying to made it sound like it's a surprise that anything written in C could ever have been a success.
Not many people starting a new project (commercial or otherwise) are likely to start with C, for very good reason. I'd have to have a very compelling reason to do so, as you say there are plenty of more suitable alternatives. Years ago many of the third party libraries available only had C style ABIs and calling these from other languages was clumsy and convoluted (and would often require implementing cstring style strings in another language).
> Why would you do this separately every single time, then?
It was just an illustration of what people used to do. "Set the trailing NUL byte after a strncpy() call" just became a thing lots of people did and lots of people looked for in code reviews - I've even seen automated checks. It was in a similar bucket to "stuff is allocated, let me make sure it is freed in every code path so there aren't any memory leaks", etc.
Many others would have written their own function like `curlx_strcopy()` in the original article, it's not a novel concept to write your own function to implement a better version of an API.
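A sketch of what such a wrapper might look like (hypothetical, in the spirit of the article's `curlx_strcopy()` but not its actual implementation): always NUL-terminate, make truncation observable, and the per-call-site fixup disappears.

```c
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical safe-copy helper; not the real curl implementation.
 * Always NUL-terminates the destination and reports whether the
 * source fit without truncation. */
static bool str_copy(char *dst, size_t dstsize, const char *src)
{
    size_t len = strlen(src);
    if (dstsize == 0)
        return false;
    if (len >= dstsize) {
        memcpy(dst, src, dstsize - 1);
        dst[dstsize - 1] = '\0';
        return false;            /* truncated */
    }
    memcpy(dst, src, len + 1);   /* copy includes the NUL */
    return true;
}

int main(void)
{
    char hostname[20];
    if (!str_copy(hostname, sizeof(hostname), "a-very-long-hostname.example.com"))
        fprintf(stderr, "warning: hostname truncated\n");
    printf("%s\n", hostname);
    return 0;
}
```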
If I remember correctly, the namespaces feature (now released as Ruby::Box) had some pretty severe performance penalties (possibly even for code that doesn't use it?).
I work in a large Sorbet codebase (though it isn't a Rails one) and it's a huge boon IMO. The number of tests we don't need to write because of Sorbet is really nice.
It does occasionally require structuring your code differently, but I find the type-system-encouraged approach often gives a more elegant and harder-to-misuse interface in the end.
Very curious to hear about the specific cases where types make tests unnecessary.
I spend my working life swapping between Ruby and TypeScript projects. The TypeScript project is utter garbage with poor test coverage and needs a day of human QA for every build, whereas the Ruby project is well tested enough that we know CI passing means it's good to be released.
Types don't make testing in general unnecessary, but they remove a class of error handling, since the runtime type checking handles it for you. You can really trust the types when using Sorbet.
(I also work in a 40m+ loc non-rails ruby codebase that is almost entirely typed with Sorbet.)
It's a bit surprising they did that, to be honest. I work at a similarly-sized, HN-popular tech company and our security team is very strict about less-trusted (third party!!) code running on another domain, or a subdomain at the very least, with strict CSP and similar.
But in the age of AI, it seems like chasing the popular thing takes precedence over good practices.