I found a project that looks at how machine learning shows up in scientific papers. It covers about 5,000 ML-related articles from the Nature family of journals and tries to map out trends across countries, institutions, and research areas.
Link: https://airesearchtrends.com
Some of the early takeaways are interesting, like how different countries approach ML-driven research and how often papers end up being multi-org collaborations. Also, classical ML still seems to be used a lot more than I expected.
The digital camera analogy is flawed. Digital sensors had a clear and measurable path to improvement: megapixels, ISO, dynamic range. LLMs have no such clear path to 'understanding' and 'reliability'. It's entirely possible we've hit a fundamental ceiling on their capabilities, not that we're just in an early stage.
That works great for a small greenfield project. Now try applying it to a million-line monorepo with three competing architectural patterns and a CI/CD pipeline that breaks if you look at it wrong. The real world of development is much messier.
If you have such a beast, you have n problems. Not being able to apply AI to it is just the (n+1)th.
It works great with the microservices architecture that was all the rage recently. Of course it doesn't solve its main issue, which is that microservices have to talk to each other, but it still lets you sprint through a lot of work.
It's just that if you engineered (or engineer) things well, you get immediate, huge benefits from AI coders. But if all you did last decade was throw more spaghetti into an already huge bowl of spaghetti, you're out of luck. Serves you right. The sad thing is that most humans will get pushed into doing this kind of "real development", so it's probably a good time to learn to love legacy, because you are legacy.
Exactly. That last 20% is engineering: handling edge cases, integrating with quirky APIs, optimizing for performance under load. An LLM excels when all conditions are perfect, but the real world is a mess of imperfections.
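For a sense of what that last 20% looks like, here's a hypothetical sketch (the endpoint and function names are made up): retrying a quirky API that sometimes rate-limits without a Retry-After header and sometimes returns 200 with an empty body, which is exactly the boring glue that happy-path generated code tends to skip.

    import time
    import requests  # any HTTP client would do

    def fetch_orders(url: str, retries: int = 3, timeout: float = 5.0):
        # Quirky API: sometimes 429 with no Retry-After, sometimes 200 with an empty body.
        for attempt in range(retries):
            try:
                resp = requests.get(url, timeout=timeout)
                if resp.status_code == 429:
                    time.sleep(2 ** attempt)  # back off and retry
                    continue
                resp.raise_for_status()
                return resp.json() if resp.content else []
            except (requests.Timeout, requests.ConnectionError):
                if attempt == retries - 1:
                    raise
                time.sleep(2 ** attempt)
        return []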
Classic. We sped up one stage of the assembly line and are now wondering why more finished cars aren't coming off the end. Development isn't just coding; it's also review, QA, deployment, maintenance, and gathering requirements, after all. An LLM can generate 10,000 lines of code in an hour, but those 10k lines will then get stuck in the review queue, swamp QA engineers, and likely bring so much technical debt that cleaning it up takes longer than writing it from scratch would have.
Fine, but then my critique moves over: the article should do a better job of conveying what the argument is and why it matters.
It opens with rms complaining about names in the emacs ecosystem not being descriptive enough. OK. But the author argues (in these comments) that their argument isn't against names that aren't descriptive; it's just that a name ought to be relevant, the reason being that it's more professional.
Now I am paraphrasing, so maybe I am not understanding the argument correctly, but I don't think that strengthens the case at all. If anything, it raises the question... why? (And I'm not sure rms would particularly buy this argument either, given that he hails from hacker culture and seems perfectly happy to break social conventions. rms does not strike me as someone who is highly 'professional' in a traditional sense. This is not an indictment.)
The article's complaint (as I read it) is more about incidental load: names that force you to context-switch just to figure out what category of thing you're dealing with.
It's more that they weren't random. There was a convention, a lineage, or a rule behind them. Modern projects often skip that step entirely and jump straight to branding, even when the thing is just plumbing.
Descriptive naming absolutely helps at first contact, especially when you’re scanning a dependency list or onboarding someone new. No argument there. But in practice, names stop carrying meaning pretty quickly anyway.