If there are no security problems affecting your project and you don't benefit from new features, there is no urgent need to update. And even then, it's a cost/benefit calculation.
Updating is a constraint, one you follow because it solves a problem or avoids a pitfall, not a goal.
I migrated most of my stuff to Python 3 years ago, but for some of my clients, even though 2.7 is no longer supported, I advise just downloading an offline copy of everything they need as a safety measure and keeping the system running as-is forever.
Raw HTML + jQuery + Django 1.x + a VPS still delivers fine. Just because I can do React + microservices does not mean it should be the default response.
There are servers from one or two decades ago running barely touched, and they do their job perfectly.
So yes, some client side libs are going to be there for years and years. If the UI works, it's alright.
Software is not an end, it's a tool. When you're passionate about it, it's easy to forget that, but we use software to help the real world, not the other way around.
Of course I'm not saying you should never update or use modern technology when appropriate: technical debt must be paid. But it's important to recognize when you need to do it and when you just want a shiny toy or to look good.
This applies to libraries installed using <script src="://cdnjs.com/path-to-lib.js">. Most CDNs have a 'latest' path, but using it can result in your site breaking when a dependency changes. Web developers need to stop taking the lazy option and test more.
JS libraries installed using npm to be bundled in are typically pinned only to the major version, so they usually update to the latest minor or patch whenever the app is deployed. That's still a problem, because it only works if an app is actively maintained and deployed regularly, but it's better than nothing.
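To make that concrete, here's a minimal sketch using the `semver` package (the same rules npm applies to package.json ranges); `some-lib` and the version numbers are just made up for illustration:

```javascript
const semver = require('semver');

// "some-lib": "^1.4.0" in package.json: any 1.x.y >= 1.4.0, never 2.x,
// so minors and patches get picked up on the next install/deploy.
semver.satisfies('1.5.2', '^1.4.0'); // true
semver.satisfies('2.0.0', '^1.4.0'); // false - a major bump is never pulled in automatically

// "some-lib": "1.4.0" (no range operator) pins the exact version.
semver.satisfies('1.5.2', '1.4.0');  // false
```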
It is a myth that all websites (or applications) have an active development team continuously deploying updates. I would even expect that the vast majority of the code in production in the world is old code that is not looked after. This is particularly true for small non-tech companies that outsourced the development, and for internal systems in large organisations where the developers have been re-assigned to other projects.
This is why technologies that assume active development, like containers, cloud APIs, etc are in my opinion a disaster in the making.
> This is why technologies that assume active development, like containers, cloud APIs, etc are in my opinion a disaster in the making.
I can agree with this. I've already been burned a few times _by my own tools_ trying to push an update 8 months after building it the first time. A docker build as part of my CI/CD will download some new package and bork everything.
I know that this can be prevented with pinning, container registries, etc. but I often don't build all of that overhead into these stupid little web services.
I know this may cause immeasurable pain for someone in the future but there's only so much time in a day.
If this were just some *nix box that "just worked" and never got patched, it would probably work better long term.
> A docker build as part of my CI/CD will download some new package and bork everything.
Really? I'm not much of an ops guy but that's precisely the problem docker _fixed_ for me. We must have very different usage patterns.
I used to be a bare-Linux guy, but I finally bit the bullet and fully Dockerized the personal stuff I was working on at the time after the Nth time of running into library/service version problems every time I stood up a new AWS (plain-Linux EC2) instance.
In fairness I never really "pinned" to a specific AMI, which was probably the root of my problem, but, you know, security updates and stuff seem helpful. But every time I installed on a fresh box I'd run into incompatibilities with really core, basic stuff (it seemed to me) like PostgreSQL versions and so on.
I'm still not much of an ops guy, but version/install/configuration stability is by far the most appealing thing about Docker for me personally.
I do a fair bit of audio and visual processing stuff, typically developed on Mac and deployed to Linux. With some services, every 6 months I would need to hunt down some wholly new set of instructions for installing some third-degree dependency I was using (indirectly), plus an entirely different set of build problems to work out when deploying to a (Linux) server.
But since switching to Docker I have had little to no (surprise) compatibility problems. sox? alsa-utils? libvips? ffmpeg? I can't remember the last time I had one of those "oh no, I guess I need to spend 4 hours spelunking forum posts to figure out which precise combination of dependencies will make this work like it did a week ago" moments. I am surprised by how _few_ problems I have making sure that native-code-heavy A/V code that runs in Docker on my OS X laptop works without modification when I move to a headless Linux server.
Fully agree. I think it means small non-tech companies are better off sticking to SaaS offerings instead of rolling out their own (e.g. wordpress.com vs hosting your own). It's limiting in some cases but at least it's actively developed.
Because to update the components of a container you need to rebuild it from the source image, assuming wherever you took the source image from will still be getting updates 10 years later, and assuming someone will still have both the knowledge and the right version of the tools to deploy your container.
And if you have a container orchestration ecosystem, then we're in cloud API territory (constant change and deprecation).
Who is going to pay for that? Most customers want a "fire and forget" experience, no ongoing maintenance budgets.
My company has no clients who want a 'fire and forget' experience any more. We've gone after clients who want something better than that. We've educated old clients to become better clients. We have a steady stream of reliable revenue, good things to work on, and we're proud of what we build.
There's always going to be companies that cater for clients who really do want the 'fire and forget' experience, but that doesn't mean people who want to write great software have to work for them. Leave that work for people who want it and find a role in a company that does better work with clients who understand why it's better and want to pay for better software. Don't just accept "no one pays for testing" as a fact. It's not universal.
There's certainly a place for applications that require continuous updates (I actively work on one), but I don't think the "fire and forget" style is inherently bad either. I built one (well, multiple, but I was personally fond of that one) and inherited another when someone left the company. I haven't had to touch the first since a minor update shortly after release, and the second I never had to touch at all. Both are still functioning and doing what they were intended to do.
I don't think a client/group requiring set-and-forget (well, not quite forget, but hands-off) is necessarily a bad client, nor is one that wants continuous updates a good one. In some cases it could be the reverse, because the set-and-forget client who disappears was most likely able to clearly define requirements as they needed them and had the foresight to include anything that might come along ahead of time (or, probably more likely, their use case is rigid and doesn't change).
It needs to work more like libraries on Linux. They have an API, and the API _may_ expand, but it will never, ever, change or break without a major version rev.
Then for that stable API a URL could target the latest version that fulfills it.
This is basic semver, and what a lot of JavaScript libraries follow, but you definitely still end up with outdated libraries, especially from library authors who enjoy what seem like arbitrary and capricious breaking changes and major version bumps.
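As a rough sketch of "target the latest version that fulfills it", this is what range resolution boils down to (again using the `semver` package, with a made-up list of published versions):

```javascript
const semver = require('semver');

// Hypothetical published versions of a library with a stable 1.x API.
const published = ['1.2.0', '1.4.1', '1.4.2', '2.0.0'];

// "Latest version that fulfills the 1.x API" - it may have expanded, but not broken.
semver.maxSatisfying(published, '^1.2.0'); // '1.4.2'

// Opting in to the breaking 2.x API is an explicit choice.
semver.maxSatisfying(published, '^2.0.0'); // '2.0.0'
```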
I never understood the need (desire) to update your dependencies as much and as often as possible. As long as a project builds and runs without any issues, why put effort into upgrading libraries and exposing yourself to possible issues?
The main reason is security. When a security vulnerability is patched in a new version of one of the libraries you use, you should upgrade, and if you haven't upgraded it in two years it is gonna be a pain in the ass.
On one of our projects we've got what I think is the best take on this I've seen. Once a week, CI updates all our dependencies and makes a fresh PR. That PR then goes through CI and can be merged by an engineer if it looks good.
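The job is roughly the following sketch (not our actual script; it assumes git and the GitHub CLI (`gh`) are available on the CI runner, and the branch name and commit message are placeholders):

```javascript
// Weekly dependency-update job, run on a CI schedule.
const { execSync } = require('child_process');
const run = (cmd) => execSync(cmd, { stdio: 'inherit' });

const branch = `deps/weekly-${new Date().toISOString().slice(0, 10)}`;

run(`git checkout -b ${branch}`);
run('npm update');    // bump everything within the declared semver ranges
run('npm test');      // throws (and aborts the job) if the update breaks anything
run('git commit -am "chore: weekly dependency update"');
run(`git push -u origin ${branch}`);
run('gh pr create --fill');  // open the PR for an engineer to review and merge
```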
> why put effort into upgrading libraries and exposing yourself to possible issues?
Isn't this what the combination of a "package manager" with a semantic versioning scheme should actually automate?
Personally, I think this is where NPM failed and still fails. They should enforce a binary format that ships with header files (and method signatures), and enforce semantic versioning.
If any library doesn't play by the semantic rules, don't let it publish. That's the authority and responsibility that NPM failed to include.
If everybody plays by the semantic rules ... then libraries can be upgraded automatically without breaking anything. And a huge plus: They can be installed as _shared_ libraries, which is such an old concept that it hurts my fingers having to type its advantages.
Cargo (Rust) considered it, using the vastly greater information it has than NPM about whether signatures have changed, but rejected it because you can still make breaking changes without changing a function signature, so why claim to detect breaking changes if you can only catch a subset of them?
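A toy example of why signature checks alone can't catch everything: same name, same arity, but the behavior change below would break callers (hypothetical code):

```javascript
// v1.0.0
function parsePort(value) {
  return parseInt(value, 10); // invalid input -> NaN
}

// v1.0.1 - looks like a harmless patch, identical signature...
function parsePort_v101(value) {
  const port = parseInt(value, 10);
  if (Number.isNaN(port)) throw new Error('invalid port'); // ...but now it throws instead of returning NaN
  return port;
}
```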
You can get pretty far with a statically typed, purely functional language. For example, Elm's package manager enforces semantic versioning: https://elm-lang.org/
I always update libraries when I'm revisiting a project. This helps me engage with the project, because the thing that motivates me most is when something doesn't work. And after updating libraries, something usually doesn't.
I get where you're coming from, however, I think having an update cycle, even if it's something like once in 18 months, is important, especially as a responsibility one has to their project's successors.
There's nothing worse than coming to a project riddled with deprecation warnings and when one tries to update, breaking changes in libraries result in a ton of issues.
Also, it can (and most likely will at some point) affect your release dates when unexpected issues arise due to deprecated APIs.
That's a smell. It means someone only did a partial upgrade of the stack. This isn't limited to libraries, it also includes everything else, like having the latest version of $language installed instead of an earlier - compatible - version.
When you work on legacy, your first step would be trying to get it to run on the intended version of the stack before trying to move towards a recent target.
> when one tries to update, breaking changes in libraries result in a ton of issues
That's par for the course with any technology. Try replacing the spark plugs of a 1980s car model with their 2020 counterparts. A car restorer would look for matching parts of that era.
Updating legacy code is much like Theseus' Ship. You change parts of the ship while sailing. So, whatever you do: you make sure the ship doesn't sink en route.
If you're working on legacy, the stakeholders - clients, users,... - aren't interested in specific versions of the underlying libraries. They simply want to get to their destination, and they need a seaworthy ship i.e. usable software.
Now, a stakeholder may go "Oh! I want shiny feature X!", but then you might need to go through the pain of upgrading the entire ship. As a programmer, it's your job to put that choice succinctly in front of a PO or PM: "Either it's spending a lot of time & money upgrading, or not having shiny feature X." It's NOT your job or responsibility to decide whether or not the upgrade actually needs to be done.
The same is true about security or safety. If a client (or your boss) isn't willing to invest in an upgrade path, then there's not much you can do about that. Except for walking away - jumping ship - if you can't bear to see the storm ahead. There's an iron triangle of trade-offs: cheap / good / fast. Pick two; you can never have all three.
As a programmer, your goal isn't to write code with the language du jour. It's solving a problem your boss and / or your clients put in front of you.
Thanks for putting it that succinctly. It got me thinking about whether that cost of change is always a bad thing. A lot of programs tend to fall into one of two ends of the spectrum:
1) Everyone is afraid to change anything.
2) The program is constantly improving, to the point that I never use that word other than sarcastically.
Now, threat (1) is older than computers. But threat (2), though not exactly new, is greatly facilitated by the web platform. Now, when there is real competition, do you find yourself preferring constantly changing apps or crufty ones?
Because it is a major security issue to not keep up with library upgrades and general maintenance. It's in the OWASP top 10 for a reason. The Equifax breach is a perfect example of why ignoring library upgrades is a high risk behavior for any organization.
Security and availability. If you’re at a large company, not updating your dependencies could mean an outage, losing customer or other data you don’t want exposed to the world.
A simpler way to state it is it could be the death of your company.
"JavaScript Libraries Are Almost Never Updated Once Installed"
...on a website.
I imagine the ease with which one can type `npm i` or `yarn install` means that server-side JavaScript libraries are updated frequently. Wasn't that the whole problem with that leftpad thing?
Out of curiosity I went to look for the most frequently downloaded (installed) library on npm, which apparently is not directly available, but here's a pretty arbitrary selection of things that came to mind:
* express - last release 8 months ago - 11.4m downloads this week
* React (which is pretty browser-oriented, right?) - last release 3 months ago - 6.5m downloads this week
* Underscore - last release 1 month ago - 6.7m downloads this week
* jquery (almost exclusively browser-oriented, certainly DOM oriented) - last release 9 mos ago - 3.1m downloads this week
It seems like someone must be keeping up-to-date.
Also "installed" is a misleading statement in the web context, JavsScript libraries aren't _installed_ they are _integrated_ with the site, which means an upgrade isn't free, it's a semi-new integration effort.
I try to run `npm-check -u` at least once a week. It takes some time when a dependency has a breaking change, or when a project that I don't trust to respect semantic versioning is updated and I have to check its git diff, but in practice my projects have up-to-date dependencies without much effort.