Services generally have a stateless request/response architecture, inherited from the success of HTTP and in contrast to the experiences of CORBA and COM. The latter were much better suited to local, assumed-reliable scenarios - Office Object Linking and Embedding, VB visual controls - not to calls across a network boundary.
Creating an object-oriented API for a shared library which encapsulates a module to the degree a service boundary would is not trivial, and it's very rarely done, never mind done well. Most OO libraries expose an entire universe of objects and methods to support deep integration scenarios. Maintaining that richness of API over time in the face of many consumers is not easy, and versioning it is a dying art in the eternal present of online services. The 90s aren't coming back any time soon.
If you own a library which is used internally, and you add a feature which needs a lot more memory or compute, how do you communicate the need to increase resource allocation to the teams who use the library? How do you even ensure that they upgrade? How do you gather metrics on how your library is used in practice? How do you discover and collect metrics around failure modes at the module boundary level? How do you gather logs emitted by your library, in all the places it's used? What if you add dependencies on other services, which need configuring (network addresses, credentials, whatever) - do you offload the configuration effort onto each of the library users, who need to do it separately, and end up with configuration drift over time?
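That last failure mode is easy to see in code. A minimal sketch, with entirely hypothetical names (`EnrichmentConfig`, `enrich`, the endpoints): a library grows a dependency on another service, so every consuming team must now hold and maintain its own copy of the connection settings.

```python
# Hypothetical library that grows a dependency on another service.
# Every consuming team must now supply (and keep current) its own
# copy of the connection settings -- the seed of configuration drift.

from dataclasses import dataclass

@dataclass
class EnrichmentConfig:
    endpoint: str      # network address of the downstream service
    timeout_s: float   # each team picks its own value...
    api_key: str       # ...and rotates credentials on its own schedule

def enrich(record: dict, cfg: EnrichmentConfig) -> dict:
    """Library call that would contact cfg.endpoint; stubbed here."""
    return {**record, "enriched_via": cfg.endpoint}

# Team A and Team B embed the same library but configure it separately;
# over time the two copies of this config diverge silently.
team_a = EnrichmentConfig("https://enrich.internal:8443", 2.0, "key-a")
team_b = EnrichmentConfig("https://enrich-v2.internal:8443", 30.0, "key-b")
```

Had the module been a service instead, that configuration would live in exactly one place, owned by the team that understands it.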
I don't think binary drops work well unless the logic they encapsulate is architecturally self-contained and predictable; no network access, no database access, no unexpected changes in CPU or memory requirements from version to version.
There's plenty of code like this, but it's not usually the level of module that we consider putting inside a service.
For example, an Excel spreadsheet parser might be a library. But the module which takes Excel files uploaded by the user and streams a subset of the contents into the database is probably better off as a service than a library, so that it can be isolated (security risks), can crash safely without taking everything down, can retry, can create nice logs about hard to parse files, can have resource metrics measured and growth estimated over time, and so on.
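The division of responsibility in that example can be sketched in a few lines. All names here are hypothetical, and the "parser" is a stand-in (a real one would use something like openpyxl); the point is which concerns live in the library layer and which belong to the service wrapped around it.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("excel-ingest")

# --- Library layer (hypothetical): pure parsing, no operational policy ---
def parse_rows(payload: bytes) -> list[dict]:
    """Parse an uploaded spreadsheet payload into rows.

    Stand-in for a real Excel parser; splits CSV-like bytes so the
    sketch stays self-contained.
    """
    if not payload:
        raise ValueError("empty file")
    header, *lines = payload.decode("utf-8").splitlines()
    cols = header.split(",")
    return [dict(zip(cols, line.split(","))) for line in lines]

# --- Service layer (hypothetical): isolation, retries, logs, metrics ---
metrics = {"parsed_rows": 0, "failed_uploads": 0}

def ingest(payload: bytes, store: list, retries: int = 3) -> bool:
    """Service endpoint body: parse, then stream rows into `store`.

    A crash here is contained by the service boundary; failures are
    logged with context so hard-to-parse files can be investigated,
    and counters feed the resource/growth metrics mentioned above.
    """
    for attempt in range(1, retries + 1):
        try:
            rows = parse_rows(payload)
            for row in rows:
                store.append(row)          # stand-in for a DB insert
            metrics["parsed_rows"] += len(rows)
            return True
        except ValueError as exc:
            log.warning("attempt %d failed: %s", attempt, exc)
    metrics["failed_uploads"] += 1
    return False
```

The library stays a dumb, reusable function; the retry policy, logging, and metrics - the things the questions above were about - live once, in the service.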
People keep regurgitating this "you ain't going to be Google" mantra, but I worked there, and in reality the generic microservice stack is in a totally different league of complexity and sophistication from what Google and co. have. This argument is basically reductio ad absurdum.
The argument is simply the empirical observation that the vast majority of microservices deployments don't operate anywhere near the scale that actually requires microservices and aren't going to operate at that scale in the foreseeable future. When scalability is the primary argument in favor of microservices, how is that absurd?