The old adage: "the first rule of distributed objects: don't distribute your objects". DCOM and EJB also came unstuck by failing to observe this rule.
It's impossible for young'uns to appreciate just how obsessed the software world was with 'objects', OO and the chimera of reusability in those days; I subscribed to 'Object' magazine, and still recall one article breathlessly predicting that in the future bespoke development would become bunk as folks would just buy e.g. an Aircraft object off the peg and plug it into their application. The fact that such blatant silliness actually met with knowing nods gives some insight into how technologies that promised to wire this brave new world together became hot properties almost regardless of their details.
It is telling that J2EE became at least mildly sane when people started ignoring entity beans and working exclusively with session beans, inching towards the realisation that the API was the thing that needed to be remote. IMO the current status quo with REST (in its usual bastardised guise) and JSON as the language-neutral representation, alongside aids like JSON Schema and OpenAPI, while by no means perfect, is workable enough and certainly light years ahead of these earlier fumbling efforts.
> "I subscribed to 'Object' magazine, and still recall one article breathlessly predicting that in the future bespoke development would become bunk as folks would just buy e.g. an Aircraft object off the peg and plug it into their application. "
One drum I keep beating is that COM in Windows is amazing for scripting and interoperability between programs. Before PowerShell and "everything is an object [but you're stuck in one .NET process]" there was COM. From Python, from VBScript, from ActivePerl, even in Powershell to break out of .NET, from C# and Java, you could do their language equivalent of:
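Something along these lines, in Python say (the olePrn.OleSNMP ProgID and its method names are from memory, so treat this as a sketch):

    import win32com.client  # pywin32

    # Late-bound COM dispatch: any registered COM class is one ProgID away.
    snmp = win32com.client.Dispatch("olePrn.OleSNMP")
    snmp.Open("router.local", "public")       # host, community (signature from memory)
    print(snmp.Get(".1.3.6.1.2.1.1.5.0"))     # sysName, via a plain OID string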
and voila, pluggable SNMP engine into any script in any Windows scripting or programming language. Use it in Excel in Visual Basic for Applications and update your spreadsheet with SNMP results. Then embed the Excel spreadsheet in a Word document. Open PowerShell on Windows and run this code[1] to query all registered COM classes, 1744 on my system, surely some are useful or fun?
Want to read JPG headers? Lean on Explorer to do it, without needing a JPG metadata library in every language. Want to integrate with Windows Text to Speech, or Excel, or Dyalog APL or send a fax from JScript or automate a browser or whatever? COM objects are there.
And it's a world being thrown away in favour of "simply download a Selenium wrapper for every language" and "simply do an OAUTH login to a web service to interface with a styled sluggish CRUD text system". And that's a shame, because web developers, macOS users, Linux users, smartphone users, don't know what they're missing. I'm sympathetic to it being apparently horrible to program the backend in C++, but this world you're scoffing at actually existed and has good features especially for the casual hacker who wants to use system-wide standard interfaces to large and powerful engines, with client and server written in differing languages.
On the other hand, programming those dynamic dispatch COM interfaces was really not so fun. There must have been 20-30 interfaces that all implemented IUnknown, but there's only one REAL IUnknown.
Take the video player MPV, it can be controlled by sending commands into a named pipe, which is good, so you can control it from a Python script - but now instead of having a system-wide standard like COM where you have PyWIN32 which works with any COM object to any COM program and you call methods from the MPV documentation with Python numbers and strings, now you have to have a Python-to-MPV-JSON-Schema serializer in your code, and if you want to use C# you need a C#-to-MPV-JSON-Schema-serializer in your code. All you're really doing is triggering some code inside MPV - calling some library functions - but instead of a convenient system-wide standard way to do it, there's a different inconvenient non-standard way of doing it from every language to every program. Strictly worse.
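For concreteness, the per-language plumbing against MPV's JSON IPC looks roughly like this (a sketch assuming mpv was started with --input-ipc-server=/tmp/mpvsock on a Unix-y system; on Windows it's a named pipe instead):

    import json, socket

    # Hand-rolled client for one program's private protocol.
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    sock.connect("/tmp/mpvsock")
    sock.sendall(json.dumps({"command": ["get_property", "volume"]}).encode() + b"\n")
    print(json.loads(sock.makefile().readline()))  # e.g. {"data": 100.0, "error": "success"}

Every client in every language re-implements that little dance, which is exactly the point.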
COM was awesome tech! I wish we had an equivalent today.
I did a ton of python-com projects for a big aerospace company in the late 90's and it was amazingly easy to get a lot of well-integrated functionality for little time and effort. Really sad that it's not more common now.
The J2EE 1.0 spec was so goddamned infuriating, owing to the fact it was introduced by the company who penned the 8 Fallacies of Distributed Computing, at least half of which it was obviously in violation of. I walked away from it. Or tried to. Every successful implementation violated the spec, because the spec was idiotic. So we were all doing “J2EE” and with each revision what they said we should do looked a bit more like what we were doing already. But by the time it was almost good it was captured by industry.
Then Rod Johnson lambasted it famously and then lived long enough to become J2EE.
I still have his book here somewhere. I keep meaning to sit down with it and the Spring spec and see how badly the abyss stared into him.
I'd post a 10 page specifically detailed diatribe, but you don't deserve it because you're just pathetically sealioning about a lost cause. So first read and respond in detail to my 10 page diatribes about CORBA instead, which is the actual topic of this discussion, then you can post your own 10 page detailed diatribe full of specific idiocies about what you love so much about J2EE. Deal? Then let's hear it.
And FYI, go read what Rod Johnson has to say if you're really interested, know what you're talking about, and not just sealioning (and about J2EE, really??! Of all the things to choose to sealion about, have some fucking taste and self respect, and post something you're not embarrassed to put your real name by!):
Reminds me of how little love the SOAP that came between had for the XSD that actually did all the heavy lifting. A bazillion ways of reinventing elaborate communicative acts and surrounding ritual, and a big handwavy "yeah, xsd, whatever" to the actual content.
REST was a huge change in perspective in how it put the representation in the spotlight. It seems ironic that this happened on a schemaless baseline, but perhaps that was a fire it needed to go through, "this is the important part and if we put too much automated tooling in this corner you won't give it the attention it deserves".
I often think about many of the things that were explored in that era as birds of paradise. The desktop as it was seen then was mostly solved, and the new challenges of the internet hadn't yet materialized - many engineers needed something to do. So they started doing very weird things.
Totally different thing: distribution of objects is stupid, but as a USER I do love what OLE and its descendants (and what's inside e.g. Open/LibreOffice) actually provide in terms of actually working plugin APIs.
The best example of the ideas that OLE enabled (even if it's not implemented with OLE anymore?) is how you can embed spreadsheets and drawings into a Word document; it becomes super useful for me when doing my annual report, since I can stick a proper spreadsheet with automatic calculations in the middle of running text.
(Would I be fond of having to implement all of it? Probably not, and there was also probably many stupid usages of it but for what it was for initially it was a win)
> one article breathlessly predicting that in the future bespoke development would become bunk as folks would just buy e.g. an Aircraft object off the peg and plug it into their application
Every time I consume a third-party’s REST API, I’m doing that, no? Albeit the article may have oversold the idea in the work ending there.
Well yes that's true, however that's in the spirit of a 3rd party providing a specific service to your program, whereas the object zealots intended objects in the sense of entities with identity being these off-the-peg consumables. Imagine if in a code review every time you declared a class to represent some feature you were challenged as to whether there wasn't some component you could shop for instead. The Aircraft example was egregious - does a company doing in flight catering want the same thing as an air traffic control simulation?! It's also about boundaries - a typical 3rd party REST api has a clean and obvious boundary, but how would one of these Aircraft objects sensibly interoperate with my program, as it can't know anything about the rest of the application. The hand-wavers never thought to explore these critical problems.
What about when I install a library or use a framework? When I use Spring Boot, it's calling my code. I install libraries which dispatch events to which I subscribe and that come with pre-defined objects I pass around. I use SDKs to talk with REST APIs. I even use objects that persist their data in my database, and have their own migration files.
I'm being somewhat disingenuous asking this. I realise most of the code in any application will be bespoke, but I don't think we should deny that the vision you describe has become reality in part.
Yes, it has become reality in many environments like .NET, Delphi and even Windows COM. The problem, however, is that they are only good in homogeneous environments. Services, on the other hand, could be easily consumed by all systems because they don't need to know your system to work with it, and you don't need to take care of the memory, threading and life cycle issues as you do with objects.
In other words, it boils down to the objects vs interfaces, OO vs functional programming arguments in the last decades.
In reality, we still do both. But it’s clear by now that we don’t want remote objects anymore but remote services.
DCOM was far more practical, mostly because the scope was smaller. I used it very successfully with a trading system back in the nineties.
In the same institution, the corba guys just built themselves giant messes trying to unify 4 different languages and the kind of network dependency hell that only corba could allow.
DCOM was never cool, but it also wasn't as ambitious, so good systems got built with it.
I vaguely remember those spinning balls in Windows 2000, XP etc. to know your DCOM components were working. Are they still a thing? We did the 3 tier DNA thing, using VB because in VB it was simple, in C++ it looked hellish.
VB was denounced as infantile nonsense fit only for the GUI by us C++ jocks, until we'd done COM in both C++ and then in VB - then we realised what a huge favour VB was doing in insulating you from so much of the complexity. Sure there were Don Box-level things you could probably only do in C++, but for us working stiffs doing business apps it was like day following night.
>Earlier I posted an article by Don Box comparing SOM and COM, and I mentioned that WebAssembly is going through a similar evolution, and might benefit from some of the lessons of COM and SOM: [...]
>This article comparing SOM and COM was written by Don Box. (first archived in January 1999, but doesn't say when published): [...]
>Don Box wrote an excellent in-depth book about COM: "Essential COM": "Nobody explains COM better than Don Box" -Charlie Kindel, COM Guy, Microsoft Corporation. [...]
>Here's a synopsis of COM I wrote in response to "Can someone link to a synopsis describing what "COM" is? It's hard to search for. (e.g. microsoft com visual studio)":
>COM is essentially a formal way of using C++ vtables [1] from C and other languages, so you can create and consume components in any language, and call back and forth between them. It's a way of expressing a rational subset of how C++ classes work and format in memory, in a way that can be implemented in other languages.
>Apple's OpenDoc based browser, CyberDog, was also quite amazing and flexible, because it was completely component based and integrated with OpenDoc. But that plane never got off the ground, because Steve Jobs rightfully focused on saying "No" and "put a bullet in OpenDoc's head".
Apartment threading means that your component has a thread it lives in, and other components don't call into it directly. It receives requests via messages, the same as if it were running in another process. I believe arguments are marshalled onto the apartment thread, as in the out-of-process case. Thus you don't have to deal with the possibility that two or more calling threads are executing the component's code.
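A tiny sketch of what entering an apartment looks like from script, assuming pywin32 (SAPI.SpVoice is just a convenient stock component to poke at):

    import threading
    import pythoncom
    import win32com.client

    def worker():
        # Every thread that touches COM must first enter an apartment.
        # Plain CoInitialize() creates/joins a single-threaded apartment (STA);
        # CoInitializeEx(pythoncom.COINIT_MULTITHREADED) would join the process MTA instead.
        pythoncom.CoInitialize()
        try:
            voice = win32com.client.Dispatch("SAPI.SpVoice")
            voice.Speak("hello from my own apartment")  # calls stay on this thread
        finally:
            pythoncom.CoUninitialize()

    threading.Thread(target=worker).start()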
I remember how CORBA was all going to be the future.
I wrote dozens of programs in C++ and Python using IDL.
Funny, though, that we now find ourselves doing the exact same thing but it is "an API", has no type-checking, no error handling, requires more work, is limited to particular languages and is slower.
Distributed objects were ahead of their time by maybe 15 years. Now we're stuck in async API-over-JSON hell forever.
We've finally come full circle with gRPC, which is basically a simplified CORBA. I went through the whole cycle, and while I don't exactly miss CORBA, I do wonder why it took us so long to get back to that baseline of functionality.
A huge influx of “programmers” that knew only JavaScript and thought that HTTP is the only network protocol, that’s why.
But seriously, the root cause of the regression is the web and its associated limitations, quirks, and bad habits.
For example, many web developers started out with eval() and maybe JSON which they thought was simple because it is text based and can be interactively experimented with in the browser console. They could “get started quickly” and they were “moving too fast” to think about codegen tooling, protocol efficiency, typing, or security. Doesn’t matter, they hooked up the form, push it to prod!
It’s a cycle that happens over and over in IT, a variant of the eternal September.
You can mock the "moving fast" trope, but the truth is that moving fast has actually been very important for most web-related software development since the beginning. The reason is that the web is a super-competitive environment where the speed of experimentation and adaptation wins most of the time - even if we, as engineers, might not like that.
The problem is that the "fast" part is in quotes because it's glacially slow as soon as you get past the initial MVP or demo.
I see this idiocy everywhere, to the point that many clouds are now advertising "Day 2 operations" like it's a new thing to be doing things for more than one day.
Everywhere you turn, it's: "Get started quick", "Quickstart", "Deploy to <cloud>!", etc...
What do you do after deployment is... crickets chirping, wolves howling, and a tumble weed rolling slowly past.
CORBA, Java Remoting, .NET WCF, WS-*, etc... are complex technologies that can't be trivially poked and prodded with curl or a repl. What they provide is tooling and long-term velocity and safety even with hundreds of developers on the team.
Heck, even as a solo developer I strongly favoured the "proper" RPC systems. I could define a class type once, and then Visual Studio or IDEA or whatever would spit out tens of thousands of lines of error-free boilerplate code that otherwise I would have to hand-roll.
You can't imagine how depressed it makes me when I see some Web API guide that starts off with a cheery "this is a simple..." and then there's five hundred pages of English text.
Look. Sure, if you're an Indian outsourcer developer, this is great. You can bang out monotonous repetitive code like a meat robot and collect a pay check your subsistence farmer parents could only dream of. You can do this for years, and never have to think, or be creative, or risk your job security.
But people that need to get things finished, past day two into day five hundred? We use the good stuff, with automation.
Half the world still builds roads with hand tools. Where I live, we build roads with heavy machinery.
That's the difference. Any idiot can pick up a hammer and say "road building is easy". They'll still be building that road with an army of workers a year later.
I upvoted your comment because it's largely accurate.
However, I would caution against blanket stereotypes like this:
> Look. Sure, if you're an Indian outsourcer developer, this is great. You can bang out monotonous repetitive code like a meat robot and collect a pay check your subsistence farmer parents could only dream of. You can do this for years, and never have to think, or be creative, or risk your job security.
A lot of the outsourced indian devs do indeed match that description, BUT the majority of them that I know of don't want to "bang out monotonous repetitive code like a meat robot and collect a pay check ".
They want to create novel and creative things like everyone else. That they're stuck in the modern equivalent of the assembly line is really not their fault, and most devs, outsourced or not, are in that space anyway.
> BUT the majority of them that I know of don't want to "bang out monotonous repetitive code like a meat robot and collect a pay check ".
Well, the majority of those you know are not the majority of the actual IT workforce in India. Speaking as someone who worked for a decade or so with Indian IT vendors, and who works with outsourced developers to this day: most of them have no interest beyond meeting the client requirement, which is just a code word for a massaged resume containing the exact keywords that clients put in the job requirement.
> They want to create novel and creative things like everyone else. That they're stuck in the modern equivalent of the assembly line is really not their fault, ..
Well, they could've taken a low-paying job that's creative. But IT jobs offer high income while sitting in an A/C office and following the instructions of a manager/client.
Nothing wrong with that, I am doing the same. It is not a fault, it is an explicit choice they have made.
You have some points but this scenario is also an industry symptom of what sloppy and/or inexperienced developers cause.
As an experienced developer in this environment you have to be humble to the fact that money is still what rules interactions, and that apparent non-progress accompanied by vague promises can be seen, from a _client_ perspective, as no progress at all from seemingly total beginners.
Moving "fast" is often needed to assure people and for grunt-work it's absolutely fine as long as it doesn't build in bad requirements into the system...
And that brings us to the IMPORTANT point, moving fast gives you a good prototype ground and a chance to check assumptions with the client/end-users, but you need experienced people to put in brakes and remove bad assumptions out of the codebase before it leads to second-order bloat that makes the bad assumptions impossible to weed out.
In "enterprise" dev the core task that you don't want to hand off to juniors is often database models, data invariants, synchronization semantics,etc that inexperienced people will just try to paper over with increasing amounts of code when wrong and thereby creating huge swaths of code that just cements the bad assumptions into place and creates this glacial progress that you mentioned.
gRPC arguably isn't anything like CORBA. It's just RPC.
The thing that CORBA (and COM/DCOM) gives you on top of RPC is object references, what's called location transparency in DCOM. That is, if you have an API like this:
Pie getPie()
then the remote server simply returns an object reference. What you have on the client-side is a "stub" (what COM/DCOM calls a proxy), which might look like this:
class Pie {
int getSize()
}
The whole idea is that Pie pretends to be a local object, but it is in fact just a thin wrapper; calling getSize() results in a remote network call to the server.
gRPC and other modern RPCs don't have object identity or objects. They just return structs, and you have to invent your own IDs. So in gRPC, the server may expose:
rpc GetPie(PieRequest) returns (PieResponse) {}
But PieResponse is just data. It has no actions.
Where things become insane is where you have big graphs of objects, some of which may be remote references (stubs), some local in your own app. Maybe you accidentally store a reference in a global/static variable, which means your application is effectively holding the remote object "open". Performance typically goes out the window because just calling a method, even if it's data like getSize(), results in a network roundtrip.
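A conceptual sketch of the difference, with all names made up (this is not any particular framework's API):

    from dataclasses import dataclass

    class PieStub:
        """CORBA/DCOM style: a thin proxy around a remote object reference."""
        def __init__(self, channel, object_id):
            self._channel, self._id = channel, object_id

        def get_size(self) -> int:
            # Reads like a local getter, but every call is a network round trip,
            # and merely keeping this stub around pins the server-side object alive.
            return self._channel.invoke(self._id, "getSize")

    @dataclass
    class PieResponse:
        """gRPC style: plain data with no behaviour; identity is whatever ID you add yourself."""
        pie_id: str
        size: int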
I still think the underlying ideas were sane: Defining type-safe, portable interfaces using an IDL, then generate client and server code from that IDL. RPC has been reinvented many times, and gRPC won't be the last iteration. I do suspect we could have something CORBA-ish working over the Internet if we can get the technical design right.
That's true, but my impression back when I was using CORBA and EJB and Java RMI is that nobody really cared about remote object identity and really just wanted a convenient RPC solution. Ie use EJB stateless beans instead of stateful beans.
So while CORBA could do a lot more, it wasn't necessarily being used that way. At least not anywhere I saw; maybe there were shops out there doing more exotic things.
I don't think of gRPC as object oriented, but I don't have a ton of experience with it. It seems like a service oriented system, with endpoints that accept parameters & return data. There's not, to my knowledge, much in the way of enduring objects.
You're not going to have dozens of Todo objects that you are calling methods on. You're going to have a TodoService that has some methods. This isn't very object oriented; the things sent in & out are just dumb data, it's just request/response.
I think cap'n'proto gets a little closer to having an actual notion of objects, in that the protocol has actual identifiers built in. But it's still more about services than objects.
Corba seemed to really be object oriented; you'd get back todo1 and call todo1.done().
> I think cap'n'proto gets a little closer to having an actual notion of objects, in that the protocol has actual identifiers built in. But it's still more about services than objects.
Cap'n Proto is all about the objects! I would say this is the whole point of the design, and my main motivation in creating it.
(Unlike CORBA, though, Cap'n Proto does not try to pretend that RPCs are equivalent to local calls. In particular it has promise pipelining to compensate for latency, rather than pretending there is no latency. https://capnproto.org/rpc.html)
Why no type checking? Generate an Open API client for target languages. Done.
Assuming you need an API. I'll never build an app that way again, it's not worth my hands. I'd rather use Hotwire, or LiveViews, or Django's Unicorn for the web part. Zod looks nice too.
The problem isn't so much with checking as it is with what kinds of things you can check. OpenAPI offers a very powerless type system. So bad, it's almost not worth using. Not only that, it's designed for JSON, which is an awfully bad format -- so, even if you could check that somehow, you'd still be very limited by what you can actually send.
Also, yeah, all kind of hot garbage you listed will work on Web, because the expectations to the quality of the output is so low... We came to expect Web to be the dumpster fire of programming. Most unfortunately, this kind of garbage spreads into other areas because Web is a universal interface to many things, and there are plenty of programmers who know how to do it.
> Why no type checking? Generate an Open API client for target languages.
I dunno. Why aren't more people using openapi?
What I see in practice is that almost everyone is using JSON as the transport mechanism and not generating that JSON serialisation code from open API specs.
The type-checking in CORBA was not opt-in, it was mandatory.
Type-checking in browser fetch() calls is optional. To opt in you have to go outside of the standard.
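To make that concrete, "going outside the standard" on the client side looks roughly like this (a sketch assuming Python with requests and pydantic; the Pie model and URL are made up):

    import requests
    from pydantic import BaseModel, ValidationError

    class Pie(BaseModel):
        flavour: str
        size: int

    resp = requests.get("https://api.example.com/pies/1", timeout=5)
    try:
        pie = Pie(**resp.json())   # the only type check this payload will ever get
    except ValidationError as err:
        raise RuntimeError(f"server broke the contract: {err}")

With CORBA, the IDL compiler and the ORB did that check for you whether you wanted it or not.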
We use OpenAPI/Swagger specs whenever possible for new projects and integrations (some usages or integrations are older).
Generating a OpenAPI/Swagger spec in .NET is as easy as telling a lib like NSwag to generate and publish documents from your API's (and you can filter out to only show public API's in the docs). Probably as easy in Java,go,etc.
Consuming services can usually be done almost as easily by downloading the spec and pointing a code-generator to it.
HOWEVER, the mandatory vs opt-in part is why we are stuck with JSON-based APIs: consuming/providing the typings becomes extra work when interfacing/playing with an API from any dynamic language such as JS, Python, Ruby, etc., whilst from typed languages you really only pay a relatively minor performance penalty that people won't care about in dev, right up until it hits in production (assuming you have a modern JSON serializer library built for speed).
Anything resembling "bloat"/complexity (sadly types is in this category from the perspective of many anti-TS JS developers, esp as API-typings still won't come from the host language but having to be provided separately) will be offputting and lead to any such spec being ignored by a substantial chunk of developers.
OpenAPI isn't worth using. It's so pathetically bad, that it's not worth the effort.
Those who use it (eg. Kubernetes) do it for show, to put another badge on their Github repo page, to sound more sophisticated and "in the know" than they actually are.
From my experience, code generators aren't stable and reliable enough. Whenever I was in a project where either side (backend/frontend) generated code from an OpenAPI/Swagger spec, it broke at some point - weeks after introduction, by someone using a new feature or syntax in the specfile.
Given that they produce code that is unreadable and unmanageable for humans, ejecting was a terrible solution.
Out of curiosity, what front/backend languages were involved in this? OpenAPI/Swagger has various functionality for supporting polymorphism, but this is usually where problems crop up in my experience, due to the mismatch between how languages support it.
In our setup, the server is usually C# and we restrict the API's to primitives, records/objects (without inheritance) and lists of the previous. None of this has ambiguous mappings when it then comes to the frontend (Typescript) side; the Typescript side then mostly benefits from client-side typings for developers, but actual validation isn't always done if we only have SPA consumers (incompatible upgrade -> new cloned endpoint for the duration of the upgrade period).
It was mostly TypeScript and C#. Stuff gets worse when you use a generated swagger specfile from code, and then try to feed this into any other generator.
But when hand-writing OpenAPI specs, there are a lot of features that are not understood by generators or lead to subtle bugs. Stuff like intersections, mixins (oneOf, allOf, anyOf, not), and union types (with named discriminators) are a thing in OpenAPI but are kind of hard to map to C# or Java for that matter. And the way you can map them is strongly opinionated and might differ between generators (or their templates). But other stuff too, like unsupported `$ref` statements, was giving us a headache.
If you start out with generators and are willing to compromise on API cleanliness (i.e. workaround by changing the OpenAPI spec when a bug in the generator arises) you might be fine.
But in cross-department or even cross-company teams, with hand optimized API definitions (specfiles), I wouldn't take the risk. I cannot emphasize risk here enough. Yes, you might be fine for a while. But you receive a new version of the specfile, and suddenly the output generated code breaks, won't even compile. This is a huge problem. You can fight whoever gave you the specfile to revert the changes, but if it passes the OpenAPI test suite, you will have a hard time arguing. So you suddenly have to eject the generated code and hope you can make changes - usually its unmaintainable, unreadable generated code. Alternatively, this is the moment when you rip out the generator and re-implement everything by hand.
Just beware, when chosing a generator, you make a bet that it is mature and bug-free enough to handle all cases that will come during project lifetime - and you bet a significant amount risk to that maturity. And if you know of any rock-solid, battle-tested generator for C# and/or TypeScript, please share. I only see either outdated ones with 100+ github issues per year, or relatively new players that lack a track record. And neither is company/money backed, so you rely on someones open-source work.
It helps a lot to keep your interfaces simple and flat. Probably too late for your project but I always advocate for this - then integrating with other systems and languages is much easier.
In practice, SOAP was the one that brought the best functionality.
Which is telling, because it's a badly defined beast that mostly doesn't work. But it did the type checking, automatic (client) endpoint creation, self-documentation, reflection-based (server) endpoints, support for lots and lots of languages (as long as you don't use something like an array)...
People keep promising new protocols that do one or two of those things on the modern world, but they always fail to deliver. AFAIK, the last promise was grpc, but doesn't even use a widely supported subset of HTTP so it has no chance of ever going anywhere.
On the one hand the failure is legit, success foiled by rampant vendorization & byzantine inscrutable tooling pumping out forsaken impossible to introspect generated code.
But also CORBA was, as far as I can tell, maybe the first programming tech to ever get cancelled. That everyone went Vampire Castle on & decided collectively to crap on & lambast. What before CORBA had such a loyal hate club, was such a bandwagon to disdain? It was a boogeyman tale, one we collectively whispered ourselves away from, even though most people had little first-hand sense of it.
It'd be interesting to try to dig up some of the old GNOME 2 efforts that did have Corba stuff in various apps & systems. Try to find some authors to say how they felt at the time. Go look at the code & see how it was.
The idea of having objects that can connect across boundaries is indeed super interesting. I love webdev & it's great, but we have noticeably not gotten very far in 20 years. GraphQL was one would-be normalizing upstart, and ideas like resolvers have some merit, but it's still chiefly a state transfer system (which also has a real-time subscribe mode). It's still not really a well paved way to connect systems.
Even if we don't have distributed objects, the range of normative things we can do with resources on the web feels like it hasn't greatly expanded. Our efforts are still artisanal, handcrafted by each team. We need to start figuring out, even if not distributed objects per se, how to grow the capabilities of online/connected systems pervasively.
It solved a few problems but the strictness of the bindings was not helpful. The C++-heavy tooling focus wasn't either, and don't get me started on the gap between promise and actual implementation. Once you dealt with different ORBs in a large scale installation (read: 3G networks) it became madness. In my first job I fought it and kept it out. At my third it caught up with me and I had the fun of ripping one vendor out and swapping an open source one in, as the former did not work with the C++ compiler - and we were a tier 1 telco vendor and the ORB vendor was on site with the source code. Total mess.
Yes, but not everything should be. Synchronous RPC works better than async RPC, with fewer failure points, simply because there's only a single line that is the point of the call. Async RPC quickly devolves into a hairy mess.
> Unless you block on every call, which REST doesn't stop you doing.
But the current standards do. If you want `fetch()` to be synchronous, you have to wrap it and fake it.
The article doesn't spend enough time on the fact that actual _interoperability_ didn't actually come to CORBA until it was already past its peak. So a key feature of the technology never really worked.
I worked at a CORBA vendor for a year or two in the 1990s, and even interoperability between languages, platforms and versions OF OUR OWN CORBA implementation was barely supported and down to luck more than anything else.
Note that CORBA existed as a spec for years before actually specifying an actual wire protocol (IIOP) which you'd imagine would be a fairly fundamental part of any distributed tech which had the goal of offering interoperability. And even years after IIOP appeared, getting CORBA to work between vendors' products was basically impossible.
"Interoperability" in effect meant buying 100% into a single vendor's offering and paying a load of money to consultants to manage version upgrades, getting fixes for "niche" platforms, etc. CORBA's notion of "interoperability" was more along the lines of the way old IBM offered "interoperability" between their different mainframes and minis, and not in the way understood today which is vendor neutral and open.
Another poster says that IONA (one of the biggest vendors) believed that they were going to topple Microsoft. In my opinion, it was actually IBM and the old "big vendor" model where they saw their future. They started offering (mostly broken) Transaction Managers, Service Discovery agents, a Message Broker, etc. - as what they hoped would be alternatives to the then dominant big-enterprise tech like CICS/MQ. They imagined a future for big enterprises based around C++/CORBA replacing COBOL/CICS/MQ.
I have some sketchy memories for which I have trouble filling in the blanks about an internship at a bank where I wrote a Java POC for getting data from one of the other departments.
It must have been in 1998 or in 1999 and I was brought in to solve a specific problem, only to find out that the higher ups had no idea that there was a problem and there was no documentation. Eventually I wrote a specification and that POC.
While doing that, I learned that the Java CORBA implementation they used at that time was not yet able to talk to the C++ CORBA implementation from the same software producer. How is that even possible? That's the whole raison d'être of CORBA.
As I said I can't get the details together. I'm pretty sure the hardware was Sun but I'm not sure who made the CORBA implementations. Possibly IBM? It would fit the landscape.
> In my 25+ year career I haven't seen it in use anywhere. I only ever encountered it while getting my CS degree.
Too bad you didn't join the company I left in the 90s. It was used in production.
I learned about it in my CS degree too, and (because I was a fulltime dev at the time as well as a part time student) I went ahead and used it at work[1].
From what I remember of using it in production, the only difficulty I had was in sourcing a free ORB (server software) that supported inter-ORB routing so that I could load-balance.[2]
My employer eventually shelled out a small fortune (it was the 90s, we were drowning in VC money) for something from, IIRC, IBM[3] that ran on Sun Enterprise Servers (another small fortune).
My experience with CORBA was GREAT! I mean, compared to the way we do it now with browser tech:
1. I could use any language to write the client software, not limited to only Javascript.
2. Making a server call from the client was transparent. It looked like any other function call, unlike how it has to be done now using promises/futures or callbacks.
3. The tech supported exceptions which were also transparent to the programmer. In C++ you could do the following and it would work as expected.
try { myObj->foo(); } catch (const CORBA::Exception &) { /* ... */ }
4. It was all strongly typed; if you used an argument with the wrong type you'd get either (in compiled languages) a compilation error or (in Interpreted languages) a runtime exception before the call is made.
5. Developer velocity was great. I wrote my object specification, the tools generated both the server-side and the client-side wrappers, and all I had to do was call them.
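For flavour, the client side of point 5 looked roughly like this with a Python ORB (a sketch from memory, assuming omniORBpy and a hypothetical Pie interface whose stubs came out of the IDL compiler):

    import sys
    from omniORB import CORBA
    import PieModule  # hypothetical package generated by `omniidl -bpython pie.idl`

    orb = CORBA.ORB_init(sys.argv, CORBA.ORB_ID)
    obj = orb.string_to_object("corbaloc::bakery.example.com:2809/PieService")
    pie = obj._narrow(PieModule.Pie)   # type-checked narrowing to the IDL interface
    print(pie.getSize())               # remote call that reads like a local one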
[1] I was young, still in the phase of resume-driven-development.
[2] I used a free ORB written in C++ called either Mico or Micro; I don't remember the specifics.
> Making a server call from the client was transparent. It looked like any other function call
That is a very bad idea and one of the reasons this kind of thing rightfully died out.
Because a server call isn't like any other function call. It has orders of magnitude higher latency, and additional failure modes that you actually have to take care of.
I'm sorry, this is the stupidest thing I see commonly repeated in public discourse about software.
Every single distributed application I've ever worked on in my 30-year career (including working for multiple companies you've heard of) wrapped remote calls in something that looks like a normal function call. It doesn't matter if your low-level RPC stub throws RemoteException or returns RemoteError, somewhere up the call stack someone has wrapped this into a simple method that looks like this:
doSomethingUseful();
In real life, you make a call to a function and you live with the consequences. If you're lucky, the docs let you know the performance and failure characteristics. If not, you make some assumptions. When those assumptions are wrong, you spend some time debugging and profiling.
Adding a bunch of syntactic noise to the callsite doesn't help. The first thing any competent programmer will do is abstract your noise away in convenience methods. Because 99% of the time, it doesn't matter that your call is remote.
Take a look at your own codebase that makes client REST calls to some other service - you may hand-wire a bunch of http calls, but somewhere up the stack there's a function that hides the http mess. Everything below that function is accidental complexity.
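Sketching that pattern (hypothetical service and URL; only the shape matters):

    import requests

    class BillingError(Exception):
        pass

    def create_invoice(customer_id: str, amount_cents: int) -> str:
        """Reads like any other function; the http mess lives only in here."""
        try:
            resp = requests.post(
                "https://billing.internal.example.com/invoices",
                json={"customer": customer_id, "amount": amount_cents},
                timeout=5,
            )
            resp.raise_for_status()
        except requests.RequestException as exc:
            raise BillingError("invoice creation failed") from exc
        return resp.json()["invoice_id"]

    # Callers just write: invoice_id = create_invoice("c-42", 1999)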
> Every single distributed application I've ever worked on in my 30-year career (including working for multiple companies you've heard of) wrapped remote calls in something that looks like a normal function call.
This isn't a problem as long as the returned value provides the right failure semantics (like futures). The problem with trying to encapsulate the network is that deep call chains lead to cascading failures for problems that are common in networks (partitions, latency, etc.). These failure modes also lead to more pervasive use of timeouts in deep call chains, which then introduces non-determinism, which itself makes issues impossible to debug.
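For illustration, the kind of thing meant by "the right failure semantics" (a minimal asyncio sketch; fetch_balance is a hypothetical remote call):

    import asyncio

    async def fetch_balance(account_id: str) -> int:
        ...  # hypothetical remote call

    async def show_balance(account_id: str) -> None:
        try:
            balance = await asyncio.wait_for(fetch_balance(account_id), timeout=2.0)
        except asyncio.TimeoutError:
            # Unknown whether the request ever reached the server; whether a blind
            # retry is safe depends entirely on the operation being idempotent.
            print("balance unavailable")
        else:
            print(balance)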
This is also nonsense. 99% of the time these failure modes are irrelevant. A remote call fails, the error propagates up the call stack, and someone gets an error message. Just like any of the thousands of other things that can produce errors in complex systems.
In the rare case you need to harden a particular call, you add caching or retries or whatever other logic fits your use case. It matters not one bit whether you're using futures or synchronous rpc stubs. Actually it does - synchronous code is easier to harden because it's easier to reason about.
Even javascript added await because it's better to pretend that async code looks synchronous. The failure semantics of "throws an exception" are just fine.
> A remote call fails, the error propagates up the call stack, and someone gets an error message.
Uh-huh, but did the message actually get through? Can they safely just retry? These are very uncommon failure modes on local systems but very common on networked systems. Without a proper stateful abstraction beyond just "procedure call", like a promise, you can't address these failure modes properly.
> In the rare case you need to harden a particular call, you add caching or retries or whatever other logic fits your use case
Which now makes your system nondeterministic like I said.
> Even javascript added await because it's better to pretend that async code looks synchronous
Yes, linear code is easier to read. I don't see what this has to do with anything. The use of promises and await indicates a possibility of failure semantics that would otherwise not be apparent in the program's control-flow.
Yes, you can superficially make this look like synchronous code, but it's not synchronous code.
> Uh-huh, but did the message actually get through? Can they safely just retry?
...
> The use of promises and await indicates a possibility of failure semantics that would otherwise not be apparent in the program's control-flow.
They don't, though. They don't indicate if the message got through. They don't indicate if you can safely retry. Their failure mode is exactly as opaque or as transparent as synchronous calls.
The reason for their existence and mandatory use in Javascript is due to a deficiency of the platform (single thread, so all synchronous calls block).
If the platform was better they would never have existed.
> They don't, though. They don't indicate if the message got through. They don't indicate if you can safely retry. Their failure mode is exactly as opaque or as transparent as synchronous calls.
The promise is at the remote end. Promise resolution is idempotent, so retries always resolve to the same value. These are correct promise semantics as pioneered in the E language.
Nothing personal, but it's annoying when people aren't aware of where these concepts came from in distributed computing history, and I don't like repeating myself:
> Regardless of where promises come from, the fact is that they are locally resolved, not remotely resolved.
Idempotent operations are implicit promises at the protocol level. We're talking about distributed systems here where abstractions and semantics cross machine boundaries, so your local-only focus is not valid. I suggest reading up on the E language via the link I provided.
> Using promises in JavaScript is a hack around the fact that the platform has some pretty large deficiencies.
A single threaded event loop is not necessarily a deficiency.
Again, total nonsense. There's nothing special about a promise. Whatever logic you can build on promises is easier to build synchronously. Everything that applies to building distributed systems applies whether you use rpc stubs or promises. Promises are just noisier and harder to reason about.
Maybe we're speaking past each other. Idempotent operations are implicit promises. Network partitions require idempotency at some level to ensure robustness. That means any robust distributed protocol requires promises at the core protocol level.
Trying to hide the promises behind a synchronous client interface is unnecessarily constraining and inefficient, like requiring large stack contexts that can't be restarted or persisted, and so can't be simply resumed after partitions.
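A sketch of that idea, with all names made up: the client names the operation with its own id, the server resolves it at most once, and a retry after a partition just re-reads the already-resolved value.

    import uuid

    class PaymentService:
        def __init__(self):
            self._resolved = {}  # request_id -> result; the "promise" state lives server-side

        def charge(self, request_id: str, account: str, cents: int) -> str:
            if request_id in self._resolved:       # duplicate delivery or client retry
                return self._resolved[request_id]  # resolves to the same value every time
            receipt = f"receipt-{uuid.uuid4()}"    # do the real work exactly once
            self._resolved[request_id] = receipt
            return receipt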
> That is a very bad idea and one of the reasons this kind of thing rightfully died out.
I agree, but, like everything else that is a bad idea, naming conventions help.[1] Namespacing helped too.
I used naming conventions to ensure that network calls looked different, and namespaces that kept network objects in their own module.
[1] Right now we rely on naming conventions in most codebases to differentiate between #define'd literals and variables (C), between constants and variables (Java), between variables and methods (Kotlin, Java, everything else), between interfaces and classes (C#, C++, everything else too, probably), for everything in Python (pep8).
Using naming conventions to identify remote calls is no different than using naming conventions to identify interfaces.
2: You can do HTTP/Json requests "transparently" just fine, it's an implementation issue. The issue is if you're doing that from JS you're blocking the only thread and even in a server/desktop language like Java, C# or C++ you're going to be blocking the entire thread and possibly degrade performance. This wouldn't have been any different with CORBA (But people ignored thread cost issues back in those days).
3: again implementation issue.
4: swagger/openapi will help on the compile side (sadly not runtime with all languages but again, language/impl issue)
5: generators are still a thing and available if you look.
For each thing you list as an implementation issue, it's a problem.
My complaint is not "these things don't exist now", it's "they're not part of the standard". For each "it's an implementation issue", you have multiple incompatible competing mechanisms. With CORBA, while the standard was large and stupid, the very basic thing (make an RPC call, get the response and handle any errors) was supported by the standard in a non-ambiguous and practical way.
So, yeah, the fact that something that was available in the 90s is now available depending on implementation is the problem.
I graduated in 2018 and it was only mentioned off hand in a systems programming class in a discussion of RPC frameworks. That was basically all I knew about it until I became a maintainer of a CORBA implementation as part of my job. It's not the main part of my job and honestly I still don't know a lot about actually using CORBA. The only part I know a lot about is the interface description language (IDL) used for code generation, which we use a ton of in another framework that I'm actually paid to maintain.
I worked at a defense contractor small business for a short while that used CORBA in some of its products. Previous engineers declared it impossible to update, so they asked me to do it. I had to upgrade the C++ version for the first time in a decade. I didn't find it very fun, but it was satisfying to see an ancient program compile on a more modern toolchain.
I am still working on a code base that has slowly evolved since around 2000, with CORBA at its core as the internal communication bus between actors, so we still use it. We likely want to remove it because it is a bit of an overkill for our application, but at the same time it is working and no one wants to pay for a rewrite.
KDE tried and discarded CORBA in KOM/KParts well before GNOME got around to putting it in production. DCOP was a response to CORBA's complexity and fragility, and was almost "reinvented" in the form of DBUS.
And, yeah, if memory serves CORBA worked well enough for GNOME but not KDE because the C++ story was not great. Whatever bindings were being used depended on features not well supported by G++, templates being painfully slow to compile, etc.
if my memory serves me right, corba (in the form of ORBit) worked well enough for gnome because almost nothing used it.
kde in its turn was heavily reliant on it (mico) for embedding and communication. but compilation was indeed painfully slow (i still remember never-ending kde compiles in 99-2000) so they came up with dcop/kparts after having a few drinks and deciding that they could do better
Apart from the template bloat, at the time CORBA meant exceptions IIRC. The bits of GNOME that used orbit dealt with the same errors mostly by ignoring the return codes.
In the end, both teams decided that assuming components running in remote processes was the wrong default. I'm not sure what GNOME replaced them with, but KParts rescued KDE 2.0
yea. kparts were great. compilation went down from neverending to something reasonable. and konqueror's ability to embed any other application (because they were written as kparts) was great.
in gnome i think they did something like "gparts" based on glib. but because the gnome desktop was composed in large part of applications written in gtk and not in gnome libs it never had too much uptake (unlike in kde). but i might be wrong - i've been a kde user since 1.0 alpha4 and gnome was a parallel universe
> SOAP had serious technical shortcomings, but, as a market strategy, it was a masterstroke.
Wrong. SOAP descended quickly into the "whatever happened to" category. Like CORBA. I'm happy to say I threw out SOAP and used REST instead at Google, in practically my first act there:
Why are you taking the sentence out of context to change its meaning? TFA says it was a good business move by Microsoft for Microsoft because it fragmented the competition which had been consolidating around CORBA. What happened to it afterwards is irrelevant.
We might see a comeback of something like CORBA. One of the things CORBA was supposed to do was to enable a mechanism to ask a server "How do I talk to you"? While that wasn't really used much, it has real potential for the LLM era. We need technologies where a client asks the server how to talk to it, then generates the appropriate requests automatically.
For example, you should be able to ask anything with a shopping cart how to buy stuff. Especially for B2B E-commerce.
I think negotiation can be a bad thing. Instead of just making a standard so a server can say "This server complies with the BuyCrap 2.0 protocol", and a way to discover this fact, people make up garbage like "This server has a method called AddToCart which takes a string and an integer in this range".
The introspection mostly is used to ask "do you have this feature I already know about from the spec that really should be a mandatory part of the profile".
It just becomes a way to have 850 variants and optional features instead of a true standard, meanwhile the discovery layer risks taking more effort than a one size fits all protocol that covers a use case would have.
Sometimes it's cool for development though, to get a detailed list of a server's capabilities, like some level of built-in documentation, but in practice it seems to be about the same level of effort as just reading a REST API document anyway, at least for basic use cases.
> about the same level of effort as just reading a REST API document anyway, at least for basic use cases.
The idea is to automate that, using a LLM to read a description and create the appropriate reply messages, retrying until it works. You need at least a rough specification and semi-useful error messages. Then let the system work until it has established communication.
My first exposure to CORBA was 1996/97, where it was used to deploy a major distributed infrastructure. It worked quite well to manage and integrate this large scale deployment, some of it expanding beyond just one datacenter, so it was nice to have a really good control system to manage thousands of services across such a large estate with minimal user intervention. Applications were patched and upgraded through the CORBA services and we scaled the system to provide national and international infrastructure services. I would wager, nearly 20 years since I last had insight into how that infrastructure worked, that this is still running… Obvious use case is telcos.
There have been many Silver Bullets in software development, and they probably all had some real value. But it seems they get latched onto and over-hyped, particularly for decision makers and funders.
In reality, what is probably most missing is consistent effort toward better organization and complexity management of problems (leading to effective solutions).
Without naming current 2-letter hype trains, we're doing it again after the blockchain stuff finally fizzled. Blockchain didn't magically solve our organizational and complexity problems, and so we need a new great hope.
New tech and patterns are exciting (probably mostly because of the promise of making our lives easier), but eventually one stops accepting them with big expectations. Ultimately it seems like instead of addressing the real difficulty in our technical solution development we create new distractions and use them to hide ourselves from the bigger problem that we can't seem to deal with.
"In the early ’90s, persuading programs on different machines to talk to each other was a nightmare ...
(Other early middleware, such as Sun ONC... was tied to C and Unix and not suitable for heterogeneous environments.)"
I disagree here. SUN RPCs (ONC) based on XDR worked well and were quite enjoyable to use.
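For what it's worth, XDR really was small enough that Python carried it in the standard library for decades as xdrlib (deprecated in 3.11 and since removed); a taste:

    import xdrlib

    p = xdrlib.Packer()
    p.pack_int(42)                     # 4-byte big-endian int
    p.pack_string(b"blueberry pie")    # length-prefixed, padded to a 4-byte boundary
    wire = p.get_buffer()

    u = xdrlib.Unpacker(wire)
    assert u.unpack_int() == 42
    assert u.unpack_string() == b"blueberry pie"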
CORBA was victim of the Second-system effect.
And CORBA meant C++ in the early '90s. Memory management was a nightmare; C++ and CORBA APIs don't go well together.
I was about to opine that this title was an homage to the 2009 GI Joe movie, but then I realised it had been written prior to the film's release. I can only assume that in actual fact, "GI Joe: The rise of COBRA" was in fact an homage to this article.
It is being overlooked that one of the big impediments to CORBA was the literal cost of licenses from the vendors. They all wanted to make big bucks. It kind of reminds me of the early days of LISP and Smalltalk: vendors were more interested in cashing in than spreading the technology. To be sure, for a big company it wasn't that huge, but when the Dot-com boom went bust and companies were trying to find ways to save money, it was a big flag on the expense sheet given that alternatives were becoming available.
There's TAO which is an open source CORBA implementation that works just as well as any of the commercial implementations. Nobody in their right mind would use CORBA for a new project of course.
My father's business partner was one of the founders of Iona Technologies, the makers of Orbix, a CORBA object request broker. Back in the mid 1990s they had visions of being bigger than Microsoft, but events and progress overtook them, and the company was sold for a pittance in the mid 2000s.
I still use CORBA with huge success. For example, it is a backbone of several commercial systems that connect vending units in different regions and even countries.
For what it's worth, CORBA is hard to use from languages without proper reflection. But with something like Java or .NET - it's a breeze.
And before you start asking questions why - no, REST or gRPC do not cut it. They are too primitive and come with poorly defined semantics. Their HTTP/1.1 transport is rudimentary, though HTTP/2.0 made it better by allowing multiplexed streams with an elusive possibility of proper bidirectional communication. But CORBA already had all that, and more, since 2005! And I use it to the fullest extent in the distributed systems my company designs.
gRPC-esque protocols started to appear as protobuf serialization (for procedure calls) on top of a REST endpoint (for tunneling). It officially became gRPC and "coupled" with HTTP/2 much later.
I worked with what was arguably the first large-scale CORBA product (IBM/Tivoli's TME10 product).
The main problem with CORBA was speed. At the time, authentication was brutally slow... and there was a separate problem with figuring out which object was canonical.
OTOH, being able to patch/override any function with any language was pretty handy. Of course, that became a configuration and operational nightmare, because custom configured overrides would get wiped out when upgrades happened. And it made it difficult for support to figure anything out, since many places were heavily customized by consultants.
It's an interesting idea that foundered on the technological limitations of the era, like OpenDoc, Telescript, etc.
I interviewed with AT&T for a job and the interviewer was sure a piece of concurrent code was sound and I was sure it was not, but couldn’t articulate it at the end of a long day.
I would have been working in CORBA when it was already on its way out. Silver lining: that was a bullet I dodged entirely, except of course having to pay the opportunity cost all early Java devs paid because there was an ORB in the JDK for ten+ years.
I played with various orbs on and off for many years. The only system I ever found that somewhat worked was VNC which used omniORB. However VNC is mainly about remote frame buffers so I think it's fair to say VNC achieved a modicum of success in spite of using CORBA.
The biggest problem I ran into was that ORBs from different manufacturers just didn’t play nice with one another. And then SOAP and Enterprise Service Bus repeated the whole damn thing.
I am pretty sure ten years from now people will be rolling their eyes on the acronyms of today: so-called REST, json schema and whatnot.
Let's face it, the problem that people try to solve (frictionless, large scale exchange of complex information between computing devices) and the conditions in which they try to solve it (as part of a few dominating corporations with random business models) are making for a very difficult challenge.
There is pressure to find workable solutions as the benefits are tremendous, but so far it has not worked.
CORBA is one of the first technologies that I came across and saw that it was kind of 'hot', but decided I'd learn it if I ended up really needing it. And I never did.
It's important to learn when to ignore things in this field. It's never possible to be 100% sure, but there are always going to be 'new! hot!' things.
Just this headline is triggering repressed memories of building a component in a simulation federation for a defense subcontractor in the early 2000s on an awful realtime CORBA system.
My first grey hairs started popping out during that project.
DonHopkins, 6 months ago, on: The Dawn and Dusk of Sun Microsystems [video]:
>Sun started to turn into DEC when the manufacturing people started getting hired from DEC into Sun.
That is precisely what happened. Sun also hired a whole bunch of frat boy brogrammers and incompetent bozogrammers from HP and AT&T, too.
I have a lot of respect for the old HP and DEC, but the charlatans that Sun hired from HP and DEC who perpetrated Project DOE (Distributed Object Everywhere) and CORBA were a completely incompetent turkey farm who sabotaged Sun and dragged it into the ground.
We used to call it Project DOPE (Distributed Object Practically Everywhere), and the OMG (Object Management Group) was better described as OMFG. Then it took so long to ship NEO that they should have called it NEOLD.
>SunSoft is delivering the first component against its vision of Project DOE. In February 1991, SunSoft and Hewlett-Packard (HP) developed the industry's first Distributed Object Management Facility (Distributed OMF). This was submitted to the Object Management Group (OMG). In June, SunSoft added to its object technology foundation with the introduction of ToolTalk. The product has been endorsed by a number of leading software vendors including Lotus Development Corp., Cadence, Valid and Clarity Software. Other elements of Project DOE will be introduced later this year.
>New York City -- Perhaps the only vaporware touted for a longer period of time before its release than Windows 95 was Sun's Project DOE. This ambitious object-oriented programming toolkit and distributed operating environment that offers built-in network awareness has arrived at last. The company chose a hastily planned morning press event during Unix Expo to offer details on the software Sun's talked about for almost five years.
>The software and programs making up Project DOE (Distributed Objects Everywhere) are now under the umbrella term "Neo," a word Sun CEO Scott McNealy joked doesn't stand for anything in particular except it being the last three-letter word not trademarked in the US. (Apparently, the second to the last was "JOE," a term Sun picked up for its Java application development tools.)
Then once Java became popular, Sun was overrun by enormous hordes of minions jumping on the Java bandwagon, who just wanted to work effortlessly for a successful company instead of working hard to make a company successful (just as JWZ observed about Netscape, too).
McNealy's worst enemies weren't at Microsoft, they were only himself and the other useless idiots he hired after selling out to AT&T, letting all those DOEZOS on the bus, and rolling out the Java Juggernaut.
The only misguided lesson Scott McNealy learned from his tragic failure driving Sun into the ground was to put all his wood behind one arrow of Putin's useful idiot Trump, instead of so many useless idiots from AT&T, DEC, and HP.
DonHopkins, March 1, 2020, on: Sun's NeWS was a mistake, as are all toolkit-in-se...:
Yes you're definitely in the ball park with a beer and a hot dog -- there was a huge amount of corporate baggage. The hype and corporate bullshit that surrounded Java is a good example of what that corporate baggage would have been like if it had been deployed for NeWS's benefit instead of Java's.
If Sun had put as much energy into promoting and supporting NeWS as they did with Java, we would probably live in a very different world today.
Sun turned a corner when they abandoned their Berkeley hippie BSD roots and got into bed with AT&T / SVR4 / Solaris, and that changed a lot of stuff for the worse, making it a lot harder to do things like give away the source code to X11/NeWS. A lot of people from different companies who used to be Sun's enemies, and who had extremely different philosophies and antithetical approaches to "open software", joined Sun and started influencing and managing its policies and projects.
A disastrous example was the Distributed Objects Everywhere project and CORBA fiasco, which was originally the crazy idea of a bunch of people from HP and DEC, Sun's former nemeses, who then came to Sun and started pushing it into everything, to the detriment of NeWS and other older projects at Sun. Some of the problematic people and armchair architectural astronauts that Sun imported and put in charge of DOE/CORBA, like Steve MacKay and Michael Powell, were worthless corporate bullshitters whose main goals were to establish and maintain a hegemony, and they kept their grandiose plans in their heads and never wrote anything down or made any hard decisions or came up with anything concrete, because they didn't want to be pinned down to committing to something, when they were actually in way over their heads.
The whole point of the incredibly complex software they finally developed was interoperability with other companies' compatible software, but in reality none of it actually worked together. It only talked to itself. SLOWLY.
Since DOE was intended to run everywhere and talk to everything but actually didn't, they should have called DOPE for Distributed Objects Practically Everywhere.
DOPE was a complete failure at its stated mission, and it had ridiculously costly overhead and complexity. When they finally delivered something years behind schedule and lacking crucial promised features, it actually required TWO CDROMs to install. (You'd think they could have distributed a distributed network object system over the network, instead of via CDROM, but nooooo: it was just too big to download.) And in the end, nobody actually used "DOE" or "NEO" for anything consequential. They wasted a spectacular amount of time, energy, money, careers, and good will on that crap.
And then when Java finally came along, the same meddlesome corporate baggage handlers and armchair architectural astronauts went into overdrive to evangelize and promote the Java Juggernaut. And even more of them flocked in droves to Sun to jump on the Java bandwagon. If it was bad after the invasion of System V / AT&T / HP / DEC minions, things got much worse once the Java zombies started arriving in teeming brain-eating hordes to get their part of the action in response to all the hype. The original Java team was brilliant, and there were some extremely excellent people working on it, but they were totally outnumbered by the dead weight of all the hangers-on who didn't want to work hard to make a struggling company great, but just wanted an easy job at a secure company that was already great.
If Sun had shown the commitment and dedicated the resources to NeWS that they did to DOE and Java, things would be a lot different. And it would have probably also turned out terribly, for all the same reasons.
JWZ said the same kind of thing happened at Netscape, too.
>This is starting to sound familiar -- posted by gothzilla on Slashdot, Thursday March 10, 2005:
>I remember reading JWZ's blog back in the Netscape days. I remember one entry in particular where he noted that Netscape had changed. It used to be full of people who wanted to help create a great company. It turned into a place full of people who just wanted to work for a great company. The people who live to help create get replaced by those who want to ride on their coat-tails. This happens when businesses become successful. Everything changes. Like the band that was good friends and partied together every night. They get signed, shit gets serious, and suddenly they're fighting and arguing about things till they break up and go their separate ways.
>From an old post in his blog:
>What is most amazing about this is not the event itself, but rather, what it indicates: Netscape has gone from ``hot young world-changing startup'' to Apple levels of unadulterated uselessness in fewer than four years, and with fewer than 3,000 employees.
>But I guess Netscape has always done everything faster and bigger. Including burning out.
>It's too bad it had to end with a whimper instead of a bang. Netscape used to be something wonderful.
>The thing that hurts about this is that I was here when Netscape was just a bunch of creative people working together to make something great. Now it's a faceless corporation like all other faceless corporations, terrified that it might accidentally offend someone. But yes, all big corporations are like that: it's just that I was here to watch this one fall.
>Perhaps the same fate awaits Mozilla. Hopefully not, but when your product becomes as successful as Mozilla and Firefox have, things do change and change is inevitable. It all comes down to how the people involved with the projects handle the change.
>Mozilla did rise from the ashes of Netscape though. Hopefully some of the original Netscape people are still around to help lead Mozilla in the right direction, using their experience from the crashing and burning of Netscape in the late 90's.
``I have yet to come across so much self-righteous bullshit as when I gaze upon the massive heap of crap that is the jwz web experience.''
-- an anonymous poster to slashdot.org, 1998.
I'm not saying it always has to end in tragedy: C# and TypeScript turned out beautifully, given the constraints they had to deal with, in spite of the fact that they came from a giant corporate behemoth like Microsoft. (Although I'm sure there's a lot of bullshit going on behind the scenes, the trend is to make them more open and community driven.)
Yes, I am aware. I was at a Usenix conference where Rob Pike presented a paper on it, back when it was a bright idea out of Bell Labs. It is the curse of brilliant people that they see too far into the future, get treated as crazy when they are most lucid, and get respect when they are most bitter [1]. I was working for Sun Microsystems at the time, and Sun was pursuing a strategy known as "Distributed Objects Everywhere" (DOE), which insiders derisively called "Distributed Objects Practically Everywhere" or DOPE; it was thinking in terms of networks of 100 megabits with hundreds of machines on them. Another acquaintance of mine has a PDP-8/S; this was a serial implementation of the PDP-8 architecture, which Gordon Bell did well before serial interconnects made sense. It was a total failure; the rest of the world had yet to catch up. Both Microsoft and Google have invested in this space; neither has published a whole lot, but every now and then you see something that lets you know somebody is thinking along the same lines, trying to get to an answer. I suspect Jeff Bezos thinks similarly, if his insistence on making everything an API inside Amazon was portrayed accurately.
The place where the world is catching up is that we have very fast networks and very dense compute. In the case of a cell phone you see a compute node which is a node in a web of nodes which are conspiring to provide a user experience. At some point that box under the table might have X units of compute, Y units of IO, and Z units of storage. It might be a spine which you can load up with different colored blocks to get the combination of points needed to activate a capability at an acceptable latency. If you can imagine a role-playing game where your 'computer' can do certain things based on where you invested its 'skill points', that is a flavor of what I think will happen. The computers that do shipping, or store sales, will have skill points in transactions; the computers that simulate explosions will have skill points in flops. People will argue whether the brick from Intel or the brick from AMD/ARM really deserves a rating of 8 skill points in CRYPTO.
[1] I didn't get to work with Rob when I was at Google, although I did hear him speak once and he didn't seem particularly bitter, so I don't consider him a good exemplar of the problem. Many brilliant people I've met over the years, however, have been lost to productive work because their bitterness at not being accepted early on has clouded their ability to enjoy the success their vision has seen since they espoused it.
>To create quality software, the ability to say “no” is usually far more important than the ability to say “yes.” Open source embodies this in something that can be called “benevolent dictatorship”: Even though many people contribute to the overall effort, a single expert (or a small cabal of experts) ultimately rejects or accepts each proposed change. This preserves the original architectural vision and stops the proverbial too many cooks from spoiling the broth.
“Focusing is about saying no.” -Steve Jobs, WWDC ‘97 -- As sad as it was, Steve Jobs was right to “put a bullet in OpenDoc’s head”.
DonHopkins, Jan 24, 2018, on: Ted Nelson on What Modern Programmers Can Learn fr...:
In an ideal world we would all be using s-expressions and Lisp, but for now XML and JSON fill the need for language-independent data formats.
>Not trying to defend XSLT (which I find to be a mixed bag), but you're aware that its precursor was DSSSL (Scheme), with pretty much a one-to-one correspondence of language constructs and symbol names, aren't you?
The mighty programmer James Clark wrote the de facto reference SGML parser and DSSSL implementation, was technical lead of the XML working group, and also helped design and implement XSLT and XPath (not to mention expat, Trex / RELAX NG, etc.)! It was totally flexible and incredibly powerful, but massively complicated, and you had to know Scheme, which blew a lot of people's minds. But the major factor that killed SGML and DSSSL was the emergence of HTML, XML and XSLT, which were orders of magnitude simpler.
There's a wonderful DDJ interview with James Clark called "A Triumph of Simplicity: James Clark on Markup Languages and XML" where he explains how a standard has failed if everyone just uses the reference implementation, because the point of a standard is to be crisp and simple enough that many different implementations can interoperate perfectly.
A Triumph of Simplicity: James Clark on Markup Languages and XML:
I think it's safe to say that SGML and DSSSL fell short of that sought-after simplicity, and XML and XSLT were the answer to that.
"The standard has to be sufficiently simple that it makes sense to have multiple implementations." -James Clark
My (completely imaginary) impression of the XSLT committee is that there must have been representatives of several different programming languages (Lisp, Prolog, C++, RPG, Brainfuck, etc) sitting around the conference table facing off with each other, and each managed to get a caricature of their language's cliche cool programming technique hammered into XSLT, but without the other context and support it needed to actually be useful. So nobody was happy!
Then Microsoft came out with MSXML, with an XSL processor that let you include <script> tags in your XSLT documents to do all kinds of magic stuff by dynamically accessing the DOM and performing arbitrary computation (in VBScript, JavaScript, C#, or any IScriptingEngine-compatible language). Once you hit a wall with XSLT you could drop down to JavaScript and actually get some work done. But after you got used to manipulating the DOM in JavaScript with XPath, you begin to wonder what you ever needed XSLT for in the first place, and why you don't just write a nice flexible XML transformation library in JavaScript, and forget about XSLT.
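That "skip XSLT, just query the DOM with XPath from a general-purpose language" move isn't MSXML-specific. Here is a small sketch with the JDK's built-in DOM and XPath APIs; catalog.xml and the XPath expression are invented for illustration:

    import javax.xml.parsers.DocumentBuilderFactory;
    import javax.xml.xpath.XPathConstants;
    import javax.xml.xpath.XPathFactory;
    import org.w3c.dom.Document;
    import org.w3c.dom.NodeList;

    // Parse a document, then select nodes with XPath and process them in ordinary
    // code, instead of expressing the transformation as an XSLT stylesheet.
    public class XPathInsteadOfXslt {
        public static void main(String[] args) throws Exception {
            Document doc = DocumentBuilderFactory.newInstance()
                    .newDocumentBuilder()
                    .parse("catalog.xml");                       // hypothetical input file
            NodeList titles = (NodeList) XPathFactory.newInstance().newXPath()
                    .evaluate("//book[@year > 2000]/title", doc, XPathConstants.NODESET);
            for (int i = 0; i < titles.getLength(); i++) {
                System.out.println(titles.item(i).getTextContent());
            }
        }
    }

The selection stays declarative, but the "transform" part is just whatever loop you want to write.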
Excerpts from the DDJ interview (it's fascinating -- read the whole thing!):
>DDJ: You're well known for writing very good reference implementations for SGML and XML Standards. How important is it for these reference implementations to be good implementations as opposed to just something that works?
>JC: Having a reference implementation that's too good can actually be a negative in some ways.
>DDJ: Why is that?
>JC: Well, because it discourages other people from implementing it. If you've got a standard, and you have only one real implementation, then you might as well not have bothered having a standard. You could have just defined the language by its implementation. The point of standards is that you can have multiple implementations, and they can all interoperate.
>You want to make the standard sufficiently easy to implement so that it's not so much work to do an implementation that people are discouraged by the presence of a good reference implementation from doing their own implementation.
>DDJ: Is that necessarily a bad thing? If you have a single implementation that's good enough so that other people don't feel like they have to write another implementation, don't you achieve what you want with a standard in that all implementations — in this case, there's only one of them — work the same?
>JC: For any standard that's really useful, there are different kinds of usage scenarios and different classes of users, and you can't have one implementation that fits all. Take SGML, for example. Sometimes you want a really heavy-weight implementation that does validation and provides lots of information about a document. Sometimes you'd like a much lighter weight implementation that just runs as fast as possible, doesn't validate, and doesn't provide much information about a document apart from elements and attributes and data. But because it's so much work to write an SGML parser, you end up having one SGML parser that supports everything needed for a huge variety of applications, which makes it a lot more complicated. It would be much nicer if you had one SGML parser that is perfect for this application, and another SGML parser that is perfect for this other application. To make that possible, the standard has to be sufficiently simple that it makes sense to have multiple implementations.
>DDJ: Is there any markup software out there that you like to use and that you haven't written yourself?
>JC: The software I probably use most often that I haven't written myself is Microsoft's XML parser and XSLT implementation. Their current version does a pretty credible job of doing both XML and XSLT. It's remarkable, really. If you said, back when I was doing SGML and DSSSL, that one day, you'd find as a standard part of Windows this DLL that did pretty much the same thing as SGML and DSSSL, I'd think you were dreaming. That's one thing I feel very happy about, that this formerly niche thing is now available to everybody.