
I remember how CORBA was all going to be the future.

I wrote dozens of programs in C++ and Python using IDL.

Funny, though, that we now find ourselves doing the exact same thing but it is "an API", has no type-checking, no error handling, requires more work, is limited to particular languages and is slower.

Distributed objects were ahead of their time by maybe 15 years. Now we're stuck in async API-over-JSON hell forever.



We've finally come full circle with gRPC, which is basically a simplified CORBA. I went through the whole cycle, and while I don't exactly miss CORBA, I do wonder why it took us so long to get back to that baseline of functionality.


A huge influx of “programmers” who knew only JavaScript and thought that HTTP was the only network protocol, that’s why.

But seriously, the root cause of the regression is the web and its associated limitations, quirks, and bad habits.

For example, many web developers started out with eval() and maybe JSON, which they thought was simple because it is text based and can be interactively experimented with in the browser console. They could “get started quickly” and they were “moving too fast” to think about codegen tooling, protocol efficiency, typing, or security. Doesn’t matter: hook up the form, push it to prod!

It’s a cycle that happens over and over in IT, a variant of the eternal September.


You can mock the "moving fast" trope, but the truth is that moving fast has actually been very important for most web-related software development since the beginning. The reason is that the web is a super-competitive environment where the speed of experimentation and adaptation wins most of the time - even if we, as engineers, might not like that.


The problem is that the "fast" part is in quotes because it's glacially slow as soon as you get past the initial MVP or demo.

I see this idiocy everywhere, to the point that many clouds are now advertising "Day 2 operations" like it's a new thing to be doing things for more than one day.

Everywhere you turn, it's: "Get started quick", "Quickstart", "Deploy to <cloud>!", etc...

What to do after deployment? ... crickets chirping, wolves howling, and a tumbleweed rolling slowly past.

CORBA, Java Remoting, .NET WCF, WS-*, etc... are complex technologies that can't be trivially poked and prodded with curl or a REPL. What they provide is tooling, long-term velocity, and safety even with hundreds of developers on the team.

Heck, even as a solo developer I strongly favoured the "proper" RPC systems. I could define a class type once, and then Visual Studio or IDEA or whatever would spit out tens of thousands of lines of error-free boilerplate code that otherwise I would have to hand-roll.

You can't imagine how depressed it makes me when I see some Web API guide that starts off with a cheery "this is a simple..." and then there's five hundred pages of English text.

Look. Sure, if you're an Indian outsourcer developer, this is great. You can bang out monotonous repetitive code like a meat robot and collect a pay check your subsistence farmer parents could only dream of. You can do this for years, and never have to think, or be creative, or risk your job security.

But people that need to get things finished, past day two into day five hundred? We use the good stuff, with automation.

Half the world still builds roads with hand tools. Where I live, we build roads with heavy machinery.

That's the difference. Any idiot can pick up a hammer and say "road building is easy". They'll still be building that road with an army of workers a year later.


I upvoted your comment because it's largely accurate.

However, I would caution against blanket stereotypes like this:

> Look. Sure, if you're an Indian outsourcer developer, this is great. You can bang out monotonous repetitive code like a meat robot and collect a pay check your subsistence farmer parents could only dream of. You can do this for years, and never have to think, or be creative, or risk your job security.

A lot of the outsourced Indian devs do indeed match that description, BUT the majority of them that I know of don't want to "bang out monotonous repetitive code like a meat robot and collect a pay check ".

They want to create novel and creative things like everyone else. That they're stuck in the modern equivalent of the assembly line is really not their fault, and most devs, outsourced or not, are in that space anyway.


> BUT the majority of them that I know of don't want to "bang out monotonous repetitive code like a meat robot and collect a pay check ".

Well, the majority you know are not the majority of the actual IT workforce in India. As someone who worked for a decade or so with an Indian IT vendor and still works with outsourced developers today: most of them have no interest beyond meeting the client requirement, which is just code for a massaged resume containing the exact keywords the client put in the job requirement.

> They want to create novel and creative things like everyone else. That they're stuck in the modern equivalent of the assembly line is really not their fault, ..

Well, they could've taken a lower-paying job that's creative. But IT jobs offer a high income while sitting in an A/C office and following the instructions of a manager/client.

Nothing wrong with that; I am doing the same. It is not a fault, it is an explicit choice they have made.


Well said. And they take to it like a duck to water when given the chance to do that novel, creative, tool-driven work (and are paid well).

Though in my direct example I’m talking about Malaysian outsourcing devs!


You have some points, but this scenario is also an industry-wide symptom of what sloppy and/or inexperienced developers cause.

As an experienced developer in this environment, you have to be humble to the fact that money still rules these interactions, and that apparent non-progress accompanied by vague promises can look, from a _client_ perspective, like no progress at all from seemingly total beginners.

Moving "fast" is often needed to assure people and for grunt-work it's absolutely fine as long as it doesn't build in bad requirements into the system...

And that brings us to the IMPORTANT point: moving fast gives you good prototyping ground and a chance to check assumptions with the client/end-users, but you need experienced people to put on the brakes and remove bad assumptions from the codebase before they lead to second-order bloat that makes them impossible to weed out.

In "enterprise" dev the core task that you don't want to hand off to juniors is often database models, data invariants, synchronization semantics,etc that inexperienced people will just try to paper over with increasing amounts of code when wrong and thereby creating huge swaths of code that just cements the bad assumptions into place and creates this glacial progress that you mentioned.


I disagree. From what I observed, it sped up developers for about two months, and then they started to get stuck in their own webs.


gRPC arguably isn't anything like CORBA. It's just RPC.

The thing that CORBA (and COM/DCOM) gives you on top of RPC is object references, what's called location transparency in DCOM. That is, if you have an API like this:

    Pie* getPie()
then the remote server simply returns an object reference. What you have on the client-side is a "stub" (what COM/DCOM calls a proxy), which might look like this:

    class Pie {
      int getSize()
    }
The whole idea is that Pie pretends to be a local object, but it is in fact just a thin wrapper; calling getSize() results in a remote network call to the server.

gRPC and other modern RPCs don't have object identity or objects. They just return structs, and you have to invent your own IDs. So in gRPC, the server may expose:

    rpc GetPie(PieRequest) returns (PieResponse) {}
But PieResponse is just data. It has no actions.
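
To make the contrast concrete, here is a rough sketch (in TypeScript terms, with made-up names, not any real generated code) of what the client surface ends up looking like: identity is just a field you invented, and any "method" on the pie is another RPC.

    // Hypothetical sketch of a gRPC-style generated client, expressed in TypeScript.
    // The response is plain data; identity is a field you invent and carry around.
    interface PieResponse {
      pieId: string;   // hand-rolled identity, not an object reference
      size: number;
    }

    interface PieServiceClient {
      getPie(req: { flavor: string }): Promise<PieResponse>;
      // "Calling a method on the pie" is just another RPC that passes the ID back.
      getPieSize(req: { pieId: string }): Promise<{ size: number }>;
    }

Compare that with the CORBA-style stub above, where calling pie.getSize() on a local-looking object silently turns into a network round trip.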

Where things become insane is when you have big graphs of objects, some of which may be remote references (stubs) and some local to your own app. Maybe you accidentally store a reference in a global/static variable, which means your application is effectively holding the remote object "open". Performance typically goes out the window because just calling a method, even one that only returns data like getSize(), results in a network round trip.

I still think the underlying ideas were sane: define type-safe, portable interfaces using an IDL, then generate client and server code from that IDL. RPC has been reinvented many times, and gRPC won't be the last iteration. I do suspect we could have something CORBA-ish working over the Internet if we can get the technical design right.


That's true, but my impression back when I was using CORBA and EJB and Java RMI was that nobody really cared about remote object identity and just wanted a convenient RPC solution, i.e. use EJB stateless beans instead of stateful beans.

So while CORBA could do a lot more, it wasn't necessarily being used that way. At least not anywhere I saw; maybe there were shops out there doing more exotic things.


I don't think of gRPC as object oriented, but I don't have a ton of experience with it. It seems like a service oriented system, with endpoints that accept parameters & return data. There's not, to my knowledge, much in the way of enduring objects.

You're not going to have dozens of Todo objects that you are calling methods on. You're going to have a TodoService that has some methods. This isn't very object oriented; the things sent in & out are just dumb data, it's just request/response.

I think cap'n'proto gets a little closer to having an actual notion of objects, in that the protocol has actual identifiers built in. But it's still more about services than objects.

CORBA seemed to really be object oriented; you'd get back todo1 and call todo1.done().


> I think cap'n'proto gets a little closer to having an actual notion of objects, in that the protocol has actual identifiers built in. But it's still more about services than objects.

Cap'n Proto is all about the objects! I would say this is the whole point of the design, and my main motivation in creating it.

(Unlike CORBA, though, Cap'n Proto does not try to pretend that RPCs are equivalent to local calls. In particular it has promise pipelining to compensate for latency, rather than pretending there is no latency. https://capnproto.org/rpc.html)
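
Very roughly, and explicitly not the real Cap'n Proto API, the pipelining idea looks something like this in TypeScript terms: a call hands back a reference you can call through immediately, so dependent calls share a round trip.

    // Conceptual sketch only; names and shapes are illustrative, not Cap'n Proto's API.
    interface PiePipeline {
      getSize(): Promise<number>;
    }

    interface BakeryClient {
      // Returns a pipelined reference that can be used before the result arrives.
      getPie(): PiePipeline;
    }

    async function pieSize(bakery: BakeryClient): Promise<number> {
      // getPie() and getSize() can be sent together; the server resolves the
      // intermediate reference locally, so the client pays one round trip.
      return bakery.getPie().getSize();
    }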


gRPC isn't a simplified CORBA, it's almost a complete copy of ONC-RPC, the backbone RPC protocol for NFS and other OG Unix services from the 1980s.


> I do wonder why it took us so long to get back to that baseline of functionality.

Scaling networks provided more value, and HTTP over networks was always going to work, so protocols over HTTP just had to catch up feature-wise.


Why no type checking? Generate an Open API client for target languages. Done.

Assuming you need an API. I'll never build an app that way again, it's not worth my hands. I'd rather use Hotwire, or LiveViews, or Django's Unicorn for the web part. Zod looks nice too.


The problem isn't so much with checking as with what kinds of things you can check. OpenAPI offers a very weak type system -- so bad it's almost not worth using. Not only that, it's designed for JSON, which is an awfully bad format -- so even if you could check it somehow, you'd still be very limited by what you can actually send.

Also, yeah, all the kinds of hot garbage you listed will work on the Web, because the expectations for the quality of the output are so low... We have come to expect the Web to be the dumpster fire of programming. Most unfortunately, this kind of garbage spreads into other areas, because the Web is a universal interface to many things and there are plenty of programmers who know how to do it.


Reactive UI development is hot garbage now? I think not :)


> Why no type checking? Generate an Open API client for target languages.

I dunno. Why aren't more people using OpenAPI?

What I see in practice is that almost everyone is using JSON as the transport mechanism and not generating that JSON serialisation code from OpenAPI specs.

The type-checking in CORBA was not opt-in, it was mandatory.

Type-checking in browser fetch() calls is optional. To opt in you have to go outside of the standard.
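
As a minimal illustration (the Pie type and endpoint are made up), this is about as far as "typed" fetch() gets without going outside the standard:

    // Sketch only: the shape is asserted, never verified at runtime.
    interface Pie {
      id: string;
      size: number;
    }

    async function getPie(id: string): Promise<Pie> {
      const res = await fetch(`/api/pies/${id}`);
      if (!res.ok) throw new Error(`HTTP ${res.status}`);
      // The cast is a compile-time promise, not a runtime check; nothing here
      // verifies that the server actually returned something shaped like Pie.
      return (await res.json()) as Pie;
    }

Anything stronger, like runtime validation or a generated client, is exactly the opt-in step outside the standard.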


We use OpenAPI/Swagger specs whenever possible for new projects and integrations (some usages or integrations are older).

Generating an OpenAPI/Swagger spec in .NET is as easy as telling a lib like NSwag to generate and publish documents from your APIs (and you can filter to only show public APIs in the docs). Probably as easy in Java, Go, etc.

Consuming services can usually be done almost as easily by downloading the spec and pointing a code-generator to it.

HOWEVER, the mandatory vs opt-in part is why we are stuck with JSON-based APIs: consuming/providing the typings becomes extra work when interfacing/playing with an API from any dynamic language such as JS, Python, Ruby, etc., whilst from typed languages you really only pay a relatively minor performance penalty, one that people won't care about in dev up until it hits in production (assuming you have a modern JSON serializer library built for speed).

Anything resembling "bloat"/complexity (sadly, types are in this category from the perspective of many anti-TS JS developers, especially as API typings still won't come from the host language but have to be provided separately) will be off-putting and lead to any such spec being ignored by a substantial chunk of developers.


OpenAPI isn't worth using. It's so pathetically bad that it's not worth the effort.

Those who use it (e.g. Kubernetes) do it for show, to put another badge on their GitHub repo page, to sound more sophisticated and "in the know" than they actually are.


From my experience, code generators aren't stable and reliable enough. Whenever I was on a project where either side (backend/frontend) generated code from an OpenAPI/Swagger spec, it broke at some point - weeks after being introduced - because of some new feature/syntax used in the specfile.

Given that they produce code that is unmanageable and unreadable by humans, ejecting was a terrible solution.


Out of curiosity, what frontend/backend languages were involved in this? OpenAPI/Swagger has various functionality for supporting polymorphism, but in my experience this is usually where problems crop up, due to the mismatch between how languages support it.

In our setup, the server is usually C# and we restrict the APIs to primitives, records/objects (without inheritance), and lists of the previous. None of this has ambiguous mappings when it comes to the frontend (TypeScript) side. The TypeScript side then mostly benefits from client-side typings for developers, but actual validation isn't always done if we only have SPA consumers (incompatible upgrade -> new cloned endpoint for the duration of the upgrade period).


It was mostly TypeScript and C#. Stuff gets worse when you generate a Swagger specfile from code and then try to feed it into any other generator.

But when hand-writing OpenAPI specs, there are a lot of features that are not understood by the generator or that lead to subtle bugs. Stuff like intersections, mixins (oneOf, allOf, anyOf, not), and union types (with named discriminators) are a thing in OpenAPI but are kind of hard to map to C# or Java. And the way you can map them is strongly opinionated and may differ between generators (or their templates). Other stuff, like unsupported `$ref` statements, was also giving us a headache.
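
As a hedged illustration of where the mismatch comes from (types made up, not from the project above): the discriminated unions that oneOf plus a discriminator describe map naturally onto TypeScript, but have no single obvious shape in C# or Java, so every generator invents its own convention.

    // Illustrative only: roughly what a oneOf + discriminator becomes in TypeScript.
    type Payment =
      | { kind: "card"; cardNumber: string }
      | { kind: "invoice"; dueDate: string };

    function describe(p: Payment): string {
      // The compiler narrows on the discriminator for free here; a C#/Java
      // generator has to pick between inheritance hierarchies, wrapper classes
      // with nullable members, or untyped dictionaries.
      if (p.kind === "card") {
        return `charge card ${p.cardNumber}`;
      }
      return `bill by ${p.dueDate}`;
    }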

If you start out with generators and are willing to compromise on API cleanliness (i.e. work around bugs by changing the OpenAPI spec when one arises in the generator), you might be fine.

But in cross-department or even cross-company teams, with hand-optimized API definitions (specfiles), I wouldn't take the risk. I cannot emphasize risk here enough. Yes, you might be fine for a while. But then you receive a new version of the specfile, and suddenly the generated code breaks and won't even compile. This is a huge problem. You can fight with whoever gave you the specfile to revert the changes, but if it passes the OpenAPI test suite, you will have a hard time arguing. So you suddenly have to eject the generated code and hope you can make changes - and usually it's unmaintainable, unreadable generated code. Alternatively, this is the moment when you rip out the generator and re-implement everything by hand.

Just beware: when choosing a generator, you make a bet that it is mature and bug-free enough to handle all the cases that will come up during the project's lifetime - and you stake a significant amount of risk on that maturity. And if you know of any rock-solid, battle-tested generator for C# and/or TypeScript, please share. I only see either outdated ones with 100+ GitHub issues per year, or relatively new players that lack a track record. And neither is company/money backed, so you rely on someone's open-source work.


It helps a lot to keep your interfaces simple and flat. Probably too late for your project but I always advocate for this - then integrating with other systems and languages is much easier.


I fully agree with you. If you keep your API simple, the reasons for code generators disappear, and clients can quickly be drafted by hand.


In practice, SOAP was the one that brought the best functionality.

Which is telling, because it's a badly defined beast that mostly doesn't work. But it did do type checking, automatic (client) endpoint creation, self-documentation, reflection-based (server) endpoints, and support for lots and lots of languages (as long as you didn't use something like an array)...

People keep promising new protocols that do one or two of those things in the modern world, but they always fail to deliver. AFAIK the last promise was gRPC, but it doesn't even use a widely supported subset of HTTP, so it has no chance of ever going anywhere.


On the one hand the failure is legit: success foiled by rampant vendorization & byzantine, inscrutable tooling pumping out forsaken, impossible-to-introspect generated code.

But also, CORBA was, as far as I can tell, maybe the first programming tech to ever get canceled: the one everyone went Vampire Castle on & collectively decided to crap on & lambast. What before CORBA had such a loyal hate club, was such a bandwagon to disdain? It was a boogeyman tale, one we collectively whispered ourselves away from, even though most people had little direct sense of it.

It'd be interesting to try to dig up some of the old GNOME 2 efforts that did have CORBA stuff in various apps & systems. Try to find some authors to say how they felt at the time. Go look at the code & see how it was.

The idea of having objects that can connect across boundaries is indeed super interesting. I love webdev & it's great, but we have noticeably not gotten very far in 20 years. GraphQL was one would-be normalizing upstart, and ideas like resolvers have some merit, but it's still chiefly a state-transfer system (which also has a real-time subscribe mode). It's still not really a well-paved way to connect systems.

Even if we don't have distributed objects, the range of normative things we can do with resources on the web feels like it hasn't greatly expanded. Our efforts are still artisanal, handcrafted by each team. We need to start figuring out, even if not distributed objects per se, how to grow the capabilities of online/connected systems pervasively.


It solved a few problems, but the strictness of the bindings was not helpful. The C++-heavy tooling focus wasn't either, and don't get me started on the gap between promise and actual implementation. Once you dealt with different ORBs in a large-scale installation (read: 3G networks), it became madness. In my first job I fought it and kept it out. At my third it caught up with me, and I had the fun of ripping one vendor's ORB out and swapping an open-source one in, as the former did not work with our C++ compiler - and we were a tier 1 telco vendor and the ORB vendor was on site with the source code. Total mess.


Surely anything over a network is async? Unless you block on every call, which REST doesn't stop you doing.


> Surely anything over a network is async?

Yes, but not everything should be. Synchronous RPC works better than async RPC, with fewer failure points, simply because there's only a single line that is the point of call. Async RPC quickly devolves into a hairy mess.

> Unless you block on every call, which REST doesn't stop you doing.

But the current standards do. If you want `fetch()` to be synchronous, you have to wrap it and fake it.


If you want fetch to be synchronous, why not just write await in front of it?

RPC is the same, except you can't opt for async when you want it.
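
For what it's worth, a tiny sketch (endpoints made up) of what that looks like in practice: with await, the calls read top to bottom like blocking RPC, even though each fetch() stays asynchronous underneath.

    // Hypothetical endpoints; the point is only the shape of the code.
    async function pieSizeByFlavor(flavor: string): Promise<number> {
      const pieRes = await fetch(`/api/pies?flavor=${flavor}`); // reads like a blocking call
      const pie = (await pieRes.json()) as { id: string };

      const sizeRes = await fetch(`/api/pies/${pie.id}/size`);
      const size = (await sizeRes.json()) as { size: number };
      return size.size;
    }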


And here I am learning an Erlang language…



