Rome will be written in Rust (rome.tools)
122 points by steveklabnik on Sept 21, 2021 | 99 comments


The authors mention they will be more productive with Rust.

I'm no Rust expert and I take their word as trustworthy, but nonetheless, they won't build Rome in a day.


It's also interesting because the author of esbuild actually started out with Rust and eventually switched to Go because it was more productive. I think that was mainly due to how much garbage a compiler generates and therefore how much a garbage-collector helps with ergonomics, but still, interesting data point

Edit: I'm not saying the Rome devs necessarily made the wrong decision. I really like Rust and I dislike some of Go's major choices. But I'm also a pragmatist and thought this would be an interesting point to discuss.


Rust's pedantic safety rules (enforced by the compiler) give you a hard time if you expect to be productive quickly. It's as if you are learning programming a second time in your life, but this time with a different focus. Understanding that you did dangerous things in the past and are now forced to avoid them is not an experience for everybody. It is much easier if you have a mentor or actively seek out the helpful online community, by the way.
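A tiny sketch of the kind of thing the compiler pushes back on (hypothetical snippet, not from any real codebase): a value is moved, the old binding becomes unusable, and the compiler forces an explicit choice between borrowing and cloning instead.

```rust
fn main() {
    let data = vec![1, 2, 3];
    let moved = data; // ownership moves here; `data` can no longer be used
    // println!("{:?}", data); // error[E0382]: borrow of moved value: `data`

    // The compiler forces an explicit choice instead: borrow or clone.
    let borrowed = &moved;      // shared borrow, no copy
    let cloned = moved.clone(); // explicit deep copy
    println!("{:?} {:?}", borrowed, cloned);
}
```

In C you could keep using the old pointer freely; here the "dangerous thing you did in the past" is simply rejected at compile time.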

About Go, well, if you are not targeting C-like performance and not looking for pedantic safety, maybe it might be a better option.

Interesting is that the wise people behind both languages decided against inheritance and deal with it in their own ways. It's worth giving Rust and Go a chance imo. Especially as a C/C++ developer you can learn a lot and get better when coming back with more sensibility about ownership, lifetime, and composition. It will make you grow as a programmer, especially Rust if you ask me.


It's not my area at all, but to my non-expert knowledge the differences between generics and class-based inheritance can border on semantics at times.

The high-level concept I always use to think about it is "coupling."

Highly coupled data structures, people applying the very complicated solutions from Gang of 4 where it isn't needed.

That's the core problem, and Go solves for the problem of "programmers misapplying highly-coupled data structures and doing bad things with Gang of 4 patterns" by explicitly removing that language feature.


Yes and I like inheritance being removed in Rust too.

I too think tight coupling is a core problem of many programs, not only introduced by inheritance, but also by multiple mutual references, for example in global-state-based C architectures (most I have seen; the Linux kernel is better in this regard, it's used very responsibly there).

Dependency Inversion (by interface reference, generics, or any other technique) is the one pattern that is missing from the Gang of Four book and should be the most remembered, not Singleton (the tightest coupling one can introduce in an OOP system is by introducing Singletons).

With loose coupling you can run the very same application with a few switched-out dependencies on your bare-metal embedded device with an SPI-driven display, on a PC with minifb, as a web service, and with mocks in unit tests.
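As a sketch of that dependency-inversion pattern in Rust (all names here are illustrative, not from any real project): the application logic is generic over a `Display` trait, so the same drawing code can run against an SPI driver on bare metal or a mock framebuffer in a unit test.

```rust
// The application depends only on this trait, never on a concrete driver.
trait Display {
    fn draw_pixel(&mut self, x: u32, y: u32, on: bool);
}

// A mock implementation for tests: a framebuffer kept in memory.
struct MockDisplay {
    pixels: Vec<bool>,
    width: u32,
}

impl MockDisplay {
    fn new(width: u32, height: u32) -> Self {
        MockDisplay { pixels: vec![false; (width * height) as usize], width }
    }
}

impl Display for MockDisplay {
    fn draw_pixel(&mut self, x: u32, y: u32, on: bool) {
        self.pixels[(y * self.width + x) as usize] = on;
    }
}

// Application logic is generic over the trait: the same code would run
// against an SPI driver on bare metal or this mock in a unit test.
fn draw_horizontal_line<D: Display>(display: &mut D, y: u32, x0: u32, x1: u32) {
    for x in x0..=x1 {
        display.draw_pixel(x, y, true);
    }
}

fn main() {
    let mut display = MockDisplay::new(8, 8);
    draw_horizontal_line(&mut display, 3, 1, 6);
    let lit = display.pixels.iter().filter(|&&p| p).count();
    println!("{} pixels lit", lit); // 6 pixels lit
}
```

Swapping in a real SPI-backed implementation only requires another `impl Display`, leaving the drawing logic untouched.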

I think it's preferable to always aim for loose coupling to enable testing, easier developing and later change.

In that respect, "Agile Software Development: Principles, Patterns, and Practices" is a much better old-school programming book than "Design Patterns" imo (granted, the latter is a decade older and the Gang of Four could not have known).

Languages that make tight coupling harder definitely teach a good lesson to learners, even if they then go back to their day job in another language. Thanks for pointing that out.


It's funny because I am so far from the OOP world right now, I'm barely qualified to even comment on this topic given how little OOP I've done in the last 3 years.

I've literally never done this stuff, but the idea of "coupling causes problems" is hugely important in my domain.

I simply think it's important to understand the actual root cause of concepts, and put it into words.

Clearly defining your ideas in concrete terms like "coupling" and "K.I.S.S." is really critical to the big picture. Having them as "fuzzy ideas" in the back of your head isn't good enough.

Plus coupling as an issue is REALLY clear if you have a good mental model of "every single object is actually a line of hexadecimal going into the evaluator,"

In which case "a huge bunch of pointers between hexadecimal blocks is bad" makes perfect sense.

I guess the other high level concepts would be stuff like "naming things is hard," "pointers = bugs," "design by iterating on MVPs," "mock tests are never useless, write them every time because they *force* you into single responsibility functions."

But I'm still figuring out my bank of these ideas.


As Rome was not built in a day, at least they can't complain about taking forever to build.


No one should complain if rewrites blow up, because they often tend to: https://www.joelonsoftware.com/2000/04/06/things-you-should-...

Requires more discipline, patience, and humility than most teams could muster up.


Take my upvote and get out! Normally Reddit-style puns are not welcome here, but this one is too good and on point. In all seriousness, it is a valid concern that not only is Rust not the native language for this JS team, it is also a language with a steep learning curve of its own.


Ha ha, thanks for the appreciation!


I see a lot of discussion on Rust vs $LANG, but nobody seems to be discussing what I think is the meat of this announcement:

    1. Fewer tradeoffs for third-party dependencies
       Most JavaScript/npm packages have to balance a lot of different concerns for many different kinds of users. They are forced to make tradeoffs around code size and performance that we aren’t interested in. Rust crates, on the other hand, are generally making tradeoffs that much closer align to our needs.
    
    2. Correctness is built-in to the standard library and in most popular crates
       We were creating our own APIs that focused on correctness instead of using third-party JavaScript dependencies. Rust, and its community, places a lot more focus on correctness without paving over edge cases that we need to worry about for Rome.
    
    3. Trait/Module system allows us to make better use of dependencies
       Rust’s trait system is a powerful way to create abstractions over any piece of data without overhead. It allows us to deeply integrate third party libraries. It also allows libraries to create APIs that are much more incremental, safely exposing more surface area without creating a need to make breaking changes.
My takeaway here is that the Rome team was getting tired of rewriting stuff from scratch but the nature of the JS/NPM ecosystem made it hard to use 3rd party code without losing control over Rome's performance.

I'm not sure how valid this claim is, but it is well known that the history of NPM modules puts the ecosystem in a weird place where libraries are expected to optimize for usage on both server environments (Node, Deno, etc) and browser environments (Firefox, Chrome, etc).

This tradeoff is not without costs--I believe it's one of the big reasons we see dependency bloat in the NPM ecosystem. I'd love to see more discussion around this topic and what other folks' experiences with these kinds of problems are.


Something I didn't mention in the post that might deserve calling out specifically is that Rust maintainers actively decide to work in crates instead of in std for many things.

This means that you have the language designers themselves maintaining some of the most important crates in the ecosystem, and baking it into their process to iterate on the language itself. The result is really high quality crates and the community is better for it.

I'm not trying to dunk on JavaScript/npm packages, though, or anyone involved in developing the language. Clearly the JS community has found success and is meeting a lot of people's needs. We can accept that Rome's requirements are different from those of most people using JavaScript.


Hi Jamie! I thought the article was pretty thoughtful and I'm sorry it's gotten such a mixed response in the comments here.

It was interesting seeing your commentary on Red-Green trees and how they're used. I'm excited at the prospect of a rust-analyzer level of type-checking and correction for TS/JS. As a technical security person, I also see a lot of potential for such tooling to do things like data-flow analysis, which is pretty much the gold standard for detecting things like untrusted input or dangerous interpolation.

Are there plans to support that kind of security tooling in Rome? If not, is there any interest in supporting something like that in its eventual feature set?


One big issue with NPM is that JS stdlib is extremely bare bones and the ecosystem swings far into the other direction to compensate, which leads to many libraries pulling in convenience utils that pull in their own convenience utils ad infinitum. So you, as a consumer of a toolkit like Rome, might be forced to download crap like function-bind@1.0.1 even though that is only pulled in as a deep transitive dep for some ancient obscure thing that wanted to support IE6 or something.

With Rust and other compiled languages, this isn't as much of an issue since even a "bloated" binary for a single architecture is still orders of magnitude faster to download than a constellation of packages each supporting an arbitrary subset of Node runtime versions.

With this said, there may also be a novelty factor at play here. Rust can definitely be considered the "new shiny thing" for this team, and with only a few weeks of Rust under their belt, they might still be well within the honeymoon phase. I can think of at least one major aspect in this space that becomes harder with a move to Rust: the only fully complete TypeScript type-checking implementation in existence is written in JS/TS, and I expect that type-aware tooling is going to sprout there first. The TypeScript team in particular seems uniquely positioned to deliver some sort of type-aware compiler mode similar to Google Closure's advanced mode.

In comparison, IMHO it's not a super compelling story for Rome to say they're going to provide a fast, unified web dev toolkit that... doesn't typecheck (due to poor interop with the JS ecosystem/community). The esbuild ecosystem is already there today and forging ahead with the force of the OSS community behind it. Bun.sh is taking a completely different approach by standing on the shoulders of giants: last I heard, they were leveraging the ubiquity of C ABI - via zig - to leverage JavascriptCore, and looking into V8 bindings, and focusing on lightning fast hot reloading instead of reinventing parsing. Rome seems to be approaching implementation from a first-principles angle, which seems most similar to esbuild's approach, though that first-mover advantage is long gone.


Hi, author of Bun here

> they were leveraging the ubiquity of C ABI - via zig - to leverage JavascriptCore, and looking into V8 bindings, and focusing on lightning fast hot reloading instead of reinventing parsing

This isn’t entirely correct. Bun embeds JavaScriptCore similarly to how Node.js embeds V8. Bun doesn’t use JavaScriptCore for parsing and there are no plans for V8 bindings.

Bun uses a new JavaScript parser based on esbuild’s JavaScript parser but ported (by me) to Zig. There are very few large external libraries in use by Bun — just JavaScriptCore.

Extremely fast hot module reloading is a priority though :)


Rust's stdlib is similarly bare-bones. Rust/Cargo has the same mindset of many small packages as JS/npm. The difference is that in JS abstraction and indirection has a run-time cost, so doing things via packages adds overhead. In Rust, you can have zero-cost abstractions that optimize away. The runtime and code-size cost is often the same whether you call code from a generic 3rd party crate or write it yourself inline in your function.
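A minimal sketch of what "optimize away" means in practice (illustrative names, no external crates): a generic helper with a trait bound monomorphizes to essentially the same machine code as the hand-written loop, so pulling the abstraction from a crate costs nothing at run time.

```rust
// A generic helper, as a third-party crate might provide it.
fn sum_of_squares<I: IntoIterator<Item = i64>>(values: I) -> i64 {
    values.into_iter().map(|v| v * v).sum()
}

// The hand-inlined equivalent you might write yourself in your function.
fn sum_of_squares_inline(values: &[i64]) -> i64 {
    let mut total = 0;
    for &v in values {
        total += v * v;
    }
    total
}

fn main() {
    let data = vec![1, 2, 3, 4];
    // Both paths produce the same result, and after monomorphization and
    // inlining the compiler typically emits equivalent code for both.
    assert_eq!(sum_of_squares(data.clone()), 30);
    assert_eq!(sum_of_squares_inline(&data), 30);
    println!("both paths agree: 30");
}
```

In JS, the generic version would go through closures and dynamic dispatch at run time; in Rust the iterator chain compiles down to the plain loop.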


Related: Next.js is rewriting its core with a Rust-based compiler. SWC (JavaScript and TypeScript compiler written in Rust) will replace two toolchains used in Next.js: Babel for individual files and Terser for minifying output bundles. Early tests are showing twice as fast builds, with further optimizations to come. We're betting on Rust as the future of JavaScript infrastructure.


Super exciting, thanks for sharing!


I think tools for a programming language should be written in the said programming language if possible.

Rome developers will not use Rome to develop Rome, and Rome users are less likely to contribute because they are more likely to be web developers unfamiliar with Rust.


That's a nice ideal, but esbuild has managed to reduce build times 100x by building JS tooling in Go. Given how widely used JS is, the amount of time that would be saved by faster tools in this area is humongous.


esbuild is the current darling leading the pack, but there are also various other projects in the space (swc[0] is written in Rust, fjb[1] is written in C, bun[2] is written in zig, leveraging JavascriptCore's engine).

The most significant performance-oriented effort in this space still leveraging JS that I know of is kataw[3], and while that's quite fast compared to babel, it's still within an order of magnitude of babel. Kataw itself is a CST-based implementation that was created to outperform seafox (an AST-based parser by the same developer), which itself is an iteration over several other perf-oriented parser projects.

Babel gained popularity due to the crazy amount of churn in grammar over the past few years, but more and more I think the dust is settling, and flexibility is no longer the name of the game, making an AST-based implementation less appealing. The Rome team must be feeling the heat if the data structure design choices are being informed by performance. I highly doubt someone will be able to compete in performance using a JS implementation in today's landscape.

[0] https://github.com/swc-project/swc

[1] https://github.com/sebbekarlsson/fjb

[2] https://bun.sh/

[3] https://github.com/kataw/kataw


Is there a benchmark and comparison of these new tools somewhere?


I jumped from C# to Ruby to JS/TS to Go easily. Rust has taken a lot longer to learn. Today, if I need a statically compiled binary, I use Go, simply because I can crank out a solution quickly.

My point is that Go seems like a reasonable choice, if you want to easily onboard developers who are familiar with JavaScript. I don't think Rust would be as accessible.


I personally find Rust easier than Go. The compiler helps more, rust-analyzer is way better than the Go language server, the Rust docs are great, package management is better, and the richer type system means that it's usually easier for me to "glue the types" in my head to know what I need to do. Of course that's not universal, some people will find Go easier, some will find something else easier. But my point is just that some people will find Rust accessible.


I wonder if perhaps your OOP background is holding you back with regards to Rust. I see this quite a lot in /r/rust. People from a c#/Java background struggling.

I personally learnt Rust quite quickly and easily coming from a JS background. There was some new stuff to learn, but the Book covered that well, and what I consider idiomatic JS - lots of functional iterator chains, minimal mutation, duck typing, etc. - translated into Rust pretty much 1:1. Cargo is also familiar to the JS dev, and the availability of high-quality libraries means that a beginner Rust dev doesn't need to learn the most complex parts until later.
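For instance, an idiomatic JS chain like `users.filter(u => u.active).map(u => u.name.toUpperCase())` maps almost 1:1 onto a Rust iterator chain (hypothetical names, just for illustration):

```rust
struct User {
    name: String,
    active: bool,
}

// The same filter/map/collect shape a JS dev already knows,
// just with explicit types and borrowing.
fn active_names(users: &[User]) -> Vec<String> {
    users
        .iter()
        .filter(|u| u.active)
        .map(|u| u.name.to_uppercase())
        .collect()
}

fn main() {
    let users = vec![
        User { name: "ada".into(), active: true },
        User { name: "bob".into(), active: false },
    ];
    println!("{:?}", active_names(&users)); // ["ADA"]
}
```

The main new concepts are `&[User]` (borrowing a slice instead of passing an array by reference implicitly) and `collect()`; the data-flow thinking carries over unchanged.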


I learned OCaml, Erlang, and Clojure easily enough. I generally prefer a functional style, so I don't think that's it. Haskell and Rust are the two most difficult languages I've learned.

That said, Rust has come a long way in terms of ergonomics.


There is a big difference between writing Go vs Rust. Any intermediate/advanced webdev will feel competent in Go in a matter of days. This is definitely not true with Rust.


esbuild is also basically becoming the new standard, taking the crown from Webpack.


ESbuild currently replaces Babel in most workflows. Webpack is still used as the bundler. My current gripe is the lack of Babel macros support in ESbuild. I need them for translations. Afaik ESbuild does not have macros.


Any project / tool should be written in the language most suited for the problem, while also pragmatically factoring in prior experience of those who would do said work.

Is there really a significant overlap between programming tooling usage, and development of the programming tools?

The case where I see the most rampant abuse of this is npm/node command-line tools: written in/for an environment because they are mostly used by people in that environment, not because it is particularly suited for the job. If it weren't for Docker, all those tools would be tied to an exceptionally messy runtime, and wasted effort. They also won't be included in any normal distribution. It makes any tool written in Node come across as an amateurish, hacked-together attempt, regardless of how well-made and robust it is. I'm sure there are good reasons for it, beyond my understanding, since Node is something I've always avoided due to red flags all around.

But, I digress. If the memory and runtime model isn't suited for the requirement of the tool, and language doesn't allow for a paradigm that matches well with the problem domain... You've picked the wrong language.


This justification -- Rome should be written in JS, otherwise Rome users are less likely to contribute -- irresponsibly focuses on a secondary goal of the project at the cost of the primary goal, which is to be an end-to-end toolchain for one of the most popular languages in the world.

Using Rust promises performance and stability, key features for the thousands of engineers that will be running Rome as a critical part of their workflow. The vast majority of those developers would prefer that the tool runs faster and has fewer errors over having a lower barrier to entry for contributing.

"Contributing" may also encompass the plugin/extensibility ecosystem around the tool, and can be closely tied to the underlying implementation language. This functionality has been integral to the growth of things like Babel and Webpack, but it seems in this case Rome is trying to replace much of that very ecosystem.

I think there are some toolchains that may prioritize "barrier to entry for contributions" -- new languages, or niche communities, for example -- but I don't think it's an appropriate priority for a toolchain for a massively popular, established programming language with $4.5 million in funding [1].

[1]: https://rome.tools/blog/announcing-rome-tools-inc/


Disagree - I think mypy for python and esbuild for js are great examples of why this is not always the case.


Maybe so for small languages/projects, but it won't matter if it gets popular enough. Some of the biggest languages, like Python and PHP, still have their core components written in C, as opposed to some "write the core in our language, then create a transpiler" approach. I'm sure speed being the focus instead of DX (developer experience) will be appreciated by all the people who will never touch Rome except as end-users.


I tend to agree, but in this case I think it's okay. The runtime of the web (the primary target of most js today) is a very different animal to more traditional runtimes. It is a "gooey" place and highly specialized and just...weird. What other environment pulls components and data from around the world at runtime?

It's also tempting to think that Rust is just better than Javascript for the web too, and you can target the browser with a WASM bundle. I think that's wrong, too. The developer story in the web is weird, but it works and you give up a lot by using WASM. I also think from a societal standpoint (un-obfuscated) Javascript is the healthier choice, since it lets people learn from each other. So, basically: yes, Rome, a dev tool, will be written in Rust, and it will help people write better Javascript, and that's okay.


> I think tools for a programming language should be written in the said programming language if possible.

This is a popular sentiment which I've never understood. Should all JS tools be written in JS? What if you already have an 80% solution written in F# thanks to some other work that can be reapplied to JS, should that solution be rewritten? Should we have IDEs predominantly written in their target language?


In a twist of irony, the most popular IDE for Rust is written in Typescript (but of course powered by rust-analyzer, written in Rust)


I think that's actually one of the sweet spots of Rust. I find it way more accessible to web devs, while C++ is scary for me. This in turn allows you to have very performant tools that are easily distributed as a single binary. Go has the same qualities, as ESBuild proved. Both also can be compiled to WASM, so that makes the distribution even easier.


Are you saying that we should have a JS-based JS engine in browsers? Sometimes performance is more important than hypothetical additional external contributions.


A JS engine in a browser isn't built for JS developers but rather every web user. A better example would be the TypeScript compiler, which is written in TypeScript and still very fast.


I'd argue the JS engine in browsers is _also_ built for JS developers. Even though we are definitely a minority compared to all internet users, it's important to underline that most of the stuff currently available online, for better or for worse, wouldn't exist without developers writing it. The average user couldn't care less about what languages websites are written in, but if browsers shipped only a Brainfuck compiler, I'm sure there would be a lot less development happening on the web. In some sense, JS developers are the target users of JS engines.

I think this is literally the first time I've heard anybody say that TypeScript is very fast. It's no coincidence that the top open issue in Deno's repo is literally titled "TypeScript compiler in Rust" [0]. The developer behind swc is even rewriting the whole type checker in Rust [1]. A lot of people, myself included, think that TS can get excruciatingly slow in a large-enough but not really huge codebase, in a way that probably wouldn't happen if it were written in something like Rust with a big focus on performance.

[0] https://github.com/denoland/deno/issues/5432

[1] https://github.com/swc-project/swc/issues/571


If supporting language parsers is not among a language's goals, what good does it do to write your parser in it? It certainly does not benefit from the extra focus that developers dogfooding the language would bring. On the other hand, there is a real chance that the dogfooding will be harmful and make the language lose focus.

If its goals do not include a good systems interface, the same applies. And JavaScript's goals don't include either.


As a JS developer, I don’t trust the users of the ecosystem enough to develop the tools properly in JS.


It seems like a lot of JS developers are finding shelter in TypeScript and Rust. I think this is good news. TypeScript has an awesome type implementation on top of what we already know, and Rust obviously has huge performance benefits over JS. I'd not had much strongly-typed or compiled language experience before, but recently started using both simultaneously for a new project and it's really been a great learning experience and stepping stone. I chose Rust over the team's familiar C++ for deterministic and stochastic model simulations and now they loathe working on former projects. The transition to TypeScript for those of us who were primarily web developers was pretty seamless and the exposure certainly made understanding features of Rust, etc. a lot easier.

I don't really see any serious negatives in this move. If the likes of Rust and Go pull more web developers into the world of safer and more performant languages and systems, what's the issue?


I wonder if they will write a TypeScript type checker in Rust? TypeScript currently has only one serious implementation that actually checks types (esbuild, for example, does no type checking). Getting another one written in Rust would be tremendously helpful for type-checking times!

But, on the other hand, if they don't write a type checker in Rust, we may still need `tsc`. Since Rome won't be written in JS/TS, I worry that calling the official TypeScript compiler from Rust may be prohibitively expensive, forcing users to run a separate `tsc` process. That goes against Rome's goal of having a single AST generated once by a single tool.


There already is one, and it’s used by Deno!

https://swc.rs/


That doesn't support type checking, see https://github.com/swc-project/swc/issues/571

You'd still need to run `tsc` to check types even if you used swc


oh, good point, apologies


This actually only builds and does not type-check, similar to esbuild. The author is working on a proprietary type-checker in Rust, though.


We use swc a lot but not for type-checking, that is still done by tsc loaded into a snapshot (precompiled v8 state).

There is a type-checker in the works tho https://stc.dudy.dev/docs/status/


A good indication of someone's lack of experience is hearing something along the lines of "I used X, but now I'll rewrite it in Y", and then they proceed to explain all the hypothetical benefits of the two birds in the bush, instead of the one in the hand.

After working on a project for a while, it's natural for people to start seeing inefficiencies or bad trade-offs in the code/architecture/language/project/etc. The first reaction is usually to think that if only they could throw it away and rewrite it, preferably in a better language, it'll be so much better, because it has to be, right? All those shiny new features will surely make your product so much better.

But it's never the tool that got you into that spot. It's you - your decision-making process and your ability to create a product. If the language is actually a problem, you made a bad decision by picking that language. If the code is bad, you didn't code it well. If the architecture is bad, you didn't design it properly. So unless you really learnt your lessons, changing the tech is just going to get you into a different swamp. Do not rewrite the project if all you can say is that you don't like the state it's in. Refactor.

If you know exactly what mistakes you've made and how to fix them, but fixing them requires a large portion of the project to be refactored, then that's a good case to say that since you've learnt from your mistakes you have a good chance of rewriting the project to fix those mistakes but _without introducing additional risk_.

If nothing else, changing the language discards your expertise in the previous language. There is also no reason to believe that changing the programming language will yield a better tool than before. Sure, you got some learnings from having written v1, but you're also throwing away everything that went right.

At the end of the day, just admit to yourself that you want to rewrite your project in Rust because you like Rust and want to write Rust code now. That's it. There is no need to publicly justify that it's such a great choice and that it will make everything better. It won't. Software is still software, and it's limited by your imagination and ability.

People have written excellent tools before Rust came along and will write them after Rust is long gone, because the language is a tool, and the only thing at the center of your project is you, alone with a text editor, wanting to make something happen. So make it happen.


> There is also no reason to believe that changing the programming language will yield a better tool than before.

Except similar tools like esbuild (Go) and swc (Rust) have achieved an order-of-magnitude improvement in performance compared to JS-based tools. This is critical because developers will prefer the fastest tools to get work done.

Guessing you didn’t know this and that’s why you confidently and condescendingly proclaimed that

> just admit you want to rewrite in Rust because you like Rust.


It's irrelevant whether it is in principle possible to create faster tools in other languages. Joe Schmo, who ran into a wall with his project in Node.js, is lying to himself if he thinks that rewriting the whole thing in Rust is going to magically solve the problems that drove his project into a wall in the first place. A bad workman blames his tools.


It's irrelevant according to you. That's not an accepted fact.


I find your comment puzzling. What specifically is not an accepted fact?


Sorry, should have been more specific, my bad. I mean this part in particular:

> It's irrelevant whether it is in principle possible to create faster tools in other languages.

I disagree that it's irrelevant. There's a ton of development tooling nowadays that could be much faster. The accumulated slowdown and thumb-twiddling that ensues when 20 junior JS devs have to wait a solid minute or two on their mid-range laptops can and has led to real productivity losses.


In this scenario, the team uses dev tool X for project Y. I'm not arguing that X shouldn't be faster, or even that it's not beneficial for Y to run faster.

My original point was that the team that worked on project Y so far is unlikely to do a better job in a completely new language, just because some other team managed to write fast software in that language, because the other team isn't the one working on project Y.

I didn't think this would be a controversial point. I probably should have phrased it better.


Now that you explained it I actually agree with you so maybe another phrasing was needed indeed. But maybe I'm just dumb and didn't get it earlier. :)

I agree the odds are against the team employing a new language. Far from impossible though. I've seen it successfully pulled off a number of times (I've seen plenty of unsuccessful tries as well).


> There is also no reason to believe that changing the programming language will yield a better tool than before.

Why would you assume that? Languages are not all the same, if only because (all else being equal) a compiled language will run much faster than an interpreted one. You may be saying that, e.g., given an O(n^3) algorithm, a faster language will not cover up the slow algorithm, which is true; but again, all else being equal, including the algorithm, some languages will perform better than others.

Indeed, already there are examples of JS tooling like SWC and esbuild (that are not built in JS) that are much faster than tooling built in JS like Babel and tsc.


This isn't a blanket statement about languages in general. We have information that Joe failed to deliver his project in language X. Given that information, there is no reason to believe that rewriting the tool in language Y is going to be successful, because the problem may not be the language. This has nothing to do with someone else being able to write a better tool in that language - it only proves that a better tool is obtainable in general. In my experience, in such cases, Joe typically fails to deliver his new project as well.


> We have information that Joe failed to deliver his project in language X. Given that information, there is no reason to believe that rewriting the tool in language Y is going to be successful

Correct, which is why further analysis must be done. It seems you disagree with the analysis of the Rome Tools team?


That is correct. In fact I submitted an article just the other day called "We rewrote everything in Rust, and our startup still failed [0]." However, again, all else being equal, a faster language is, well, faster than a slower one.

[0] https://news.ycombinator.com/item?id=28573965


This is not a jab at you or your team, but the title makes it sound as if you believed Rust guarantees business success?

(The link is dead btw, yields an empty screen. I'd be curious to read the article.)


The article is a satire piece. It's also not my article, I just found it on the web and thought it was funny. And the link works for me, you will have to enable JS though if it's disabled.


Ah. To me such articles are needlessly snarky and demeaning towards people who choose Rust (or any other $HOTLANG as those guys call it) for legitimate reasons and after a thorough analysis.

I am getting old, yeah.


A good indication of someone's experience is that they don't have hard rules and are able to analyze and see the nuance in each situation. This is to say, I wouldn't be so fast to judge.


At times, it's experience that allows one to judge fast.


A lot of people mistake "experience" with "becoming set in your ways as you get older", and you seem like one of them.


A wise man once said: "according to you. That's not an accepted fact".


Sure. We're allowed to disagree, both of us from our own point of view.

I still think you are too cynical in this instance but obviously won't try and make you change your mind. You do you. :)


Always excited to see more Rust usage, and this was actually a super interesting post on modern compiler design, but I failed to see how the two topics were very related? The post felt weirdly disjointed


Rome wants to be comprehensive js developer tooling, replacing the current panoply of eslint, webpack, and so on. They've determined Rust is a better language for writing that kind of tooling.


Right, I know that/it's clear, I just thought it odd the way the post jumped between these two different aspects of the project that are only loosely-related at most


I'm really excited about this direction. A few years ago I asked the question "who will be brave enough to write JS tooling not in JS", and it seems a lot of projects now exist.

That said, I do not believe Rome will be the winner here. If they were willing to undertake this giant project initially _without_ the benefits of using a faster language, I'm not sure what the vision was.

Really excited about esbuild, swc and bunjs


I suspect the focus was initially on creating a coherent toolset, but that is not going to be sufficient for Rome to compete performance-wise with current tooling, hence the switch to Rust.


> We quickly realized we might actually be more productive in Rust.

The compile times will catch up with you soon enough! Enjoy it while iteration is still fast! ;)


I'm sorry for this project. Such an early rewrite will probably be the end of it. I hope I'm wrong.


I wonder why they didn’t go with OCaml, this type of task seems like it is the bread and butter of the ML family.


Too esoteric.


Why not close the shop and join deno?


Is Deno trying to solve these problems (Linting, Formatting, Polyfilling and bundling)?

I was under the impression that Deno was a node-like runtime using Rust bindings to libuv instead of C++.


Deno linter: https://deno.land/manual/tools/linter

Deno format: https://deno.land/manual/tools/formatter

Deno bundle: https://deno.land/manual/tools/bundler

Not sure about polyfilling, but it might leverage TSC for that?

Added bonus is Deno compile, for creating standalone executables: https://deno.land/manual/tools/compiler


All of those packages are for running things in Deno or creating modules that will run in Deno.

Rome is for building frontend packages and apps.


Deno uses tokio instead of libuv. Deno comes with a large amount of tooling built in: formatting, testing, linting, documentation generation, and more.


Oh that's awesome! Thanks!


I'm not sure how moving from JS to a systems engineering language would make anyone more productive. I bet they just wanted to use Rust, because why not?


If you read the post, it’s because they’d made the decision not to use any third party dependencies in JS due to perceived problems with quality and correctness, and they found Rust crates to not suffer as much from these problems. So their productivity mostly stems from not having to rewrite their dependencies anymore, not so much just the switch to Rust itself but the whole ecosystem.


Why would it not make them more productive?


This reminds me of a blog post by 2ality [0] talking about how the rise of JS tooling in non-JS languages signals a future where JS becomes much faster, simply due to the languages being compiled and not interpreted.

However, I am also reminded of a post I recently submitted called "We rewrote everything in Rust, and our startup still failed. [1]" I don't necessarily think this is the case with Rome however, as they are actually solving a crucial need - performance - rather than most companies who'd undergo rewrites which don't focus on their customers' true needs but instead simply want to rewrite for the sake of rewriting, whether it's for fun or for keeping their devs happy.

I am curious though why they wouldn't merge with SWC which is also written in Rust, or other efforts to write JS tooling in other languages.

[0] https://2ality.com/2020/10/js-plus-other-languages.html

[1] https://outline.com/yr7ZBS


Whole Italy should be written in Rust. And the world is next.


Is anyone using Rome tooling for their project instead of eslint, etc.? Would love to hear some feedback.


I tried Rome's linter. I hit bugs with the CLI and the LSP server. I fixed and reported a few of the bugs, but eventually I gave up trying.


There’s not much the author listed that can’t be applied to typescript or any other JS compatible languages.

Rewriting their JS tools in a language as low level as Rust is asinine.

I have no doubt it’s a weak attempt to indulge in their fanboyism. I wouldn’t be surprised if this change causes massive technical debt, but people seem to love Rust, so there’s gotta be someone for this role.


First let me say I really dislike Rust. Personal preference.

However, having worked with Node for almost 10 years now, with a number of bundlers, most in Javascript but some in other languages, Javascript tools rarely win the performance race.

We switched our medium-sized app from Rollup (Javascript) to ESBuild (Rust). Our build times went from 5-15 minutes, to just under 1 second - with full feature parity.

I'm sure a lot of this boils down to algorithms and whatnot, sure. But I still would like to think it's in part the language choice, too.


Probably just a typo due to the topic of the thread, but esbuild is written in Go.


Yes, got it mixed up in my head :)


But how much do those type of performance benefits matter to JS tools like a linter?

The only positive they’re getting out of a native compiler language is unnoticeable performance boosts.


> But how much do those type of performance benefits matter to JS tools like a linter?

A lot, I don't see how this is really a question.

> The only positive they’re getting out of a native compiler language is unnoticeable performance boosts.

5 minutes to 1 second of build time for the same features isn't what I'd call "unnoticeable".


Spotting lint problems in real time as I type is way more useful than spotting them after I submit a pull request and wait 10 minutes for the CI system to kick in :)


Did you just call going from 5 minutes waiting to 1 second near-instant finish "an unnoticeable performance boost"?

lol if so. :D



