Things to think about for the near future of programming languages:
- The borrow checker in Rust is a great innovation. Previously the options were reference counts, garbage collection, or bugs. Now there's a new option. Expect to see a borrow checker in future languages other than Rust.
- Formal methods are still a pain. The technology tends to come from people in love with the theory, resulting in systems that are too hard to use. Yet most of the stuff you really need to prove is very dumb. X can't affect Y. A can't get information B. Invariant C holds everywhere outside the zone (class, module, whatever) where it is transiently modified. No memory safety violations anywhere. What you really need to prove, and can't establish by testing, is basically "bad thing never happens, ever". Focus on that.
- The author talks about "Java forever". It's more like Javascript Everywhere.
- Somebody needs to invent WYSIWYG web design. Again.
- Functional is OK. Imperative is OK. Both in the same program are a mess.
- Multithread is OK. Event-driven is OK. Coroutine-type "async" is OK. They don't play well together in the same program. Especially if added as an afterthought.
- Interprocess communication could use language support.
- We still can't code well for numbers of CPUs in triple digits or higher.
- How do we talk to GPU-type engines better?
I disagree with a few of your thoughts, but they're good thoughts!
* Javascript everywhere is a function of its low barrier to entry, but almost everybody agrees it is flawed as a language. If that's the future, we are screwed as an industry. One thing I've noticed (and I say this as a guy who wrote Ruby for 10+ years) is that type safety is becoming a hugely desired feature for developers again.
* WYSIWYG web design (a la Dreamweaver) died off a little because the tools saw a web page as standing in isolation. We know, however, that a page isn't interesting on its own - it needs to be hooked up to back-end functionality. Who is producing static HTML alone these days? In the case of SPAs it needs API integration. In the case of traditional web app dev, it needs some inlining and hooks to form submission points and programmatically generated links to other "pages". Making that easier is the hard part - seeing a web document as an artefact output by a running web application container.
* Multi-threaded, event-driven, coroutine-type patterns are fine in Go, to my eye (see the sketch after this list). What's making you think we can't mix these with the right type of language and tooling support?
* Is it that we can't code well for CPU counts > 100, or is it that the types of problems we're looking at right now that need that level of parallelism tend to be targeted at GPUs or even ASICs? I think I'd need to see the kinds of problems you're trying to solve, because I'm not sure high CPU counts are the right answer.
* Talking to GPU-type engines is actually pretty simple: we will deal with it the same way we deal with CPU-type engines, abstraction through a compiler. Compilers over time will learn how to talk to GPUs optimally. GPU portability over the next 20 years will be a problem to solve as CPU/architecture portability was over the last 40.
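To make that concrete, a minimal Go sketch (invented for illustration) of the three styles coexisting: goroutines for the multithreaded part, a `select` loop for the event-driven part, and blocking channel sends for the coroutine-style sequential part:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        ticks := time.Tick(50 * time.Millisecond) // an event source
        results := make(chan int)

        // Coroutine-style: sequential-looking code that blocks on channel sends.
        go func() {
            for i := 1; i <= 3; i++ {
                time.Sleep(80 * time.Millisecond) // pretend to do work
                results <- i * i                  // blocking send, roughly a "yield"
            }
            close(results)
        }()

        // Event-driven: one select loop dispatching on whichever event arrives first.
        for {
            select {
            case r, ok := <-results:
                if !ok {
                    return // worker finished
                }
                fmt.Println("result:", r)
            case <-ticks:
                fmt.Println("tick")
            }
        }
    }

Nothing here was bolted on afterward; goroutines, channels and `select` shipped together, which is arguably why the mix stays coherent.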
> Javascript everywhere is a function of low barrier-to-entry for it, but almost everybody agrees it is flawed as a language
'Everybody' here means everybody who has used other languages intensively, or is into programming languages, or - the parties most negative about JS - people who are into formal methods. But 'everybody'? I often get downvoted to hell for being negative about JS on Reddit. And I'm not swinging a baseball bat; I'm subtle about it, as I don't care for language wars. Use what you want, but please don't say it's the best thing to happen to humanity. But no: 'everybody' (as in headcount) thinks it is the best thing that happened to humanity and that other languages should die because you can write everything in JS anyway.
A modern JavaScript engineer would point out that:
- The community is very accepting of new engineers.
- The ecosystem is huge and there are great solutions available to many problems.
- It's easy to write consistent code while avoiding many problems with the language if you use `eslint`, `prettier` and `flowtype`. Tooling on the web is excellent and rapidly improving.
- You can use one language on the back-end and front-end. You can even use the same language to write desktop, embedded, command-line and curses applications.
- `babel` drives the adoption of new features into the language with greater vigor than many other languages.
- The VM is very fast due to lots of work by Google, Mozilla, etc. There are also methods of interacting with modules written in WebAssembly or, on the back-end, native modules written in Rust/C++.
- JavaScript is driving a lot of momentum for new programming languages (Reason, Elm, PureScript, etc.) since as a transpilation target it gives you access to many environments and libraries from the get-go.
In my opinion JavaScript in 2017 is a much better language than JavaScript in 2012. I wouldn't call it objectively good but I certainly wouldn't call it 'objectively bad'. There are many reasons to choose it beyond the syntax and semantics of the language itself, and there are effective treatments available for most of the warts it was once known for.
It's dominating because of the sheer drive within the community, and if there were award ceremonies it'd be winning 'Most Improved' year-upon-year.
How much of that is just solving a problem created by Javascript?
A community accepting of new engineers isn't. But it isn't something to brag about either, because it's never meant literally. Instead, people say a community accepts new engineers when it has standards low enough that newbies feel empowered without learning anything new. The one-language-to-rule-them-all mentality is also about low standards.
The ecosystem is just compensating for the lack of native features and bad overall design; so is the tooling. Babel is a great tool, but it's just compensating for the lack of a good VM in the browsers; ditto for all the transpiled languages.
Besides, no, JS VMs aren't very fast. They are faster than the slowest popular languages (Python, Ruby), but they can't get anywhere near moderately fast ones like Java and .NET.
Almost none of the things I mentioned are solutions to 'a problem created by JavaScript', so your question seems deliberately misleading.
An accepting community can be a large funnel: it doesn't need to mean that nobody improves or that there are no selection effects. More candidates means more chaff, but also more wheat.
Babel isn't merely compensating for a 'lack of a good VM'. It's more valuable to see it as a playground and a streamlined process for large-scale testing of language-level features. (It took on that role later; originally it was meant only to solve the problem of running modern code on older VMs.)
I guess you could argue that the VM should have been complete in 1995 but what programming language has managed this?
Also, I don't think that comparing JavaScript's VM to the VMs of statically typed languages is fair. You mentioned yourself that in comparison to similar dynamically typed languages it's faster. Compare apples to apples.
I don't think I'm bending the truth here. There's been a lot of investment into JavaScript, and not all of the problems that have been solved are obvious or easy.
At this point it is ascendant because it has effectively solved lots of problems that relate to speed of change, and it has done this seamlessly on a large number of platforms. I think people look at this and pattern-match to 'first-mover advantage' or 'JavaScript has a monopoly on browsers', but neither of these things was what got us to this place: it was innovation in maneuverability.
(It won't necessarily stay ascendant now that it is so easy to circumvent writing JavaScript but yet still plug into its ecosystem.)
> More candidates means more chaff, but also more wheat.
That needs substantiation. It's not immediately obvious in any way.
About Babel, notice how we never had a problem playing with language-level features that target the PC? That's the difference a good VM makes (although in the PC's case, it's not virtual). Besides, you are conflating VMs and languages somehow - they have only a tenuous relation.
About the speed, Javascript misses most of the expressiveness of Python and Ruby. It's more in line with Java and, if you want dynamically typed languages, with PHP and Perl. Yet it's not any faster than PHP and Perl. It has reasonably fast runtimes - those are not an issue for the language, but they are not a big selling point either.
Overall, the reason Javascript still exists and is viewed as a serious language is that it has a monopoly on the browsers. It has been worked on enough that if that monopoly goes away it will still be a good choice for some stuff, but it is not a stellar language, and it can't be coerced into becoming one unless it passes through a Perl 5-to-Perl 6 level transition.
>Overall, the reason Javascript still exists and is viewed as a serious language is all because it has a monopoly on the browsers.
Basically, this. ClojureScript and others are simply the efforts of sane, experienced developers that don't want to cope anymore with Javascript and its warts.
I don't think that comment requires more substantiation than yours that there are "low enough standards that newbies feel empowered without learning anything new". The more people that learn your language, the more likely it is that you will find some who can contribute to its state of the art. A low bar doesn't affect people who would surpass a higher bar with ease: they still wish to learn and sate their natural curiosities.
In fact, the normal stereotype of JavaScript developers as people constantly chasing new technologies and libraries is actually true, but what you are claiming is the exact opposite: "people feeling empowered without learning anything new". People are empowered and learning new things because there is a low barrier to doing so and this is exciting.
> misses most of the expressiveness of Python and Ruby.
> It's more in line with Java
Have you actually written modern JavaScript code? I personally think it's more expressive than both Python and Ruby, and certainly much more so than Java.
> Overall, the reason Javascript still exists and is
> viewed as a serious language is all because it has a
> monopoly on the browsers.
As it stands JavaScript doesn't have a monopoly on the browser. You can transpile ClojureScript, Elm, Reason, PureScript and many other languages to it. Yet -- surprise! It is still in use. Do you honestly think this is just inertia? I'd argue that it's investment in the platform itself and particularly 'innovation in maneuverability' (number of environments, speed of prototyping, backward compatible code, ease of hiring) which keeps developers using the platform.
In my opinion, the existence of NPM and a broad range of environments that you can run your code on will likely mean that JavaScript would still be a productive environment even if the web was to die.
>You can transpile ClojureScript, Elm, Reason, PureScript and many other languages to it. Yet -- surprise! It is still in use. Do you honestly think this is just inertia?
No, it isn't inertia -- one important reason is that creating a transpiler to JS invariably ends up with a crippled (feature-restricted), slower version of the original language (Clojure, etc.)
My point is that those transpiled languages aren't that stymied. You can use them without a lot of problems, and they're being left on the bench for reasons other than their feature set.
Companies generally choose JavaScript because there are lots of developers to choose from, it runs everywhere and the ecosystem is huge.
Engineers choose something like PureScript as it's not a 'blub' language and people think that by choosing it they will be able to hire (or be seen as) "math geniuses". I'm sure the feature set is important, but it's not enough to unseat a language with the previously described properties.
You can't change the rules for interpreting your source code so that the parser will expand a small command into an entire program, or so that it will read a more fitting DSL that does not resemble your original language.
You cannot inspect an object and change its type or its set of available properties based on some calculation.
You cannot run some code in a controlled environment separated from your main code.
> Even the most ardent JS fans I work with call it objectively bad.
Counterargument: read Douglas Crockford's "JavaScript: The Good Parts".
Even though this book is somewhat "aged" (it is from 2008), I know of no better book at presenting what is so interesting about JavaScript and how often this language is misunderstood.
If after reading this book you still consider JavaScript "objectively bad", so be it. But first read the arguments presented in it.
It is very telling that you don't see the irony of the fact that a language needs a book titled "Language X: the good parts".
Good tech mostly speaks for itself. When you need entire books to convince you that a language has good parts, then you know the language has a problem.
I can think of one: the fraction of design decisions that the language's creator, standardizers, and serious developers wish they could change but can't.
> but almost everybody agrees it is flawed as a language. If that's the future, we are screwed as an industry.
What language is not flawed? And why are we "screwed"? I don't get this FUD... there are more important things than language choice, such as the dependency management system + community + ecosystem. JS lets you get on with the job and get things done quickly. You need performance? Use C/C++ bindings. It's been clear for a long time that JS is the safest long-term choice, and it is slowly creeping into every other language's castle.
It doesn't bother you that huge portions of our tech stack are sitting on top of layers of terrible language design that our descendants will have to deal with? Perhaps the parent post is looking more to the future. Of course it all still works, but when you think we could be doing the same job with Lisp, Smalltalk, (insert any imperfect but much-better-than-JS technology), it does make me cringe a little.
Well, you were right about one thing: you don't get it. ;)
Just look at the prototype hell. Look at the quirkiness of the comparison operator. Implicit type coercion. The mess piles up extremely quickly.
"Disciplined devs don't make those mistakes" is NOT an argument and never was. Decades later, people believing themselves to be godlike C/C++ devs still make stupid mistakes.
But I guess learning from the past won't happen for people who fanboy and have a survivorship bias. And have money on the line.
It's flawed because 99.999% of the time your code will work just fine. Job done. Then one day, "undefined is not a function" and your airplane falls out of the sky.
This happens in all languages... look at the Toyota acceleration bug for a real-world example of your hyperbole. They were using MISRA C. Languages don't fix spaghetti, laziness, tight deadlines, bad engineers, etc.
Dynamic typing is objectively much more flexible and easier to prototype in. Static typing is much easier to build large and robust systems in. Eventually we'll get to the point where everything you can do in dynamically typed languages can be done in statically typed languages (more verbosely), but we're not there yet.
For example, functions that return a different type depending on the values passed in are a useful pattern, but not allowed in statically typed languages, except those with dependent types (Idris) or runtime downcasting (Go). There's always a way to achieve the same thing, but usually at the cost of safety or much more verbosity.
This is actually, IMHO, one way Go strikes a happy balance between static and dynamic typing. It provides a (runtime) safe and low-verbosity way to write dynamically-typed code, i.e. `interface{}`. Rust is slated to get something similar with `impl Trait` values.
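To make the pattern concrete, a small sketch (invented for illustration): one Go function whose return type depends on the value passed in, hidden behind `interface{}` and recovered by the caller with a type switch - the "runtime downcasting" mentioned above:

    package main

    import (
        "fmt"
        "strconv"
    )

    // parse returns a different dynamic type depending on the value passed in.
    func parse(s string) interface{} {
        if s == "" {
            return nil
        }
        if n, err := strconv.Atoi(s); err == nil {
            return n // an int
        }
        return s // otherwise, the string itself
    }

    func main() {
        for _, s := range []string{"42", "hello", ""} {
            switch v := parse(s).(type) { // the "runtime downcast"
            case int:
                fmt.Println("int:", v)
            case string:
                fmt.Println("string:", v)
            default:
                fmt.Println("nothing")
            }
        }
    }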
> For example, functions that return a different type depending on the values passed in are a useful pattern, but not allowed in statically typed languages, except those with dependent types (Idris) or runtime downcasting (Go). There's always a way to achieve the same thing, but usually at the cost of safety or much more verbosity.
Look at this verbose OCaml:
    type my_return_type = AFloat of float | AnInt of int | AString of string;;

    let my_fun n =
      if n < 0 then AString "negative"
      else if n = 0 then AFloat 0.0
      else AnInt n;;

    # my_fun (-4);;
    - : my_return_type = AString "negative"
    # my_fun 0;;
    - : my_return_type = AFloat 0.
    # my_fun 42;;
    - : my_return_type = AnInt 42
(But yes, to use these values you have to do pattern matching, and if you're in a FUD mood, that is "runtime downcasting" and incurs a "cost of safety".)
Yes, you can always generate a new union type for your return type and then pattern match on it. That is certainly better than C, where you must use unsafe tagged unions or void pointers. It is more verbose than in dynamic duck-typed languages, though I'll give you that it's quite compact in OCaml.
In a dependently typed language, you could reduce verbosity at the use site as well as avoid the extra runtime branch.
The verbosity is not in the function definition, it's in the match expression needed at _every single call site_. In Idris, the definition would be similar in length or longer, but call sites would not need a match expression.
> provides a (runtime) safe and low-verbosity way to write dynamically-typed code, i.e. `interface{}`
Since Go 1.9 was released with its new Type Aliases feature a few months ago, the verbosity is even lower. By putting `type any = interface{}` somewhere in your package, you can just write `any` instead of the verbose `interface{}` everywhere you want some dynamic typing.
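A sketch of the trick in use (note that Go 1.18 later made `any` predeclared in the language itself, so the alias became redundant there; `Lookup` is just an invented example):

    package mypkg

    type any = interface{} // an alias, not a new type: `any` and `interface{}` interchange freely

    // Lookup returns whatever was stored under key, with no wrapper type needed.
    func Lookup(m map[string]any, key string) any {
        return m[key]
    }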
The response to the recent Rich Hickey talk definitely points toward "dynamic typing as its own solution" (instead of a stop-gap until someone makes static typing less "verbose") being a popular view nowadays.
It's really not that bad. Modern JS isn't ideal, but many parts of it are as good as or better than Ruby or Python, and most people don't act like those are horrible languages. I'd gladly use it over either.
What causes you to say that JavaScript is a flawed language? Not trying to be snarky or saying you're wrong, just want to better understand your reasoning.
It seems to me that at one point JavaScript had a lot of confusing/bad design decisions but that more recent changes have largely eliminated them. For example, I almost never have to worry about "this" anymore.
I recently worked on a project using TypeScript, and I really appreciated how it changed a lot of the bugs from runtime errors to compile-time errors. I can definitely see how catching those bugs only at runtime is a big flaw, but it seems like the community is developing solutions.
I don't think "this" in JavaScript was a bad design decision. The bad decision was how functions create their own scope in unexpected situations. Also, I think modern JavaScript still requires a lot of "this," just much less ".bind(this)" or "var that = this."
But the thing is, just because arrow functions exist doesn't mean people are going to use them. Similarly, just because modern browsers allow "let" and "const" doesn't mean people will stop using "var."
I agree that JavaScript is generally fine if you stick to the new parts of the language, but that in itself is a pretty big problem. Maybe not for me and you, but for software development in JavaScript generally it's really not ideal.
About WYSIWYG, we are missing standards on our APIs. SOAP was going that way but it was way too much, way too early.
Either that, or an ASP.NET-style view where the backend interacts with the user through a browser. But that doesn't work well. It's much better to standardize the backend API than the entire frontend.
I have been thinking about that, and there is an issue with it; Delphi (and VB) were written in a time when devs paid a lot for software tools. Besides some niches (embedded), that is not really the case anymore. These days you expect to pay a few tens of dollars at most, if that. And a lot of people (but that might be the HN/Reddit echo chamber) demand that everything they work with be OSS as well. Making 'a modern Delphi' is a lot of work; years of it. And much of that time is not 'fun': it's hard work polishing little parts, having user test groups give feedback on it, and polishing it some more. The time when you could be Borland seems gone (unfortunately, imho) and I'm not sure how you could make the kind of polished tool you are talking about in the current climate. Maybe someone else here has some different views, though.
JetBrains is probably a good example of a "Borland" like company.
Outside the HN/Reddit bubble there are plenty of companies that are willing to pay for software; the supermarket cashier doesn't take pull requests.
Also, the back-to-native focus on mobile platforms, including Google having to integrate Android apps on ChromeOS, might make it less relevant, given that native IDEs do offer some Delphi-like experience.
JetBrains never did any rapid UI IDE like Delphi did. In fact, all their IDEs are in Swing, which is a mess. I'd totally love having CLion/PyCharm with Qt UI designer, but it's not going to happen.
The easiest time I had writing GUIs was when I used PyQt. I designed the UI in Qt Designer, loaded it in the Python code, set the bindings and voilà, it was working.
Btw, Visual Basic continues to exist as Visual Basic.NET, and if you stick to the basics you could learn to write C# GUI programs pretty quickly.
I agree writing GUIs by hand is very counter-productive.
Outsystems and JetBrains are examples of this, but on the other hand they are not: they are 'old'; both have existed since 2000, and at that time pushing into the market was a lot easier. I was thinking more of a company starting now, so I'll check out Anvil.Works. There are more new companies working in the space, for sure, but they all miss the breadth that Borland had (they really had a lot of cash and developers on hand in those days).
But yes, Outsystems (I worked with them and their product quite a lot in the past) could be considered a Delphi. Still not modern, though; it's rather painful building the apps/sites with it that people seem to want.
JetBrains can be considered a Borland; I didn't think of that because I consider them more in the space of 'low-level' programming tools (which, like you say, includes Delphi functionality, but a modern Delphi wouldn't be like the old Delphi; it would need a lot more innovation).
The crucial difference between Outsystems(/Bubble/etc) and Anvil is that Outsystems tries to be a "no-code" environment, and we think that's a mistake (or at least, a different market).
Delphi and Visual Basic proved that writing code isn't the problem - code is the best way to tell a computer what to do. But writing code in five different languages to produce "Hello World" on the web...now that will slow you down.
(Count 'em: JS, HTML, CSS, a backend language e.g. Ruby/Python, and SQL. We do everything in Python, which gets you going much faster. We just got back from PyCon UK, where among other things we got an 8-year-old building database-backed web apps in an afternoon. That's the sort of thing that used to happen with VB/Delphi.)
Funnily enough, I was looking at the title and thinking that in 25 years' time I will still be programming in the same three languages (C, Pascal/Delphi and SQL) I learnt 25 years ago.
Imagine WebComponents finally becoming mature, and having a programming environment for RAD on the Web, with a toolbox of drop-in components for doing application development.
> Functional is OK. Imperative is OK. Both in the same program are a mess.
I guess you're referring to languages that are not-quite-without-side-effects, but I'd say the biggest influence the functional paradigm has had on other (imperative) programming languages is actually the addition of higher-level data manipulation operations. The functional utility libraries you see for languages such as Python and JavaScript exist solely because sometimes a functional idiom like "map this onto this" or "take that only if this holds" or "let's make a new function by pre-filling these function arguments" is more intuitive than having for loops all over the place. And it mixes just fine with other imperative code.
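For instance, a minimal sketch in Go (using 1.18+ generics; `Map` and `Filter` are hypothetical helpers here, not standard library functions) of those idioms sitting on top of plain imperative loops:

    package main

    import "fmt"

    // Map applies f to every element, building a new slice (the loop is imperative inside).
    func Map[T, U any](xs []T, f func(T) U) []U {
        out := make([]U, 0, len(xs))
        for _, x := range xs {
            out = append(out, f(x))
        }
        return out
    }

    // Filter keeps only the elements for which keep returns true.
    func Filter[T any](xs []T, keep func(T) bool) []T {
        out := make([]T, 0, len(xs))
        for _, x := range xs {
            if keep(x) {
                out = append(out, x)
            }
        }
        return out
    }

    func main() {
        evens := Filter([]int{1, 2, 3, 4}, func(n int) bool { return n%2 == 0 })
        doubled := Map(evens, func(n int) int { return n * 2 })
        fmt.Println(doubled) // [4 8]
    }

The loops live inside the helpers; call sites read declaratively, which is exactly the mixing described above.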
Presumably animats is talking about things like "do notation" in Haskell, not innocuous cases of function composition or first-class functions in imperative languages.
I read it as being about Scala and similar languages, where you cannot freely rewrite function compositions because they may have side effects.
It may not be a big problem from the programmer's point of view. But the compiler surely suffers because of that.
Interestingly, in Haskell-land the debate is about merging "do notation" and normal notation. In practice the difference is limiting, and arguably the types alone are enough to represent the difference.
I don't know, it's far from perfect, but is there a better way to do sequencing in a pure functional language? You have to be able to specify order of execution to tackle real world tasks unless you do callback/continuation passing style right? I find that a lot less intuitive for most applications.
Do notation isn't about sequencing, it's syntactic sugar for the monadic 'bind' operator. Sequencing happens because of data dependencies. This is part of the confusing story about monads -- the fact that every monad is a monoid doesn't imply a straightforward linear sequencing, as something like the tardis monad illustrates.
More constructively, it could be argued that IO just does not fall in the remit of functional programming. SPJ doesn't refer to Peter Landin much, but Landin's perspective is valuable. In 1964 he wrote "Is there some way of extending the notion of [arithmetic expressions] so as to serve some of the needs of computer users without all the elaborations of using computers?" (The Mechanical Evaluation of Expressions).
IMO this is exactly what the Haskell tradition is pursuing. Landin, quite modestly, doesn't anticipate that entire computer systems will be expressible in functional form. Implicit data structures are just one example of a mechanism that is incompatible with the idea that everything is an expression whose internal representation is managed by the language runtime (itself presumably involving mutable data).
I don't think Landin was experiencing a failure of vision--I think he saw clearly that some of the "elaborations" of computing are remote from a functional model.
Fair enough; I knew do notation was sugar for bind, but I didn't realize/forgot about the time traveling monad.
Follow up questions bc I'm still a Haskell noob:
Do you know if async/wait from Control.Concurrent.Async ensure linear sequencing?
Could the sequencing problem be solved if linear types make their way into Haskell? (They seem to mainly be about memory management, but I'm not sure about other potential applications)
> I guess you're referring to languages that are not-quite-without-side-effects, but I'd say the biggest influence the functional paradigm has had on other (imperative) programming languages is actually the addition of higher-level data manipulation operations.
I tend to agree. The two big wins from a more “functional” style, from my perspective, are the clear emphasis on the data and the way effects are more explicit and controlled.
I want things like higher order functions and algebraic data types and powerful interface/class systems. With those I gain many useful ways to represent and manipulate data that I don’t have in most languages today.
In a world where most mainstream languages are just discovering filter, map and reduce on their built-in list types, a language like Haskell gives me, out of the box, tools like mapAccumWithKey that work with any data structure as long as it provides the specific, clearly defined interfaces required for the algorithm to make sense.
In a world where most mainstream languages are worrying about accidentally dereferencing nulls or whether there's a proper type for enumerating a set of values, functional languages routinely use algebraic data types and pattern matching, and some go much further.
Arguably, these aren’t really functional concepts at all, in that you could have them just as well in an imperative language. However, in practice it is the functional-style languages that are far ahead in these areas, because they are a natural way to work in languages that emphasize composition of functions and careful, explicit handling of data.
I also want to know that I’m not applying effects on resources unintentionally, or sharing resources without proper synchronisation, or trying to apply effects on resources in an invalid order, or failing to acquire or release resources properly, or leaving resources in a mess if something aborts partway through an intended sequence of effects. This aspect goes a lot further than just making data constant-by-default, but it certainly doesn’t require trying to remove state and effects altogether. These things aren’t so much about making my code more expressive but about stopping me from making mistakes.
I want a language that will stop me from accidentally modifying a matrix in-place in one algorithm while some other algorithm has a reference to that matrix that it assumes won’t change. I don’t want a language that will stop me from ever modifying a matrix in-place. Sometimes modifying things in-place is useful.
I want a language that will be explicit about the initialisation and lifetime and clean-up of a locally defined cache or temporary buffer. I don’t want a language that tells me I can’t cache a common, expensively computed result 15 levels deep in my call hierarchy without changing the signature of every function on every possible path to that point in the code, or a language that will let me do whatever I want but only if I use some magic “unsafe” keyword that forfeits most or all useful guarantees about everything else in the universe as well.
In this respect, my personal ideal programming style for most tasks very much would be a hybrid of imperative/stateful and functional/pure styles, with the key point being that the connections between them should be explicit, obvious and deliberate.
Rust doesn't require use of unsafe for caching 15 levels deep if you use the right primitives, i.e. a Mutex. If you don't use safe primitives, then yes, you need to use the unsafe keyword to mark the code as unsafe. How is that unreasonable?
I was commenting on the beneficial influences from functional programming on mainstream languages, and noting that a purely functional programming style with no effects isn’t (IMHO) particularly necessary or desirable.
I don’t know much about Rust so I can’t comment much on that. The intended point of my example was that in a purely functional environment, you can’t have a local, low-level cache, because updating a cache is inherently stateful. So either you need something like a mechanism that lets you break the rules locally, like say unsafePerformIO in Haskell, or you need to infect your entire call chain with whatever mechanism you use to manage effects top-down. While that is perfectly reasonable, given the constraints you choose by adopting a purely functional language, I don’t think it’s particularly helpful.
My conclusion was that I’d rather have a hybrid of imperative and functional styles, contrary to the suggestion in the original presentation and in general agreement with stdbrouw’s comment.
Unless I am misreading your question... Erlang / Elixir?
They are purely functional in the sense that NO VARIABLE inside your code can ever be mutable.
However, they do have ETS (which is an in-process cache inside the VM) which is fully mutable and people have long made wrapping libraries around it for transparently working with mutable arrays, double-linked lists, queues, matrices, graphs and what have you.
The philosophy basically is "always work with immutable data except when mutable is more performant or is otherwise more practical". They don't shut the door on you, they just force you to make your intention to work with mutable storage very explicit and clear. That helps a lot when you go hunting inside your code for side effects, too.
As a maturing Elixir dev I can say this philosophy works incredibly well in practice. Code is smaller and much more readable, you don't worry about side effects like ever -- except on very, and I mean VERY RARE, occasions (in one year of working with it I only had to do that twice) -- and finding a bug is many times faster compared to Ruby, Javascript, PHP, Java.
Yes, they exist and have been used for decades; two well known examples are Common Lisp and Scheme.
For example you can do functional, imperative and OO programming in Common Lisp, and it brings extensive features for working in all these three paradigms.
There are several reasonably well-known languages that bridge the functional and imperative worlds in one way or another. However, I’m looking for more than just that. In particular, I’d like to have good tools for both imperative/stateful and functional/data-transforming coding, but all within a safe, structured environment in terms of effects/external interactions/observable behaviour.
Now, I am by no means a Lisp expert, so it’s entirely possible that I’m completely unaware of something here. However, I’ve yet to encounter much of an effect system in any flavour of Lisp. Indeed, it’s hard to see how the sort of explicit visibility and control of effects that I’d find useful could be achieved in a language with primarily dynamic typing using any of the approaches I’ve encountered so far.
Great points, but I have a question about one of them in particular:
> - Functional is OK. Imperative is OK. Both in the same program are a mess.
What do you mean by that?
My experience is that functional alone is impossible, since the only useful thing a program can do is through state changes; imperative is a-OK, and functional+imperative in the same program is the best way to do things (i.e. well-defined stateful areas surrounded by lots of functional code).
Once you get over the initial learning curve of the functional/pure approach to state/IO, it's far superior to imperative, imo. You don't need to reason about global state: because everything is explicit, including passing around your state, you never have to worry about "what if someone else or some other code somewhere is touching this" again.
> My experience is that functional alone is impossible, since the only useful thing a program can do is through state changes
Some programs are simply pure functions. If your program entry point accepts a string and returns a string, then you can write useful things, such as compilers, image processing, grep, etc, entirely as pure functions.
Logging with timestamps is not in opposition to purity.
All purity does is force you to change the type of a function that wants to log something, to reflect that it’s returning a value that, when evaluated at run-time, will have the side-effect of printing to the console. It’s still a pure function since it returns the same description given the same argument(s), the only difference is that when this description is evaluated by the run-time system, something will be printed to the console (in accordance with the description).
In other words, you’re not restricting the effects your program can have, only how you can choose to describe them (as first-class values whose evaluation has a side-effect that is not observable by your code).
Sure, it's not impossible to write something as a pure function. But it's not an interesting statement to make unless you also apply the implied context that it may be desirable to do so. Do you think it is desirable to make these pipeline components as programs using pure functions?
Ok, to clarify - I didn't mean program as in a simple function. I meant a program as in application, something that accepts some real-world input and produces some real-world consequences.
Right - a compiler. That's a real-world application isn't it? A compiler can be a pure function - accepting source text as input, and producing machine code as output. Yes more complicated languages do more complicated things, but for several languages you could write a state-of-the-art compiler as a pure function.
Only if you ignore the dynamic state of the language, of associated libraries, of processor architectures, and so on - i.e. the context the compiler is working in and the mutable state of the language and development environment as a whole.
The idea of pure functions is a false friend in CS because it tries to solve the problem of state by wrapping it up and wishing it away.
It's true that state causes a lot of problems, but so many useful systems rely on mutable state - at practical application levels - that it might be interesting to design robust systems that manage state, context, and relationship instead of trying to create contrived examples of state-free systems.
There seems to be a cognitive bias against this in CS. Most developers appear to love puzzle systems made of hard little components with solid edges, and thinking in terms of context and relationships seems to be disturbingly alien. So there aren't many programming paradigms that explicitly work with contextual inference instead of trying to rigidly delimit interfaces.
But there are real prizes to be won from associative context-smart computing, and IMO the domain is wide open for innovation - because it may be possible to give up the pursuit of complete safety and predictability (which doesn't exist anyway) in return for new kinds of powerful, productive, and smart features.
> Only if you ignore the dynamic state of the language, of associated libraries, of processor architectures, and so on
I don't understand why any of this means you can't have a compiler as a pure function. A pure function can cope with things changing internally - it just creates new data structures to represent things that have changed in the old data structures and then passes the new data structures on to the next phase.
A pure function just means you can't do something like a package manager that needs to read files from disk or download things.
> but so many useful systems rely on mutable state
You don't have to argue this to me - I wrote the first half of my PhD on the importance of mutable state.
I'm just arguing against the nonsense that it is impossible to write a useful real-world application as a pure function.
I work in the VM research group at Oracle, and guess what? Our JIT compiler is basically a pure function. It takes in some bytecode and produces some machine code. It's structured a little differently in reality, but it is logically, and almost in practice, a pure function from one to the other.
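To illustrate, here's a toy constant-folding pass as a pure function in Go (all types invented for the sketch): it never mutates its input and builds new nodes for anything that changes:

    package main

    import "fmt"

    // A toy expression language.
    type Expr interface{ isExpr() }
    type Lit struct{ N int }
    type Add struct{ L, R Expr }

    func (Lit) isExpr() {}
    func (Add) isExpr() {}

    // fold is pure: no I/O, no mutation; changed subtrees become brand-new nodes.
    func fold(e Expr) Expr {
        switch e := e.(type) {
        case Add:
            l, r := fold(e.L), fold(e.R)
            if ln, ok := l.(Lit); ok {
                if rn, ok := r.(Lit); ok {
                    return Lit{ln.N + rn.N} // new node; the inputs are untouched
                }
            }
            return Add{l, r}
        default:
            return e
        }
    }

    func main() {
        fmt.Printf("%+v\n", fold(Add{Add{Lit{1}, Lit{2}}, Lit{3}})) // {N:6}
    }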
This is the best HN post I've read today. You should expand it into a blog post.
I also agree - context and state are important and useful, because many real-life problems or processes rely on such a model and thus translate naturally when state is a "first-class citizen."
Erlang / Elixir are 100% immutable inside the code (no var can ever be mutable).
However, they have a mutable in-process cache (living in the BEAM VM) that many people have written libraries around for stuff like mutable arrays, matrices, graphs, and many others.
It goes like this: do 99.5% functional programming and you have the imperative / stateful tools for when they are absolutely necessary.
> Interprocess communication could use language support.
+100
There have been a few attempts in this direction, but they have mostly been couched in the form of a whole new language that also embodies at least a half dozen other novel (i.e. unfamiliar) ideas as well. Contra the OP, I think this is an area where incrementalism does work. Extending a language people already know with a few constructs for IPC, much like fork/join or async/await have done for concurrency, is much more appealing. I've been thinking about this for a few years now. Maybe I should write some of that down and let people pick at it.
> - Interprocess communication could use language support.
I'm really interested in hearing more of your thoughts on this, since it touches on one of my personal research interests. What kind of language support for IPC are you looking for? Something in the vein of session types [1], which checks that two parties communicate in a "correct" sequence of messages?
Lower level than that. Languages should have marshalling support. Marshalling is a low-level byte-pushing operation for which efficient hard machine code can be generated.
I'd suggest offering two forms of marshalling: strongly typed and non-typed. Strongly typed marshalling means sending a struct to something that expects exactly that struct. That will usually be another program which is part of the same system. Structs should be able to include variable-length items for this purpose, so you can send strings. Checking involves something like function-signature checking at connection start. This should have full compiler support.
Non-typed marshalling includes JSON and protocol buffers. The data carries along extensive description information, and the sender and recipient don't have to be using exactly the same definition.
Both are needed. Non-typed marshalling is too slow for systems which are using multiple processes for performance. Typed marshalling is too restrictive for talking to foreign systems.
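A rough illustration of the two styles using Go's standard library - encoding/gob sits near the typed end (encoder and decoder must agree on compatible structs, though gob does embed some type description), encoding/json at the self-describing end; the `Update` struct is invented for the sketch:

    package main

    import (
        "bytes"
        "encoding/gob"
        "encoding/json"
        "fmt"
    )

    // Update is an invented message type with a variable-length field.
    type Update struct {
        ID   uint32
        Name string
    }

    func main() {
        msg := Update{ID: 7, Name: "sensor-a"}

        // Typed-ish: compact binary; the decoder must expect a compatible struct.
        var buf bytes.Buffer
        gob.NewEncoder(&buf).Encode(msg) // errors elided for brevity
        n := buf.Len()
        var out Update
        gob.NewDecoder(&buf).Decode(&out)
        fmt.Printf("gob:  %d bytes -> %+v\n", n, out)

        // Self-describing: readable text that foreign systems can interpret.
        j, _ := json.Marshal(msg)
        fmt.Printf("json: %d bytes -> %s\n", len(j), j)
    }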
I don't disagree, but during my career I've found a surprising number of cases where ASN.1 and its practical encodings like DER, BER and PER work impressively well.
Human readability however has always been the selling point of terrible formats like XML and JSON.
Not sure how a binary-compact format can ever account for that. Maybe excellent cross-platform tools that allow you to inspect and modify the binary-compact format wherever it is? (I mean not only standalone CLI and GUI software; I also mean native browser support in the Dev Tools space and, going down the line in the future, transparent support for the format[s] natively in the programming languages / VMs themselves.)
> Previously the options were reference counts, garbage collection, or bugs.
I think you mean “lack of memory safety” rather than bugs. Garbage collection doesn't magically free you from finalization bugs, it just makes their consequences less disastrous.
> Formal methods are still a pain. The technology tends to come from people in love with the theory, resulting in systems that are too hard to use.
Clearly, the solution is getting rid of people, but it isn't entirely clear on which end to get rid of people.