I wish them all the best. Julia seems to be a good idea -- a high-performance language with easy syntax (easy for Python users to jump to), very good features for threading/multiprocessing, and a good type system...
...so far it seems like any other modern language, until you see that Julia has something that many other languages lack: true macros (true metaprogramming). A big feature. And multiple dispatch on all functions! (A very nice feature that puts it above many other languages in use.)
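To make the dispatch point concrete, here's a minimal sketch (the types and methods are invented for illustration):

```julia
# Toy types, made up for this example.
abstract type Shape end
struct Circle <: Shape end
struct Square <: Shape end

# The method is chosen by the runtime types of *both* arguments,
# not just the receiver as in single-dispatch OO languages.
collide(a::Circle, b::Circle) = "circle/circle"
collide(a::Circle, b::Square) = "circle/square"
collide(a::Square, b::Circle) = "square/circle"
collide(a::Shape,  b::Shape)  = "generic fallback"

collide(Circle(), Square())   # -> "circle/square"
```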
You can even program Julia in s-expressions if you feel like it. (Some argue that Julia should be considered a Lisp dialect.)
Compared to other languages with Python-like, C-like or Algol-like syntax, Julia stands out as a more powerful alternative. (If you need more power and flexibility than Julia, with good processing speed, I think only Common Lisp will clearly provide it.)
A very recommendable language, especially now with this initiative to give it more "enterprise-like" support, and worth looking into in depth if you are also considering moving to Go or Rust.
Julia is a great language, but it is still focused on the numerical computing world and is not at "1.0" yet (the language and APIs are not locked down). I wouldn't really compare it to Go or Rust at this point. Go is focused on building services at its core (e.g. external event-driven workflows), a.k.a. microservices. Rust is focused on being a "traditional" systems programming language: things like web browsers (duh), office suites, games, etc.
Julia is focused on taking the best of R, Python/Numpy, and Matlab. It is a _great_ language. But as far as I can see it's really focused on a different problem space.
If you take a look at the features (esp. macros and multiple dispatch) you can see that it is readily extensible for lots of applications. Hence the reason it is also sold as a "general purpose" language.
Additionally, the very good support for threading and multiprocessing means it has a good future with services, async IO, and event-loop web servers (a la node.js and friends).
Finally, the focus on speed for numerical computation is particularly suited to games (as is the support for threading).
I didn't think 1 based arrays would be a big deal, but whenever I try to port code from Julia to almost any other environment (C++, python/numpy, scala, JS), it becomes one of the hardest things to reason about quickly and correctly.
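As a tiny illustration of the bookkeeping involved, here's a sketch (the function names are mine) of porting 1-based, column-major index arithmetic to 0-based Python:

```python
# Hypothetical port of 1-based, column-major index arithmetic to 0-based Python.
m, n = 3, 4  # rows, cols of an m-by-n matrix

def linear_index_1based(i, j):
    """Julia-style: i in 1..m, j in 1..n; returns a 1-based linear index."""
    return i + (j - 1) * m

def linear_index_0based(i, j):
    """Python-style: i in 0..m-1, j in 0..n-1; returns a 0-based linear index."""
    return i + j * m

# The same logical element, addressed in both conventions -- the shifts
# land in different places, which is exactly what's easy to get wrong:
assert linear_index_1based(2, 3) - 1 == linear_index_0based(1, 2)
```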
1-based arrays and column-major layout annoyed me at first. However, one of the beautiful things is that much of Julia is written in Julia. If you want 0-based (or arbitrary-based!) arrays, simply use the https://github.com/JuliaArrays/OffsetArrays.jl package; and PermutedDimsArray has been added to Base to handle some of the issues of dealing with row-major (and other sorts of memory layout) matrices.
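For example, a quick sketch of what OffsetArrays enables (assuming the package is installed):

```julia
using OffsetArrays

# Wrap an ordinary 1-based vector so its indices run 0:2 instead of 1:3.
v = OffsetArray([10, 20, 30], 0:2)

v[0]  # -> 10
v[2]  # -> 30
```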
Games are not going anywhere from a mostly C++ code base. New languages get adopted if they are too good to pass up, or if they have enterprise adoption like Go. Even Python took 20 years to reach where it is now. Julia will take a long time to become a general programming language for all purposes.
Scientific computing is a realistic goal, just as Golang is a realistic language with enterprise adoption. It may not be the most elegant language, like Lisp, but it is practical. (People don't want the perfect solution but one that works most of the time.) (P.S. I am vaguely quoting this from some anecdote about a discussion of Lisp and C.)
Handling technical/scientific/numeric programming well is rather a hard nut to crack, and that is where the developers of Julia wanted a better solution for their own "greedy" programming needs, but Jeff Bezanson has been clear that it is also for general purpose programming (which is what I use it for). Many Julians are working to improve Julia for places that hadn't yet gotten as much attention (such as database access). I think in a couple of years, the canard of it being a "niche" language will be laid to rest.
Unfortunately 1.0 is delayed for a couple of months. 0.6 was released yesterday though, which already includes many of the features that were originally scheduled for 1.0.
I have yet to fully jump into Julia, but it is one language that has impressed me for a while (and I've seen a good number of languages that impress me). Julia has a few things that to this day I think back to, like code_native() and how it returns assembly code for whatever you throw at it. It might seem trivial to others, and y'know, you can do it with C too, but it's still cool that you can just do it.
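For anyone curious, it's a one-liner to try; the exact assembly you get back varies by CPU and Julia version:

```julia
f(x) = 2x + 1

# Print the native machine code generated for f when called with an Int:
code_native(f, (Int,))
```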
Your description of it makes it very intriguing and makes me want to revisit it.
For scientific users used to R or Python, the performance should knock their socks off.
(But users doing their work in C, C++ or Fortran will probably see little or no improvement).
Numpy is good for vectorized calculations on dense multidimensional arrays of homogeneous 64- or 32-bit floating point numbers, or integers. If that describes your problem and you like Python, by all means use numpy. If that doesn't describe your entire problem, you'll need more tools.
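Python's standard library has a rough analogue of that constraint, which may make the point concrete: a compact buffer that only holds one element type (numpy generalizes this to multidimensional arrays with fast vectorized math):

```python
from array import array

# A dense, homogeneous buffer of 64-bit floats -- the kind of data
# the argument above is about.
xs = array("d", [1.0, 2.0, 3.0])
xs[0] = 4.5  # fine: it's a float

try:
    xs[0] = "not a number"  # rejected: the buffer is homogeneous
except TypeError:
    pass
```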
It should be understood that numpy is written in C, and is an example of the "2 language problem" that Julia solves rather well.
The question really is the converse ... given how deft Julia is with general programming as well as numerically intensive programming, do you really need Python + numpy (C)?
But good enough for doing numerical computing for NASA and for Raytheon.
NASA used it for complete command of the Deep Space 1 spacecraft. The Remote Agent wasn't just entirely written in Common Lisp; it also won NASA's Software of the Year Award...
"It's one small step in the history of space flight. But it was one giant leap for computer-kind, with a state-of-the-art artificial intelligence system being given primary command of a spacecraft. Known as Remote Agent, the software operated NASA's Deep Space 1 spacecraft and its futuristic ion engine."
... and I don't think they did it without heavy numerical computation.
Want another application? Analyzing HDTV video images of the Discovery launch in real-time. Again, NASA.
Another one? Mind you, I'm only citing examples where numerical computation is heavily involved...
How about simulation of missile defense systems at Raytheon?
"SigLab simulates an incoming warhead intercepted by the Exo-atmospheric Kill Vehicle (...) The system is capable of solving any computable system of difference equations."
Another one?
"the new version of the complete Piano aircraft analysis tool, used by several major aircraft and engine manufacturers worldwide."
As for the "type system being weak", the comment puzzles me, considering that Common Lisp has one of the best (if not the best) OOP systems, CLOS. There is an automatic correspondence between objects and types in CLOS, so I can argue it has one of the best type systems out there.
It is. I know of the examples you mention (and some of them, like planning, may not really involve much numerical work), but the fact is that unless you're doing things from scratch (like starting from GEMM), numerical libraries are often badly documented, slow, or otherwise incomplete (or a mixture thereof). You can disagree with me, but Tamas Papp, who maintained a bunch of CL libraries, seems to have moved over to Julia entirely.
> As for the "type system being weak",
I meant weak in the sense that it's not very expressive, so as to allow better static type inference. I'm not saying CL should be static for "correctness" reasons, but to free the user from having to put declarations all over the place.
Let's reify. If I were to create a bunch of matrix classes (matrix-float, matrix-double, ignoring the lack of sugar), CL provides no way to declare a generic method 'mref' to be intelligent enough to know what the output will be, given the input type. Even getting the type in the compiler-macro is implementation dependent. In C++ this is extremely easy with templates, and in Haskell you do it via a type declaration (AFAIK).
Yes, you can get around it via lots of macro magic, but this is so fundamental that you might as well create a new language (like Qi/Shen).
> Let's reify. If I were to create a bunch of matrix classes (matrix-float, matrix-double, ignoring the lack of sugar), CL provides no way to declare a generic method 'mref' to be intelligent enough to know what the output will be, given the input type.
I am not sure if I follow you.
If you have "matrix-float, matrix-double" you are perfectly able to define a "mref" generic method that acts smartly depending on the input type. This is one of the most basic features of the CLOS OOP system. Maybe there is something missing from your explanation?
I don't dispute the effectiveness of Common Lisp, but
> There is an automatic correspondence between objects and types
is almost never what is meant by a "strong type system" -- it more commonly means that types are checked at compile time, rather than run time. I don't doubt that any given lisp hacker could slap together a type inference system on top of CLOS, but as far as I can tell it's not built in.
So, on the strong-to-weak type system spectrum, I would say:
Strong has various meanings. Ada is "strong" versus C being "weak" though the basic Algol-like model is not so different, and if we don't count tagged types, neither language is dynamically typed in any way. (So if "strong" means "static", we would not be able to use the strong/weak terminology to contrast these languages!)
"Strong" in this context means various things, ranging from C having holes in the type system that allow punning and bad memory references, to C having automatic conversions that are unsafe: for instance, assigning a floating-point value to an integer location without an explicit conversion operator that handles situations where the conversion is impossible. Or not having a character type distinguished from an integer type.
C++ has more "strongly typed" enums than C; and what that means is that you can't assign an integer value to an enum object without a cast.
ANSI Lisp is fairly strong, but allows some relaxations: car on an empty list, integers usable where floating-point values are expected (a bit like C), and such.
You are confusing "strong/weak" typing with "static/dynamic" typing.
CL is for the most part strongly typed. Python too. In CL there are almost no automatic typecasts.
JavaScript, PHP and Perl are textbook examples of weakly typed languages. The interpreter (or compiler) can even allow things like 1 == "1".
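A quick sketch of that coercion in JavaScript:

```javascript
// Loose equality silently coerces the string to a number:
console.assert(1 == "1");          // passes: coerced comparison
console.assert(!(1 === "1"));      // strict equality: no coercion

// Operators coerce too, in ways that are easy to trip over:
console.assert("1" + 1 === "11");  // + prefers string concatenation
console.assert("2" * 2 === 4);     // * converts both operands to numbers
```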
With implementations like SBCL, CL also allows for static type checking via the "declare" and "proclaim" forms; this checks types before runtime, when they can be inferred without executing the code.
I was a bit surprised by the idea of the FAA using Julia for an "Aircraft Collision Avoidance System", since it apparently doesn't have static type checking and deployment seems like a weak spot.
Digging a bit deeper, it appears they use it to deliver specifications and example code to vendors?
"[T]ransferring the specifications to industry using this legacy system required three different types of documentation: first, the specifications were written both in variable-based pseudocode and in English descriptive pseudocode. But this approach left gaps in interpretation, leading to possible confusion or disagreement. So programmers also created state charts to fill these gaps and eliminate the potential for misinterpretation."
Yes, it's quite cool. The specification is a literate Julia program and the specification document is built from it. Simultaneously, they run large scale simulations to make sure that the specification behaves as the theory says it should. There was some discussion with them about deployment, and they funded us for a proof of concept Julia-to-C compiler, but the continuation of that discussion is probably still a number of years off. It's a very long term project.
I know it's also gained a following at MIT's Broad Lab amongst genomics folks. And that seems to be Julia's sweet spot: easy parallelism for scientists who can't spare the time futzing with HPC internals. Congrats to the Julia language team and to the great scientific discoveries that will be enabled with this investment!
Interesting. I'm a member of a Broad Institute lab that recently evaluated Julia. We reached the conclusion that Julia, while promising, lacked the flexibility and performance of our existing tools. I suppose this is a problem that investment could address!
* Pandas is wrapped by Pandas.jl, interfaces with Julia's DataFrame, generally excellent.
* Images.jl is a 60%-to-70% replacement for scikit-image.
* NumPy and SciPy are mostly covered by StatsBase and Base, can't think of anything off-hand that's missing.
Julia's still immature in the machine learning domain, but a lot of new work is being done. One interesting somewhat-new package is Madeleine Udell's LowRankModels.jl, which has a particularly Julian approach to low rank fitting.
This is totally fair. Even the Julia creators regularly say "stay with existing tools for now, if you are happy." Julia is an increasingly attractive option for greenfield, high-performance development, but for folks who aren't doing a lot of that, a more stable language and mature ecosystem is obviously important. Julia and many libraries can still require some elbow grease, and the maturity and coverage vary between fields. However, this is rapidly changing as folks do new research in Julia and build up everything else they need along the way. The current situation is very similar to SciPy in the mid-2000s, though with perhaps a somewhat different mix of backgrounds. The packages listed above are a few examples.
So, hopefully you'll have reason to take another look in a few years. The most important thing, to me, is that all of these communities want more open, reproducible, and accessible science. There is plenty of room for growth, because users of closed-source software still probably outnumber py/r/jl by 5:1 or more.
Julia frustrates me. I was in a mathematical modelling sphere, so I learned the language basics years ago and immediately fell in love (multiple dispatch, optional types, broadcasting, ...). But I couldn't get approvals at work to push it, because it was immature. Then a month later, all my code broke. I rewrote it using new APIs. It broke again. I know they reserved the right to make breaking changes up until 1.0, but they are hindering adoption because of all this. Sorry, guys, but I'll have to stick to Python, Go and (sigh) Fortran.
That being said, I applaud getting good funding for a project that's actually more than beyond MVP, it has happy users/customers and it serves a purpose. That's not all that common these days. Good luck, guys.
An alternative would be getting your bosses to approve the use of Common Lisp.
It is mature (30+ years in use, a rock-solid ANSI standard), with many compilers available for many platforms, highly portable code, and multiple dispatch (CLOS is arguably the most powerful object-oriented system available); you can also very easily call C libraries with CFFI, and there is portable support for threading...
...and well written CL code will approach Fortran and C in speed, at least with SBCL which is a free, popular compiler.
You can also compile to the JVM and to LLVM, and if you need professional support, there are LispWorks and Franz Inc. Ah, and with ABCL you can also easily call Java libraries, if you need to.
Yes, Common Lisp is freaking amazing, but it isn't as good an out-of-the-box experience as a lot of these products. How do you do plotting besides pushing to gnuplot? The REPL is great, but can you specify types? Yes, it can reach near C/Fortran speeds, but how much time will I spend optimizing it? Are the built-in linear algebra, optimization, etc. libraries good, or must I toil with FFI? How much time is really saved over dealing with all that and learning SBCL, SLIME, Emacs, etc.? Data scientists just don't have the time.
I was assuming that the use case cited above by drej was doing a complex, big system requiring numerical computation/analysis. It was mentioned that Julia was rejected on grounds of being immature technology, so I assume we're talking about a big project, perhaps destined to run as a server or as a clustered system.
The use case you imply is different, it is more like a scientist working alone at a workstation for interactive calculations, in which case perhaps Python with Numpy and Scipy is just perfect.
In any case, for the previous use case (a complex, big system), I'd use CL for the actual calculations and do the visualizations as interactive web graphics with the extremely nice Bokeh library, under Python with Flask as the web framework, for the presentation layer.
> Yes it can reach near C/Fortran speeds, but how much time will I spend optimizing it?
I guess you are aware that to maximize speed in C you need to be very careful about how you write your code. In this sense there is no easy way out in any language. The good part is that Lisp, unlike C, is garbage collected, so working on a piece of code can be easier since there is no need to deal with pointers, allocation, and deallocation. It also supports arbitrary-precision numbers, fractions, and complex numbers out of the box; no need to do anything in particular to optimize their performance, nor rely on external libs to operate on them. It is particularly good (speed-wise) with big integers and arbitrary-precision numbers.
As for Common Lisp itself, besides that observation (which applies to all languages), you need to add type declarations. This improves performance a lot. You are going to add type declarations in a statically typed language (C, C++, Julia, Java, etc.) anyway, so I fail to see how this could take more time than in other languages.
Alright, so slightly different assumptions. You're talking about using CL for just one part of the process which is reasonable if not a little frustrating if one must switch between multiple different technologies.
Binding to GSL was kind of my point. That requires a lot more setup to use the FFI and bind what you need to CL. In Julia I imagine one either does A*b or loads a batteries-included library first and then does that. My point being, there is a fair amount of overhead in CL to get to that magical development spot :).
I think I've seen that plotting library in the CLiki link, but it has a long way to go if you're used to Matplotlib, JS, or what is included in .NET.
Type declarations are another thing the user would have to research and deal with. In Julia it is just part of what you do, so a bit more natural, unless you can declare the types in standard CL without loading a library or something else odd. Can you just (define int(a) 3) and let the compiler guide you? I honestly don't know, but I don't recall seeing that in the Lisp tutorials and books I've read. My s-exps are probably wrong too lol.
The GSL library is already usable in CL, no need to add bindings; I included the link.
> you can declare the types in standard CL without loading a library or something else odd. Can you just (define int(a) 3) and let the compiler guide you?
Yes, and yes. No need to load anything, and the declarations are as simple as "(declare (fixnum a b c))". The compiler does the rest. You can also specify ranges, etc.
"declare" is part of the ANSI standard.
> I think I've seen that plotting library in the clicki link, but it has a long way to go if you're used to Matplotlib, JS, or what is included in .NET.
There are many libs, not just one. But you might be correct, in which case I'd just have the visualization done in JS or .NET (etc.), and just load the final data for the plot through HTTP from the Lisp server (which is the one that did the heavy lifting).
I didn't know GSL could be used right out of the box. That's good to know. The declarations part is pretty nifty too. I can't believe I didn't notice that before.
Perhaps easy for you and probably not super difficult for most people considering CL, but it's still a bigger pain than having 1 single tool that can build fast code with very little hand optimization, REPL, analysis libraries, built in plotting, and with a familiar syntax. Thanks for all the comments though. You pointed out parts of the CL ecosystem aren't nearly as bad as I thought. You should consider some blog posts if you've done this sort of thing before!
My girlfriend does incredibly complex modeling (she's a computational geneticist). From my limited understanding, it sounds like you may have some of the same challenges that she does.
Do you think Python and Go are more suited to that domain than R or Julia?
R is very, very slow compared to all the other programming platforms (including Ruby, a nice language but with some of the slowest implementations out there).
R syntax is very unlike most programming languages, so jumping to other languages can feel strange.
Python with Numpy, SciPy and Pandas, is the current widespread alternative to R and a good starting point. Recommended!!
For all the cases I worked on, R is not slow at all. At home, I wrote an R package to do deep learning from scratch (including most common layers, though only the conv layer was implemented in C++), and I also wrote one in Python (using numpy + numba). My R version is as fast as the Python one on the MNIST dataset (without conv layers, hence all code is in plain R or Python).
What this means is that for your purposes R is fast enough. But in an absolute sense (or in any case, relative to other programming platforms), R is one of the slowest out there. R is usually slower than Python, and Python in some cases can be 100x slower than C.
Performance, assuming identical hardware, depends on the program itself (that is, on what you are trying to calculate or compute or perform), but if you choose many different programs you can have a comparison between platforms ("platform" in this case means combination of programming language plus compiler or interpreter.)
Yeah, the Python implemented version is. But people doing serious computing in Python that requires speed are doing it with NumPy or even Cython or just straight up calling C/Fortran libraries in Python.
Go isn't meant for computational science, but for business code delivered as services. Python is really good in this area when combined with Matplotlib, NumPy, and SciPy.
I wonder if this would be a feasible funding strategy for Crystal (https://crystal-lang.org/) as well. From the scattered benchmarks I could find online†, Crystal seems to be just as fast, if not a bit faster.
Well, they both compile via LLVM, but Julia uses a JIT rather than going straight to a native binary. I bet they have very similar performance. The use cases are different though. Julia is a replacement for MATLAB, R, Fortran, and scientific Python, with speed and good macro support. Crystal is basically fast/native Ruby, so websites and business apps. You could technically use either for those applications, but the communities will be mostly scientific computing for Julia and business apps for Crystal.
In a pre 1.0 language there will always be some things that are still waiting to be optimized. I saw some performance spec for one of their HTTP libraries awhile back and it was a lot faster than Ruby. It might have been faster than Go.
Well, it is the only LLVM language on the list. Even so, Nim transpiles to C and should be fairly comparable unless it is because of the async part or like you said they're cutting corners. I guess it's possible they're just that good, but it seems unlikely.
That's an impressive amount of money to raise in Seed funding. Here in the UK this is more like Series A money: a Seed round would probably be around $700K (no authoritative source - purely from knowledge of a few companies raising this kind of money).
I do wonder though: has Julia been able to raise this much money thanks to awesome traction, or the reputation of the team?
That's A money. They were founded in 2013. They have 11-50 employees. They already have products (read: MVP) and services. That's not proving a hypothesis seed money. That's scaling an enterprise A money. That said, Crunchbase [1] says seed round. But that's A money both by size and apparent intent.
We are self-funded and have been profitable from day one. [2]
- JuliaPro is the go-to product for anyone who wants to use Julia – professionals, engineers, software developers and data scientists. Beginners and experts can build better software quicker and dramatically improve performance.
- JuliaRun is for scalable deployment of Julia in production for real time analytics and large-scale parallel simulations in the public cloud or a private cluster.
- JuliaBox lets data scientists, quants and strats run Julia in Jupyter notebooks right in the browser. JuliaBox is the number one choice for universities teaching Julia since students can get started with Julia in seconds with no installation required.
- JuliaFin is a suite of Julia packages that simplify the workflow for quantitative finance, reducing the gap between the quant and the trader. JuliaFin provides the fastest path from market data to market transactions by leveraging the full power of Julia for implementing new trading strategies where time is of the essence.
JuliaPro is basically VisualStudio Express vs. Pro (e.g. there's a free version and then an expensive version with better integrations, better enterprise level tools, etc).
JuliaRun is basically a cloud service (e.g. charge for hosting) designed for julia code.
JuliaBox seems to be a cloud IDE / google docs / notebook style product. Probably with a similar model to github eventually (e.g. pay for teams / commercial use, etc).
JuliaFin seems to be a way to charge banks a lot more money for the same stuff.
Hey, I'd be interested in knowing if you're just selling plain ol' non-free software. Can I buy the PRO version and give it to my friends, can I study it, can I modify it?
It's a little sad when we are trying to figure out how to make money for free software and the answer ends up being "sell (some) software non-freely". Okay, so it's impossible to make money without selling non-free software? There can only be one Red Hat in the world?
I don't think it's any consolation that you are also producing free software; pretty much every major software vendor nowadays does that while keeping all the interesting secret sauce secret.
> Can I buy the PRO version and give it to my friends, can I study it, can I modify it?
Mostly: JuliaPro is essentially a nice convenient installer of a huge collection of open source work (though I can't say for sure, as I'm not really involved in that part). A few small components (e.g. the Excel plugin) are not open, and we are moving to MKL for some stuff, so that will change slightly though.
Open source business models are hard (as you're no doubt aware). You may disagree, but I think Julia Computing has so far done a good job of balancing the open source vs business aspect, which is part of the reason I originally joined.
JuliaFin actually includes a bunch more stuff on top of JuliaPro/JuliaRun that is finance/banking/insurance specific. We have a really cool library called Miletus, a contract specification language based on a paper by Simon Peyton Jones. It will also include JuliaDB, Bloomberg connectivity, etc.
So, not more money for the same stuff, but more money for a lot more other stuff. Some of these components are proprietary, but many are open source.
So, are they just selling plain ol' non-free software? The "PRO" features seem to be ripping out the GPL libraries so they can link to MKL with a support contract. I have hope this means that the paid version is also free software.
Right, but selling non-free software isn't interesting for those of us who really want to see more businesses thrive around free software. If it's really impossible to make money without selling non-free software, then Julia isn't very interesting from an economic perspective.
If Julia hasn't figured out how to make money without restricting customers via legal or technical means, trying to lock them in, or putting software in their machines that controls their machines without the full knowledge and consent of the users (i.e. source code and permission), then I'm not really interested in what they're doing economically.
> JuliaBox is the number one choice for universities teaching Julia since students can get started with Julia in seconds with no installation required.
I don't understand. Julia is not stable at all yet. Who would want to teach it now? What would be the point?
I recently used Julia in my numerical analysis course and it worked well for the most part. We couldn't use it for some of the larger projects because it wasn't as optimized as MATLAB for certain matrix operations. However, it was great most of the time and better than using MATLAB on the university computers.
They sell an enterprise version of Julia for $1500 per year. The main things you get are support, MS Excel integration, and a non-GPL license. They also offer consulting, training, and a few other flavors of Julia (large scale server based gets its own edition).
I'm sure the main draw will be companies that don't want to be stuck with a GPL license in their product.
Isn't Julia licensed under the MIT license? They advertise being MIT licensed on their website, and their Github repository is also under the MIT license.
The core language is. The libraries are another matter.
From the website:
The core of the Julia implementation is licensed under the MIT license. Various libraries used by the Julia environment include their own licenses such as the GPL, LGPL, and BSD (therefore the environment, which consists of the language, user interfaces, and libraries, is under the GPL).
>They sell an enterprise version of Julia for $1500 per year.
$1500 per year
You can't support a 100-person-strong developer collective on the proceeds from licensing such a product at all, given how small that commercial "data science" market is.
The Julia core is MIT and thus not an issue. The issue with making a bigger GPL-free package is in libraries and dependencies (which I assume are either turned off, swapped out for non-GPL versions or have dual-license deals of their own for the gpl-free package)
For those thinking "What's Julia?", article has a nice tldr --
Julia is the fastest modern high performance open source computing language for data, analytics, algorithmic trading, machine learning and artificial intelligence.
So, this is much more like a Series A? I wonder if they were advised to call it seed funding to leave open the possibility of a "big" Series A, due to the level of interest. It does seem like SV VCs are making somewhat large bets on F/OSS-based companies, so maybe that is wise.
General Catalyst and Founder Collective are both Boston funds, so SV rules don't apply here. For better or worse raising in Boston is very different. I did a bit of diligence on this deal as it was going around (for a different fund), and the dynamics of the company stage probably dictated the raise size.
It's not uncommon for an institutional seed round to be $2-3MM on a valuation of $6-15MM in Boston right now. The size of rounds is partially a function of how frothy the OSS tools market has gotten. I'm looking at another hybrid enterprise/OSS deal that has a valuation north of $15MM.
A rounds nowadays look for $1MM ARR, so you could say the letters have shifted as well :)
Sounds like a good idea for the language and a good startup idea. I know of two businesses that support Python, and since data science is a growing field we can always use more competition in environments and languages.
As someone who used Python and R, and also Julia, my take on Julia has changed over time.
My overall assessment is that yes, it's definitely worth getting your feet wet with. It has all the advantages of R or Python (for numerics, a point I will return to), but with much, much better performance. I feel like the syntax is also cleaner than either, although it has more of an advantage over R than Python in that area (I like Python's syntax more than R's).
I had an experience of some prototype R code running for about a day without finishing. The same code in Julia finished in about 5 minutes. It was kind of the final straw that convinced me to gradually move.
Since that time, though, there's one issue that's kind of nagged me, and has only grown over time, which is that Julia is kind of a niche language, like R. It's a big niche, and it might not matter, but over time I've come to appreciate the fact that Python is more general purpose. I am also watching as things like Kotlin, Scala, and Nim gain in popularity and in resources. I suspect that Julia will expand over time, but those others have a head start in some ways (even if they are behind in other ways).
Like some others, I also had some experience of head-scratching changes that occurred with new API-breaking releases. They were subtle changes that were difficult to catch because they weren't deprecations or things that caused errors, but changes in how valid syntax is interpreted. I don't see that as a long-term problem, but it gave me pause.
I guess the TLDR is: if you're interested, I recommend you dip your toes in it, if that works for you, but with some caution. I see it more as a replacement for R long-term than Python, and I see serious competitors rising in popularity.
No. Not a knock on the language (I've never used it) but look at the R and Python ecosystems - it will be incredibly hard to replicate them. Then you have all the proprietary tools - MATLAB, Mathematica, Q etc - they aren't going anywhere. Then there's all the legacy FORTRAN code...
If it's still around in 10 years then it might be worth looking at (but still probably not).
This is true, sort of. It was also true when people said the same about Perl (some people still say this about Perl!). Then Rails happened to Ruby, just as the web was going through another phase of explosive growth. Likewise with Node, or with Rust and the GitHub effect.
The point is not to compare the growth trajectories or projected peaks, but to emphasize the levering effect of an enjoyable language combined with modern collaborative development and infrastructure. A language doesn't need "everyone" to build a viable ecosystem, just a critical mass.
> If it's still around in 10 years then it might be worth looking at (but still probably not).
Knock on wood and all, but from past observation, I think Julia is at or very close to this critical mass already. I would compare Julia now to the SciPy ecosystem around 2009-2010. That's just when IPython was getting popular, early versions of Pandas were released, and people were starting to trust Cython. (I was a heavy Python user then, and frequently suggested it to Matlab users who had exactly your question even into 2012-2013.) There's already a surprising breadth of packages; they just need time to mature. Sometimes too many, actually -- I recently found three for a relatively obscure geophysics format I used to work with. That lever is big enough that people often don't mind rolling their own.
I've heard good reviews of the language here, so I have no doubt about its technical capabilities. However, funding might suppress the ability to make breaking changes, which I believe are a good thing until 1.0 or so. On the business side, will we end up with (hopefully) a better, cheaper Matlab?
My guess is that we'll probably start seeing pricing split into various modules a la Matlab: Julia Financial, FEM, Semiconductor, etc. Also Julia Enterprise and the like.
Maybe Mathworks will acquire them and call it MATLAB 2020!
I'm not sure I buy the notion of, "Solves the two language problem." There is always going to be some library or environment consideration that makes a unified language for everything impractical (unless your entire business runs on an AS400).
The two-language problem is more specific to scientific computing, where literally every popular library is written in more than one language: fast C/C++/Fortran underneath, with a convenient high-level R/Matlab/Python layer on top. NumPy, SciPy, Caffe, Theano, and Tensorflow, to name a few. I once had to rewrite Matlab + C code (for face tracking) in pure Julia: not only was the resulting code almost half the size, it also ran ~20% faster.
I wrote a library in C to replace a Julia one I wrote (and to make it more portable) and found that the C version was slower! (I couldn't inline functions as easily in C, and was also being more parsimonious about memory usage in C.)
Even there you've got support for C, C++, and Java, among others, as well as the whole IBM family (RPG/ILE, CL, REXX, ...), with somewhat decent bindings in RPG.
The actual internals, and features such as types and multiple dispatch, can make it very different to use, but hopefully that should be enough to get you started.
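To make the multiple-dispatch point concrete, here is a minimal sketch (hypothetical toy types, modern Julia 1.x syntax, not from any real package): a single generic function gets several methods, and the one invoked depends on the runtime types of all the arguments, not just the first.

```julia
abstract type Shape end
struct Circle <: Shape; r::Float64; end
struct Square <: Shape; s::Float64; end

# One generic function, three methods; dispatch considers the
# concrete types of *both* arguments, unlike single-dispatch OO.
overlap_hint(::Circle, ::Circle) = "compare center distance to r1 + r2"
overlap_hint(::Square, ::Square) = "compare axis-aligned extents"
overlap_hint(::Shape,  ::Shape)  = "fall back to a generic test"

# Mixed argument types fall through to the most specific
# applicable method, here the (Shape, Shape) fallback.
a = overlap_hint(Circle(1.0), Circle(2.0))
b = overlap_hint(Circle(1.0), Square(2.0))
```

Because dispatch is open, a downstream package can add `overlap_hint(::Circle, ::Square)` later without touching this code, which is a big part of why Julia libraries compose so well.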
So this is why every once in a while we see some article about Julia being as easy to use as Python while having nearly the performance of C. Those are some very slick but sketchy marketing tactics.
I'd rather deal with Cython's Apache license than deal with this GPL stuff for commercial use.
This seems to be a terribly misinformed comment. There isn't really a problem with using GPL tools in a commercial context, as long as they don't become part of your work product. GCC is probably used to compile a large percentage of commercial software. And I'm sure you're writing comments in Chrome, yet expect to retain the copyright on those words.
Moreover, since I've been seeing this kind of hostility towards the GPL more and more often: it should be somewhat obvious to everyone that the current ecosystem of widely available, highest-quality OSS stacks is a rather surprising development. There are certainly parallel universes where parallel computing requires you to buy the compiler from IBM, and URIs end not in .html, but in .doc.
The GPL and its advocates aren't the only ones who created this, but I'm pretty sure they were at least necessary. That includes not just the software they created, but also the force of the moral argument for software freedom they made.
What I really dislike about a lot of the GPL arguments is that they are complaints that the GPL doesn't let you freeload masked as arguments that the GPL is anti-business. The GPL is one of the most business-friendly licenses there is. The freedoms guaranteed by the GPL mean dual licensing works. Good luck with a dual licensing strategy if you release your software under the MIT license.
> So this is why every once in a while we see some article about Julia being as easy to use as Python while having nearly the performance of C.
Here's a performance testimonial with links to real code and benchmarks against Fortran (not Python), from Steven G. Johnson, one of the authors of FFTW:
The reason Julia can beat the Fortran code is that metaprogramming makes it easy to apply performance optimizations that are awkward in Fortran. We have metaprogramming macros (@evalpoly) that can easily inline polynomial evaluations, whereas the Fortran code makes function calls that loop over look-up tables of polynomial coefficients. Even greater speedups are possible for evaluating polynomials of complex arguments, where there is a fancy recurrence from Knuth that is almost impossible to use effectively without code generation. In principle, the Fortran authors could have done the same inlining and gotten similar performance, but the code would have been much more painful to write by hand. (They could even have written a program to generate Fortran code, but that is even more painful.)
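For the curious, a minimal sketch of the macro the quote refers to: Base's `@evalpoly` takes coefficients in ascending order and unrolls Horner's rule at macro-expansion time, so the generated code has no runtime loop and no coefficient table lookup (toy coefficients, Julia 1.x).

```julia
# Evaluate p(x) = 1 + 3x + 2x^2 at x = 2.0.
# @evalpoly expands to (roughly) the inlined Horner form
#   muladd(muladd(2.0, x, 3.0), x, 1.0)
# before compilation -- no loop over a coefficient array.
x = 2.0
y = @evalpoly(x, 1.0, 3.0, 2.0)
```

For complex `x`, the same macro instead emits the fancier recurrence the quote alludes to, which is exactly the kind of code generation that is painful to do by hand in Fortran.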
> I'd rather deal with Cython apache license than deal with this GPL stuff for commercial use.
The Julia core is MIT licensed. The language distribution is currently GPL licensed due to linked dependencies, but there has been a no-GPL build flag available for several releases. GPL (and other) dependencies are continually being removed, and there are only one or two (GPL) libraries left in the currently nightly build.
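For reference, the no-GPL build mentioned above is selected with a Make flag when compiling Julia from source (`USE_GPL_LIBS` per the Julia repo's build system; the exact set of excluded dependencies varies by version, so treat this as a sketch rather than full build instructions):

```shell
# Build a Julia binary that does not link GPL libraries
# (FFTW-backed FFTs, among others, are left out of such a build).
git clone https://github.com/JuliaLang/julia.git
cd julia
make USE_GPL_LIBS=0
```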