This is amazing! It might be a personal feeling, but my opinion is that Facebook is SO much better than Google at delivering open source libraries that people want, and supporting them in the best way.
React vs Angular, PyTorch vs TensorFlow. These are just two examples among many where the Facebook framework arrives on the market a bit later but is then supported awesomely and improved continuously, while the Google framework arrives earlier but then becomes a hot mess of non-backward-compatible upgrades and deprecations...
My “loyalty” to Facebook open source libraries just keeps growing.
Not to dismiss anything you mentioned, but looking at the big picture, Facebook has also published things that went awry or never really gained adoption: Flow, Buck, and Hack, to name a few.
I mean, the cool things are cool (even if I have a really hard time with Facebook as a company) but I think it's a bit of a stretch to say Facebook has "the recipe".
FB has thrown a lot of random stuff over the wall that they use internally and thought might be useful to other people. I think Hack and Buck fit into this category. Facebook will maintain them forever, because they are core parts of internal infrastructure. Their value to Facebook is completely independent of wider industry adoption.
Whereas PyTorch is intended, from the ground up, to be a widely useful project, and the dev team weighs open-source issues at least as much as internal ones.
(Full disclosure: I used to work at Facebook, including, briefly, on PyTorch)
I actually have been burnt by Flow, so I agree with you; I’m not saying they have “the recipe”.
But my subjective point of view is: Google project = “Meh, this big opinionated framework will probably be abandoned in two years and all those poor folks who adopted it will have to rewrite everything”, while Facebook project = “Wow, they just took the best of the state of the art and packaged it in a simple reusable library that will be supported decently”.
Not a big fan of Facebook as a company, but as an open source contributor they are among the best in my book.
Isn’t part of that caused by Facebook actually drinking their own poison?
Part of what gave AWS such a head start is that Amazon actually built it for themselves. Google just throws random tech around, but most of their own tools stay internal.
Being a little slower to release allows you to learn from others’ mistakes. Python tends to follow this philosophy when adopting language features: it is usually not the first to introduce something, but when it does, the feature tends to be very polished. For long-term maintainability and adoption, this matters tremendously.
Maybe it’s because I haven’t got much experience with PyTorch, but I find TensorFlow’s API much simpler to understand. Do you think putting time into learning PyTorch will change my mind?
I have been using React Native for 3 years and it is amazing! To my knowledge there is no equivalent platform allowing you to develop cross-platform apps in TypeScript.
My React Native app does some advanced stuff like displaying 3D graphics with OpenGL, running separate threads, interacting with drones, and running neural nets (hence why I am happy about this news!). I still have to figure out how to upgrade to 0.60 while still supporting threads, though.
While it was not always super easy to configure all this, it has stayed unbroken for all this time, so again this is something I am thankful for.
Well, that wasn't my experience. My experience is that each new release seriously broke my project, that React Native on Android was in a sad state for a long time, that scrolling-list and image-loading performance was bad and had to be replaced with custom versions, that the attempts to write the Navigator in JS were bad and janky and you invariably had to resort to native, platform-dependent ones, and on and on. The migration tool frequently failed for me.
And I'm not even mentioning the sorry state of third-party components. Something as simple as a map component or a Bluetooth component would break on every release.
Oh, and then there's Babel choking on large complex projects in hard to debug ways.
I agree that upgrading versions in React Native is not as simple as incrementing a number in package.json.
My advice in your case would be to find the versions that work and then stick to them.
Only update when needed and treat migrations seriously (in a separate branch, migrate, test everything and only merge once everything works perfectly).
That’s what we do, and we did roll back several migrations because of various problems. But given our complex stack, I think this is fair enough.
Yes yes yes! I've been anticipating this for so long.
However, I'm a bit skeptical about doing quantization after training; in my experience you have to do quantization-aware training for there not to be a large accuracy drop. I guess it works, though, otherwise they wouldn't have released it?
https://arxiv.org/abs/1906.04721 is an example of a paper where they perform data-free quantization after training (using batch-norm parameters to get information about the data distribution) without a large accuracy drop.
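For anyone who wants to try it, here is a minimal sketch of what I understand the post-training path looks like with the new torch.quantization API (toy model, dynamic quantization of Linear layers only; details may differ in your setup):

    import torch

    # toy float model; any nn.Module with Linear layers works the same way
    model = torch.nn.Sequential(
        torch.nn.Linear(128, 64),
        torch.nn.ReLU(),
        torch.nn.Linear(64, 10),
    )
    model.eval()

    # post-training dynamic quantization: weights are stored as int8 and
    # activations are quantized on the fly, no quantization-aware training pass
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )

    out = quantized(torch.randn(1, 128))  # inference as usual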
Does anyone else have to maintain backend PyTorch-based services? Is it just me or is it a complete mess?
Members of my team have spent literal months tracking down memory leaks, the performance of these services is always sub-par compared to TensorFlow-based ones, and the less said about the atrocious memory/CPU usage the better.
What's the advantage of using PyTorch when you have things like TensorFlow Serving ready to productionize any model with ease?
As far as I know, there is no way to use the Neural Engine without going through Core ML. This does not go through Core ML, hence it can't use Neural Engine hardware acceleration.
This is a common trend for being second to market: TF 2.0 was created to compete directly with PyTorch's pythonic implementation (Keras-based, eager execution). Facebook, at least with PyTorch, has been delivering a quality product. Although for us running production pipelines TF is still ahead in many areas (GPU and TPU support, TensorRT, TFX, and other pipeline tools), I can see PyTorch catching up over the next couple of years, by which point I predict many companies will be running serious, advanced workflows on it and we may be able to see a winner.
In this experimental release with prebuilt binaries it’s about 5 MB per architecture. This includes all operators for inference (that is, forward only). We’re working on selective compilation so that you can build a smaller bundle with only the subset of ops that you use. With that, for common CNNs it should get into the 1-2 MB range or even smaller.
It's non-trivial; certain models like GRUs have fundamentally different implementations, so they are not cross-compatible. TF also has no incentive to merge any PR back into its codebase for compatibility, and dragged its feet for months. I spent a long time investigating this and hit brick wall after brick wall.
Maven is simple, works extremely well, and behaves pretty sanely. IDE integration is amazing, and unless you write a build/deploy pipeline you don't even need to call a single mvn command. What do you not like about it?
It's supposed to go through their JIT first, so there's no Python running on embedded devices. Of course, this means if your model can't easily be made compatible with the JIT, you can't use PyTorch Mobile. But that's not surprising.
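Roughly, that flow looks like the sketch below (assuming the announced trace-then-save workflow; the torchvision model is just a stand-in for your own):

    import torch
    import torchvision

    # build and freeze a regular float model on the desktop side
    model = torchvision.models.mobilenet_v2(pretrained=True)
    model.eval()

    # trace it into TorchScript; the result runs without a Python interpreter
    example = torch.rand(1, 3, 224, 224)  # dummy input for tracing
    traced = torch.jit.trace(model, example)

    # the saved .pt file is what the Android/iOS runtime loads and executes
    traced.save("mobilenet_v2.pt")

Models with data-dependent control flow need torch.jit.script instead of tracing, which is exactly where the “can’t easily be made compatible” part bites.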