Like I said elsewhere, a safe subset of C++ sounds wonderful at first, but it also has drawbacks:
- If you enforce memory safety on all of your code, the number of libraries you can use will be severely limited. Imagine if you could only use C++14 libraries - how many C++14 libraries are out there? And I'm not sure there will be many libraries written in safe C++, at least for the next few years; the adoption rate for newer C++ standards (C++11, C++14) has generally not been satisfactory.
- If you only enforce memory safety on your own code, then you can use the existing ecosystem. But in that case, your program can still segfault because of the libraries you're using (sketch below). Admittedly this is also a concern in Rust when you call C libraries from Rust. The more safe code you use, the safer your program is. The problem is, if you choose this route, there's not much of an ecosystem advantage on C++'s side: you'd have to "reinvent the wheel" just as you would in Rust.
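To make the segfault point concrete, here's a minimal sketch in C++; `legacy_fill` is a hypothetical unchecked C API, not anything from the presentation:

```cpp
#include <cstddef>

// Legacy library function (unchecked): documented to require a buffer of
// at least `len` bytes, but nothing enforces that at compile time.
extern "C" void legacy_fill(char* buf, std::size_t len);

void my_checked_code() {
    char buf[16];
    // My side is verifiably fine, but if legacy_fill writes past `len`
    // internally (an off-by-one in code I don't control), the program
    // still has undefined behavior and can still segfault.
    legacy_fill(buf, sizeof buf);
}
```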
It all boils down to this: adding guaranteed memory safety to an unsafe language is a breaking change, and the problems caused by breaking changes are very hard to solve.
Rereading your comment, it feels like you've moved the goalposts and are being overly cynical. (This presentation essentially addresses your previous concern by providing interoperability between new safe code and existing code that doesn't have safety guarantees.)
These are valid concerns, but you're being very dismissive of the benefits
I think confining errors to libraries is generally a good strategy. In effect you are guaranteeing that you're not going to introduce any new unsafe code.
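A rough sketch of what I mean, assuming the `gsl::span` type from the GSL that the talk leans on (`safe_fill` and `legacy_fill` are illustrative names):

```cpp
#include <cstddef>
#include <gsl/span>

extern "C" void legacy_fill(char* buf, std::size_t len);  // unchecked C API

// The only place a raw pointer/length pair appears. Everything built on
// top of this passes gsl::span around and stays inside the checked
// subset, so no *new* unsafe code gets written.
void safe_fill(gsl::span<char> buf) {
    legacy_fill(buf.data(), static_cast<std::size_t>(buf.size()));
}
```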
It might uncover errors in the existing code, but that's a much smaller issue and one that fades with time (especially if you're talking about libraries that a lot of people are using).
Is this "good enough" for a brand new aircraft-control/heart-lung-machine system? Maybe not. Maybe then you should start from the ground up using either Rust or C++/GSL entirely. But I think for most use cases this is sufficient and brings huge immediate benefits now. And of course with time libraries will be updated to include safety guarantees (which for the most part seem like it wouldn't involve a ton of work)
This is more of an organic process that in some form preserves the millions of man-hours people have invested in C++, and less of a clean-slate solution. As I've said before, you gotta make a living now, in the current reality, with the current codebases and the current tools. You can't just judge languages in a vacuum.
> which for the most part seems like it wouldn't involve a ton of work
This is probably why our opinions differ. I believe rewriting existing libraries to be safe would be a tremendous amount of work. That estimate comes from my experience with Rust: safety guarantees hugely affect a library's interfaces. You cannot "just" gradually increase the safety of your library; it is basically a redesign, at least with what Rust currently provides. And the method suggested by Herb Sutter seems very similar to Rust's, so I expect the same churn to happen in the C++ ecosystem, effectively splitting it into two: safe libraries and unsafe ones. The sketch below shows the kind of interface change I mean.
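Illustrative only - `Cache` and `CheckedCache` are made-up names, and the types are just modern C++ stand-ins for what a checked subset would demand - but this is the shape of the churn:

```cpp
#include <optional>
#include <string_view>

// Pre-safety interface: ownership and lifetime live only in the docs.
struct Cache {
    const char* lookup(int key);   // who owns this? valid for how long?
};

// Safety-checked redesign: the answers move into the types, and every
// existing caller has to change along with them.
struct CheckedCache {
    // Non-owning view; a borrow checker would reject a caller that keeps
    // it alive across a later mutation of the cache.
    std::optional<std::string_view> lookup(int key) const;
};
```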
Let me rephrase myself again: if adding safety to C++ in a gradual manner were that easy, Rust would not have been invented in the first place. The Mozilla folks could have just used C++. There's a reason Rust had to exist, despite all the "reinventing the wheel".
I guess it's a matter of opinion, but I think you're wrong that you have to redesign stuff. Most of the guarantees in essence already exist b/c of best practices (like RAII) - they're just not guaranteed by a validating program.
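For example, code that already follows the idiom wouldn't need to change at all; here's a sketch (`Widget` and `make_widget` are made-up names):

```cpp
#include <memory>
#include <vector>

struct Widget { int id = 0; };

// Ownership is already expressed in the type; nothing to redesign here.
std::unique_ptr<Widget> make_widget() {
    return std::make_unique<Widget>();
}

void use() {
    auto w = make_widget();    // freed automatically at scope exit
    std::vector<int> v(100);   // same for this buffer
    // No new/delete, no raw owning pointers: a validating tool mostly has
    // to confirm that code sticks to this idiom, not force a rewrite.
}
```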
Don't diminish it, b/c the presented solution is non-obvious and non-trivial. If it were, it would have been done ages ago.
So either
A - The Mozilla folks may not have thought of this solution, or
B - At the time of Rust's development, C++ was stagnant and not evolving the way it is right now. There wasn't a lot of hope for fixing it back then. It's thanks to the work of several people that the C++ standards committee is now a fast-moving organization.
If Mozilla were considering starting the Rust project in 2016, I don't think they would have gone ahead with it.