I implemented a prototype of the algorithm in that paper while exploring exhaustiveness checking for pattern matching in Dart. I found it pretty easy to understand, but also really easy to get it to generate combinatorially huge spaces. Some careful memoization and deduplication helped, but even so I never got the performance to a state I felt comfortable with.
Instead, I went with Luc Maranget's classic approach and figured out a way to adapt it to a language with subtyping (with a ton of work from Johnni Winther on all of the hard cases around generics).
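To give a flavor of that approach: the core of Maranget's algorithm is a "usefulness" check over a matrix of pattern rows, and exhaustiveness falls out by asking whether an all-wildcard row is still useful against the cases you already have. Here's a toy sketch in Dart, restricted to flat tuples of enum constants with no subtyping, nesting, or generics (which is where all the hard work went); the representation (null for `_`, integer indices for enum constants) and the names are mine for illustration, not anything from the real implementation:

    // Toy Maranget-style usefulness check. One row per case; each column is
    // either null (a `_` wildcard) or the index of an enum constant.
    typedef Row = List<int?>;

    /// Is there a value that [q] matches but no row in [rows] does?
    /// `sizes[i]` is the number of constants in column i's enum.
    bool useful(List<Row> rows, Row q, List<int> sizes) {
      if (q.isEmpty) return rows.isEmpty; // No columns left: useful iff no row survived.

      final head = q.first;
      final tailSizes = sizes.sublist(1);

      // Keep rows compatible with constant c in column 0, then drop column 0.
      List<Row> specialize(int c) => [
            for (final row in rows)
              if (row.first == null || row.first == c) row.sublist(1)
          ];

      // q starts with a concrete constant: only that branch matters.
      if (head != null) return useful(specialize(head), q.sublist(1), tailSizes);

      // q starts with `_`. If the first column mentions every constant of its
      // enum, `_` is useful iff it's useful under some specialization.
      final seen = <int>{
        for (final row in rows)
          if (row.first != null) row.first!
      };
      if (seen.length == sizes.first) {
        return seen.any((c) => useful(specialize(c), q.sublist(1), tailSizes));
      }

      // Otherwise fall through to the "default" matrix: rows starting with `_`.
      final defaults = [
        for (final row in rows)
          if (row.first == null) row.sublist(1)
      ];
      return useful(defaults, q.sublist(1), tailSizes);
    }

    /// A switch is exhaustive iff an all-wildcard row is *not* useful.
    bool exhaustive(List<Row> rows, List<int> sizes) =>
        !useful(rows, List<int?>.filled(sizes.length, null), sizes);

    void main() {
      // Cases (red, _), (black, red), (black, black) over a 2-value enum:
      // exhaustive, so this prints true.
      print(exhaustive([
        [0, null],
        [1, 0],
        [1, 1],
      ], [2, 2]));
    }

The sketch only handles flat enum tuples; the real work in the Dart version was making that kind of column-by-column decomposition behave with sealed class hierarchies and generics.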
The performance (in the prototype!) was dramatically better. You can always make pattern matching go combinatorial, but I haven't seen any real-world switches get particularly slow with our approach yet, and we have some fairly large tests of matching on tuples of enums.
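For concreteness, matching on tuples of enums means switches over records of enum values, like this toy example (the enum and function names are made up for illustration). The checker has to prove that a few cases with or-patterns cover every combination without a catch-all, and an n-tuple of k-value enums has k^n concrete combinations, which is where things can go combinatorial:

    enum Color { red, black }

    // Exhaustive with no catch-all: the checker must see that these three
    // cases cover all four (Color, Color) combinations.
    bool sameColor((Color, Color) pair) => switch (pair) {
      (Color.red, Color.red) || (Color.black, Color.black) => true,
      (Color.red, Color.black) => false,
      (Color.black, Color.red) => false,
    };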