Hacker News

In proof languages where you want a proof that a function will succeed without running it, knowing that it doesn't diverge is important.

But if you're going to do the calculation, there's no difference between an infinite loop and a loop that finishes but takes longer than you're willing to wait. Either way, it looks like the program hung.

So in practical programming languages, I'm not sure that the "diverge" effect is useful? You still need to kill it if it hangs. If it will take a long time, returning some kind of progress indicator would be nice. Maybe writing it as an iterator would help with that?
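The iterator idea might look like this sketch in Haskell (the Step type, collatzSteps, and runFor are illustrative names of my own, not from any library): each intermediate value is yielded before the next step, so a caller can report progress or give up after a bounded number of steps.

```haskell
-- Hypothetical sketch: a long-running computation exposed as a stream
-- of intermediate states instead of a single opaque call.
data Step s a = Done a | More s (Step s a)

-- Illustrative example: each intermediate Collatz value is yielded
-- before the next iteration, so the caller sees progress as it happens.
collatzSteps :: Integer -> Step Integer Integer
collatzSteps 1 = Done 1
collatzSteps n
  | even n    = More n (collatzSteps (n `div` 2))
  | otherwise = More n (collatzSteps (3 * n + 1))

-- Observe at most k steps; Nothing means "still going: keep waiting,
-- show progress, or kill it", which is the caller's decision now.
runFor :: Int -> Step s a -> Maybe a
runFor _ (Done a)      = Just a
runFor 0 (More _ _)    = Nothing
runFor k (More _ rest) = runFor (k - 1) rest
```

Because the stream is lazy, the computation only advances as far as the caller is willing to observe.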



It may depend on what you mean by 'practical', but in a Haskell-like language divergence (as opposed to induction) more or less takes the form "let x = x in x", with varying degrees of ceremony. If you can prove to the compiler that no such thing appears in your fragment of code, it can omit black-hole updates and the corresponding checks during garbage collection, for a small performance boost (possibly negligible, or nil, if divergence still exists elsewhere in the language anyway).
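A minimal illustration of the distinction (GHC's actual black-hole machinery lives in the runtime system; this just shows the divergent value next to a loop that is infinite but still productive):

```haskell
-- The canonical divergent value: forcing it never produces anything.
-- GHC's runtime marks thunks under evaluation as "black holes", which
-- is how it can sometimes report <<loop>> rather than spinning forever.
bottom :: a
bottom = let x = x in x

-- Also an infinite definition, but productive: each element arrives
-- after finite work, so any finite observation of it terminates.
ones :: [Int]
ones = 1 : ones
```

take 3 ones evaluates to [1,1,1] just fine, while forcing bottom never yields a value; a totality checker has to distinguish these two kinds of "infinite".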

As you suggest, explicitly adding a provably-decreasing value like a maximum iteration limit lets you show progress, give up at a "reasonable" time, and avoid divergence. This is how unlifted programming languages permit general recursion - that, and allowing coinductive types to merely prove they're productive within a finite time bound (e.g., a stream of values only needs to show it produces the next value eventually, not that it produces all values eventually).
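The iteration-limit idea is often called the "fuel" pattern, and can be sketched like this (collatzFuel and nats are hypothetical names of my own): threading an explicit, structurally decreasing budget turns general recursion into something a totality checker can accept, while a productive stream only has to deliver its next element in finite time.

```haskell
-- "Fuel" idiom: the first argument strictly decreases on every
-- recursive call, so termination is immediate by induction on it.
collatzFuel :: Int -> Integer -> Maybe Integer
collatzFuel 0 _ = Nothing            -- budget exhausted: give up
collatzFuel _ 1 = Just 1
collatzFuel fuel n
  | even n    = collatzFuel (fuel - 1) (n `div` 2)
  | otherwise = collatzFuel (fuel - 1) (3 * n + 1)

-- Productivity: nats never terminates as a whole, but each next
-- element is available after a bounded amount of work.
nats :: [Integer]
nats = go 0 where go n = n : go (n + 1)
```

Proof assistants like Agda and Idris accept definitions of both shapes: the first by a decreasing measure, the second by a productivity check on corecursion.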



