Hacker News

My understanding is that implementing a PRNG in software results in a very small entropy pool. At the OS level, entropy can be collected from a vast number of sources, including ones an application has no access to, which is why the OS exposes its RNG to applications. Rolling your own also means maintaining that PRNG in your software. Basically, it's the same old dependency argument again - there is nothing inherently wrong with using the tools the environment provides rather than building your own, but it does mean those tools have to work properly. This is especially critical for cryptographically secure PRNGs - those are things you really do not want to be maintaining yourself if you can access a high-quality source of random data, but again, if the PRNG doesn't work, you're in deep trouble (see for example the broken RNG on the YubiKey 4). Hardware RNGs on the CPU itself were supposed to dramatically improve the state of random data provisioning, but bugs like this show the weakness of depending on the stack beneath you.


To be clear, you only need some random bytes to seed your cryptographic PRNG. These should of course be gathered from the OS, but after that you only need to reseed once in a blue moon. Of course you shouldn't write and maintain a CSPRNG yourself, but there are many widely used, maintained, and scrutinized libraries for this purpose.

For example, seeding ChaCha (with its 64-bit block counter) with 256 bits will give you 1 ZiB of output before the counter cycles. That should keep you going for a while.
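To make the idea concrete, here is a minimal pure-Python sketch of a ChaCha20 keystream used directly as a random-byte generator, seeded once with 256 bits from the OS. The class name `ChaChaRng` is my own, and this uses the RFC 8439 state layout (32-bit counter); it is illustrative only - in practice you would use one of the maintained libraries mentioned above.

```python
import os
import struct

MASK = 0xFFFFFFFF

def _rotl(x, n):
    return ((x << n) | (x >> (32 - n))) & MASK

def _quarter_round(s, a, b, c, d):
    # The four ARX steps of the ChaCha quarter round (RFC 8439).
    s[a] = (s[a] + s[b]) & MASK; s[d] = _rotl(s[d] ^ s[a], 16)
    s[c] = (s[c] + s[d]) & MASK; s[b] = _rotl(s[b] ^ s[c], 12)
    s[a] = (s[a] + s[b]) & MASK; s[d] = _rotl(s[d] ^ s[a], 8)
    s[c] = (s[c] + s[d]) & MASK; s[b] = _rotl(s[b] ^ s[c], 7)

def _chacha20_block(key_words, counter, nonce_words):
    # State: 4 constants ("expand 32-byte k"), 8 key words,
    # 1 counter word, 3 nonce words (RFC 8439 layout).
    state = [0x61707865, 0x3320646E, 0x79622D32, 0x6B206574,
             *key_words, counter, *nonce_words]
    w = list(state)
    for _ in range(10):  # 20 rounds = 10 column/diagonal double rounds
        _quarter_round(w, 0, 4, 8, 12)
        _quarter_round(w, 1, 5, 9, 13)
        _quarter_round(w, 2, 6, 10, 14)
        _quarter_round(w, 3, 7, 11, 15)
        _quarter_round(w, 0, 5, 10, 15)
        _quarter_round(w, 1, 6, 11, 12)
        _quarter_round(w, 2, 7, 8, 13)
        _quarter_round(w, 3, 4, 9, 14)
    return struct.pack("<16I", *((w[i] + state[i]) & MASK for i in range(16)))

class ChaChaRng:
    def __init__(self, seed=None):
        # 256-bit seed, gathered from the OS unless supplied explicitly.
        seed = seed if seed is not None else os.urandom(32)
        self._key = struct.unpack("<8I", seed)
        self._nonce = (0, 0, 0)
        self._counter = 0  # a real implementation would handle counter exhaustion

    def random_bytes(self, n):
        out = bytearray()
        while len(out) < n:
            out += _chacha20_block(self._key, self._counter, self._nonce)
            self._counter += 1
        return bytes(out[:n])
```

Usage is just `ChaChaRng().random_bytes(n)`: the keystream itself is the random output, which is exactly the "stream cipher encrypting zeroes" construction discussed further down the thread.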


> My understanding is that implementing a PRNG in software results in a very small entropy pool.

A lot of PRNGs are now implemented as the output of a stream cipher, or of a block cipher in counter mode:

* https://en.wikipedia.org/wiki/Fortuna_(PRNG)

So 128 bits of seed entropy is all that is needed to get going.

Re-key every so often to ensure forward security in case there is a kernel-level compromise.

With the AES-NI instructions present in most modern CPUs, throughput of several GB/s can be achieved.
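The re-keying idea above can be sketched in a few lines. This is not Fortuna itself - it uses SHA-256 from the standard library as a stand-in for the cipher a real design would use, and the class name, tag strings, and re-key interval are all my own choices - but it shows the ratchet: periodically derive a fresh key and discard the old one, so a later compromise can't be rolled back to recover earlier output.

```python
import hashlib
import os

class RatchetRng:
    REKEY_INTERVAL = 1024  # blocks between ratchet steps (arbitrary choice)

    def __init__(self, seed=None):
        # 256-bit seed from the OS unless supplied explicitly.
        self._key = seed if seed is not None else os.urandom(32)
        self._counter = 0

    def _block(self):
        # Output block: hash of (key, domain tag, counter).
        out = hashlib.sha256(
            self._key + b"out" + self._counter.to_bytes(8, "little")
        ).digest()
        self._counter += 1
        if self._counter % self.REKEY_INTERVAL == 0:
            # Ratchet: replace the key with a one-way derivation of itself.
            # An attacker who captures the current key cannot invert the
            # hash to learn the old key, hence forward security.
            self._key = hashlib.sha256(self._key + b"rekey").digest()
        return out

    def random_bytes(self, n):
        out = bytearray()
        while len(out) < n:
            out += self._block()
        return bytes(out[:n])
```

The same structure applies with AES-CTR or ChaCha in place of the hash; the essential property is only that the key update is one-way.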


> This is especially critical for cryptographically-secure PRNGs - those are things you really do not want to be maintaining yourself

My conclusion would be exactly the opposite: If your RNG is so important that it has to be cryptographically secure, you owe it to your users to put in the time and effort of maintaining a proper implementation yourself, or at the very least use an open source library that provides this functionality in software. Otherwise you're always going to be at the mercy of a potentially misbehaving environment.

In terms of entropy, you don't really need to "maintain a pool" for CSPRNGs. You either have enough entropy to seed it with, or you don't. Once it is properly seeded, you can squeeze as many random bits out of it as you want (or at least, as many as anyone would ever reasonably need). It's really no different from a stream cipher: the key is the seed, and you're just encrypting zeroes. You don't need to suddenly get another randomly generated key after encrypting 100 MiB to encrypt the next 100 MiB securely.

Another great thing about entropy is that you can't reduce it by adding more input (assuming the pool mixes its inputs with a cryptographic hash). Which is why you really don't have to spend any time thinking about whether a particular entropy source is well behaved or uniformly distributed or anything like that. You just have to be certain that, overall, you have enough entropy that nobody can guess the entire seed. So anything the OS can give you? Dump it in there. Any kind of user interaction? Dump it in there. The time? CPU jitter? Network jitter? Just put it all in there. 100 MiB of zeroes? You know what, why not, put it on top, because you literally can't make it worse, only better.
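The "just dump it all in" pooling described above can be sketched with a stdlib hash. The particular sources mixed in here are illustrative choices, not a recommendation; the point is that appending a low-entropy (or even all-zero) input cannot reduce the entropy the hash has already absorbed.

```python
import hashlib
import os
import time

pool = hashlib.sha256()
pool.update(os.urandom(32))                         # OS randomness
pool.update(time.time_ns().to_bytes(8, "little"))   # the clock
pool.update(b"\x00" * 1024)                         # harmless: zeroes don't subtract entropy
seed = pool.digest()  # 256-bit seed, ready to feed into a CSPRNG
```

As long as at least one input was unguessable, the digest is unguessable; the worthless inputs cost nothing beyond the hashing time.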



