Encryption is based on two main factors:
- prime numbers
- random numbers
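An RSA key is a good example for both: the key itself is built from large prime numbers, and the search for those primes is driven by random numbers from the system. A quick way to see that in action (OpenSSL is assumed to be installed):
openssl genrsa -out test.key 2048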
The Linux kernel generates randomness from hardware interrupts, e.g. from keyboard, mouse, disk or network I/O.
The main difference between /dev/random and /dev/urandom is that the former is a blocking device.
So /dev/random waits until the entropy pool is filled before returning random data.
/dev/urandom does not wait and therefore generates random data of lower quality.
Lower quality means that previously returned data is more likely to be repeated.
So the quality of the entropy pool has a direct impact on the quality of SSL/TLS and other block ciphers.
One of my small servers has quite a low entropy level:
cat /proc/sys/kernel/random/entropy_avail
129
One of my big (and busy) KVM servers does not have that problem:
cat /proc/sys/kernel/random/entropy_avail
4968
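If you want to keep an eye on the pool over a longer period, e.g. while keys are being generated, you can simply re-read the same file in a loop (watch is assumed to be available):
watch -n 1 cat /proc/sys/kernel/random/entropy_avail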
A level below 1000 is critical if you are using any OpenSSL-based encryption.
If your server is handling a lot of SSL traffic (and handshakes), as any webserver or mailserver does, two things can happen:
- The server waits until /dev/random is ready again
- The server switches to /dev/urandom or its own random generator (after a timeout)
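You can see the blocking behaviour yourself. This is only a quick test and it does consume entropy, so better not on a busy production box, and on newer kernels /dev/random may no longer block in the same way:
# may pause until the pool has gathered enough entropy
time head -c 512 /dev/random > /dev/null
# returns immediately, even with a nearly empty pool
time head -c 512 /dev/urandom > /dev/null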
But there are solutions for servers too - one of them is haveged.
It is a daemon that feeds the /dev/random pool on Linux using an adaptation of the HArdware Volatile Entropy Gathering and Expansion algorithm.
Installation is simple:
apt-get install haveged
Afterwards you have to check the default settings to ensure that enough bits are generated:
nano /etc/default/haveged
Content:
# Configuration file for haveged
# Options to pass to haveged:
# -w sets low entropy watermark (in bits)
DAEMON_ARGS="-w 2048"
Then start the daemon:
service haveged start
You should then set the daemon to autostart:
update-rc.d haveged defaults
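A quick check that haveged is actually doing its job is to look at the pool again once the daemon is running (the numbers will of course differ on your box):
service haveged status
cat /proc/sys/kernel/random/entropy_avail
# should now stay around the 2048 bit watermark or above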
There are - of course - better tests than cat to check if your entropy is ok:
apt-get install rng-tools
rngtest uses the FIPS 140-2 tests to check the entropy:
cat /dev/random | rngtest -c 1000
Output would be:
rngtest 2-unofficial-mt.14
Copyright (c) 2004 by Henrique de Moraes Holschuh
This is free software; see the source for copying conditions. There is NO warranty; not even for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.
rngtest: starting FIPS tests...
rngtest: bits received from input: 2000032
rngtest: FIPS 140-2 successes: 1000
rngtest: FIPS 140-2 failures: 0
rngtest: FIPS 140-2(2001-10-10) Monobit: 0
rngtest: FIPS 140-2(2001-10-10) Poker: 0
rngtest: FIPS 140-2(2001-10-10) Runs: 0
rngtest: FIPS 140-2(2001-10-10) Long run: 0
rngtest: FIPS 140-2(2001-10-10) Continuous run: 0
rngtest: input channel speed: (min=8.557; avg=17.635; max=24.236)Mibits/s
rngtest: FIPS tests speed: (min=68.610; avg=156.648; max=188.846)Mibits/s
rngtest: Program run time: 133849 microseconds
Out of 1000 runs there should not be more than 1 to 5 failures.
So check this line:
rngtest: FIPS 140-2 failures:
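If you only want that one line, you can filter for it directly - rngtest prints its statistics to stderr, so redirect it first:
cat /dev/random | rngtest -c 1000 2>&1 | grep failures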