Fig. 5.2 Correctness check using the decryption (plaintext → encryption → ciphertext → decryption → check)
5.3 DMR Approaches Specific to Block Ciphers
Dual modular redundancy (DMR) approaches are the most straightforward way to implement error detection, but they usually double the area cost or halve the performance of hardware implementations. For block ciphers, however, the situation is slightly different: their structure often allows optimizations. Depending on the block cipher and the mode of operation, it is sometimes even possible to keep the overhead close to zero. Furthermore, studies have shown that such approaches are the most efficient ones for hardware implementations if a realistic fault model is assumed [266].
5.3.1 Using the Inverse
In the previous section, we argued that injecting the same fault twice is always sufficient to overcome space-redundant approaches, and that permanent errors defeat time-redundant countermeasures. In contrast to algorithms in general, a block cipher implements a bijection. Thus, it is possible to feed the result into the algorithm's inverse, in our case the decryption, and to check the decrypted ciphertext against the original plaintext. This is depicted in Fig. 5.2. For time-redundant detection schemes, this approach has a significant advantage over encrypting twice and comparing the ciphertexts: permanent errors are detected with high probability.
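The check in Fig. 5.2 can be sketched as follows. This is a minimal toy model, not a production implementation: a simple XOR "cipher" stands in for a real block cipher, and the names `encrypt`, `decrypt`, and `checked_encrypt` are illustrative assumptions rather than an API from the text.

```python
def encrypt(key: bytes, block: bytes) -> bytes:
    # Toy bijection: XOR with the key (a real design would use AES etc.).
    return bytes(b ^ k for b, k in zip(block, key))

def decrypt(key: bytes, block: bytes) -> bytes:
    # XOR is its own inverse, so decryption equals encryption here.
    return bytes(b ^ k for b, k in zip(block, key))

def checked_encrypt(key: bytes, plaintext: bytes,
                    fault_mask: bytes = b"") -> bytes:
    ciphertext = encrypt(key, plaintext)
    if fault_mask:
        # Model a transient fault flipping bits of the ciphertext register
        # (fault_mask is padded with zero bytes to the block length).
        ciphertext = bytes(c ^ f for c, f in
                           zip(ciphertext, fault_mask.ljust(len(ciphertext), b"\x00")))
    # Correctness check (Fig. 5.2): decrypt and compare with the input.
    if decrypt(key, ciphertext) != plaintext:
        raise ValueError("fault detected")
    return ciphertext
```

Because the check re-runs the inverse on the actual output, a permanent error in the datapath corrupts both directions differently and is caught with high probability, unlike a plain encrypt-twice comparison.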
For space-redundant detection schemes, whether this scheme provides higher security than standard DMR schemes depends on the fault model. This is because, for block ciphers, the decryption is usually composed of the inverse round functions in reverse order. Thus, at a certain point during the decryption, the only error present is the very one induced during the encryption. If it is possible to reverse the fault injection at that point, the error will not be detected. Note that this is specific to block ciphers and is not typically true for public key cryptographic algorithms, e.g. RSA. In general, reversing an error is expected to be more difficult than injecting the same fault twice. However, for an adversary who can induce random byte faults, the probability of correcting the error is the same as that of injecting the same error again.
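The cancellation effect described above can be demonstrated with the same kind of toy model. Again, a plain XOR cipher is an illustrative stand-in (assumed for this sketch, not taken from the text): if the adversary reinjects the identical byte fault on the decryption path, the two errors cancel and the check passes.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encrypt(key: bytes, block: bytes) -> bytes:
    # Toy stand-in for a block cipher round structure.
    return xor_bytes(block, key)

decrypt = encrypt  # XOR encryption is an involution

key = bytes(range(16))
plaintext = bytes(16)
fault = b"\x40" + bytes(15)  # a single random byte fault

# A fault injected during encryption corrupts the ciphertext,
# and the decrypt-and-compare check detects it:
faulty_ct = xor_bytes(encrypt(key, plaintext), fault)
assert decrypt(key, faulty_ct) != plaintext

# Injecting the very same fault again on the decryption path cancels
# the first error, so the comparison succeeds: the fault goes undetected.
cancelled = xor_bytes(faulty_ct, fault)
assert decrypt(key, cancelled) == plaintext
```

In a real cipher the adversary must hit the intermediate state at exactly the point where only the original error is present, which for random byte faults succeeds with the same probability as injecting the identical fault twice.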
For time-redundant approaches (as usually pursued in software implementations), the performance reduction of encrypting twice and that of encrypting and then decrypting are nearly the same.