dimensional space, in order to highlight their similarities and differences. In the
SCA techniques portfolio, PCA has already proved its efficiency in Template
attacks [23]. Basically, Template attacks are considered very powerful since they
can break cryptographic implementations whose security depends on the assumption
that an attacker cannot obtain more than one, or a limited number of,
side-channel traces. Moreover, these attacks require that the attacker has access to
a clone device on which he can perform trials in order to train the attack. As described in [23],
PCA improves the class of Template attacks by pre-processing the leakage traces
before performing the attack on the real cryptographic device. Indeed, in the pre-
processing phase the attacker builds templates in order to profile the clone device.
Then, those templates are used to mount an attack on the real device. Our attack
uses PCA not as a pre-processing tool but as a distinguisher. Moreover, it follows
the usual steps of differential power analysis (DoM [20], DPA [5] or CPA [6]): it
consists of a single phase and does not require a clone device for profiling, which
makes the attacker's task easier.
The rest of the paper is organized as follows. First, Section 2 gives the
elementary background required to understand the process of PCA. Second, this
background knowledge is exploited in Section 3 to outline how PCA can be used
to mount an efficient attack; this section goes through the different steps needed
to perform the FPCA. Section 4 is devoted to experiments on unprotected and
protected DES implementations, and highlights the efficiency of FPCA through a
comparative analysis with existing attacks (DoM, DPA, CPA, VPA). Conclusions
and perspectives are given in Section 5.
2 Principal Component Analysis: Background Knowledge
Consider a data set of M quantitative variables describing N samples, arranged respec-
tively in rows and columns. The goal of PCA is to ensure a better representation
of the N samples by describing the data set with a smaller number M' of new
variables. Technically speaking, PCA proposes to seek a new representation of
the N samples in a subspace of the initial space by defining M' new variables
which are linear combinations of the M original variables, and that are called
principal components. Generally speaking, reducing the number of variables used
to describe data will lead to some loss of information. PCA operates in a way
that makes this loss minimal. For PCA to work properly, the data set should
be centred. PCA starts by computing the covariance matrix of the data set in
order to find the eigenvectors and eigenvalues which permit the capture of the
existing dispersion in the variables. In other words, PCA performs a change of
orthogonal reference frame, the original variables being replaced by the principal
components, which are fully characterized by the pairs of eigenvectors and eigen-
values. More importantly, these pairs reveal the hidden dynamics of the data set,
allowing the attacker to discern which dynamics are important and which are
merely redundant. The first component can be
expected to account for a fairly large amount of the total variance, and each
succeeding component will account for a progressively smaller amount.
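
To make the computation just described concrete, the following sketch implements the generic PCA steps on a data matrix arranged as above (M variables in rows, N samples in columns): centring, covariance estimation, eigendecomposition and projection onto the first M' principal components. It is a purely illustrative Python/NumPy sketch, not part of the FPCA attack itself; the names pca, data and n_components are ours.

    import numpy as np

    def pca(data, n_components):
        # 'data' is an M x N array: M variables in rows, N samples in columns.
        # Centre each variable (row) so that it has zero mean over the samples.
        centred = data - data.mean(axis=1, keepdims=True)
        # Covariance matrix of the M variables (M x M).
        cov = np.cov(centred)
        # Eigendecomposition of the symmetric covariance matrix.
        eigenvalues, eigenvectors = np.linalg.eigh(cov)
        # Sort the eigenpairs by decreasing eigenvalue: the first principal
        # component captures the largest share of the total variance.
        order = np.argsort(eigenvalues)[::-1]
        eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]
        # The M' = n_components new variables are linear combinations of the
        # M original ones, given by the leading eigenvectors; project the samples.
        components = eigenvectors[:, :n_components]  # M x M'
        scores = components.T @ centred              # M' x N
        explained = eigenvalues[:n_components] / eigenvalues.sum()
        return scores, explained

The ratio returned in explained shows directly how much of the total dispersion is retained by the first M' components, i.e. how small the loss of information mentioned above actually is.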
 