from E_1. Huffman coding compresses all DCT coefficients D_k(i, j), including those modified, and stores the authenticated image as a JPEG file on a disk.
The first three steps, which are the same as Steps 1, 2, and 4 of the invertible authentication, are omitted from this description of the integrity verification. The fourth step runs the context-free lossless arithmetic decoder on the LSBs of the coefficients visited during the same random walk as in the embedding process. Once the length of the decompressed bit-stream reaches B + |H|, where |H| denotes the hash length, the procedure stops. The decompressed bit-stream is separated into the LSBs of the visited DCT coefficients and the extracted candidate hash H'. The retrieved LSBs replace the LSBs of all visited coefficients to restore the original quantized DCT coefficients
d_k(i, j), 0 ≤ i, j ≤ 7, k = 1, 2, ..., B. For authentication, the hash H of all retrieved quantized DCT coefficients is calculated and compared with H'. If they match, the JPEG file is authentic and the original JPEG image is obtained. If H ≠ H', the image is deemed non-authentic. Experimental results show that the distortion increases with the compression ratio.
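The verification procedure can be made concrete with a short sketch. This is not the authors' implementation: SHA-256 stands in for the unspecified hash, zlib for the context-free lossless arithmetic coder, and a seeded Python random walk for the key-dependent visiting order; the function names and the coefficient serialization are illustrative assumptions.

```python
import hashlib
import random
import zlib

HASH_BITS = 256  # assumption: SHA-256; the scheme does not fix a particular hash


def _pack(bits):
    """Pack a list of 0/1 integers into bytes, most significant bit first."""
    return bytes(
        int("".join(map(str, bits[i:i + 8])).ljust(8, "0"), 2)
        for i in range(0, len(bits), 8)
    )


def _unpack(data, nbits):
    """Unpack the first nbits bits of a byte string into a list of 0/1 ints."""
    return [(data[i // 8] >> (7 - i % 8)) & 1 for i in range(nbits)]


def _coeff_hash(coeffs):
    """Hash over all quantized DCT coefficients, returned as a bit list."""
    digest = hashlib.sha256(",".join(map(str, coeffs)).encode()).digest()
    return _unpack(digest, HASH_BITS)


def embed(coeffs, key, B):
    """Embedding side: compress the original LSBs of the B visited coefficients
    together with the image hash and write the result back into those LSBs."""
    coeffs = list(coeffs)
    walk = random.Random(key).sample(range(len(coeffs)), B)   # random walk
    payload = [coeffs[i] & 1 for i in walk] + _coeff_hash(coeffs)
    comp = zlib.compress(_pack(payload), 9)          # stand-in arithmetic coder
    assert 8 * len(comp) <= B, "LSBs not compressible enough for this payload"
    stream = _unpack(comp.ljust((B + 7) // 8, b"\x00"), B)
    for idx, bit in zip(walk, stream):
        coeffs[idx] = (coeffs[idx] & ~1) | bit
    return coeffs


def verify_and_restore(coeffs, key, B):
    """Verification side: re-extract the payload, restore the original LSBs,
    and compare the recomputed hash H with the extracted candidate H'."""
    coeffs = list(coeffs)
    walk = random.Random(key).sample(range(len(coeffs)), B)   # same walk
    stream = _pack([coeffs[i] & 1 for i in walk])
    payload = _unpack(zlib.decompressobj().decompress(stream), B + HASH_BITS)
    original_lsbs, extracted_hash = payload[:B], payload[B:]
    for idx, bit in zip(walk, original_lsbs):
        coeffs[idx] = (coeffs[idx] & ~1) | bit       # restore original d_k(i, j)
    return coeffs, _coeff_hash(coeffs) == extracted_hash


# Toy usage: coefficients whose LSBs are mostly 0, hence losslessly compressible.
original = [random.choice([-2, 0, 2, 4]) for _ in range(4000)]
marked = embed(original, key="secret", B=3000)
restored, authentic = verify_and_restore(marked, key="secret", B=3000)
print(authentic, restored == original)   # expected: True True
```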
Lossless Bit-Plane Compression in the IDWT Domain
Paper [6] embeds data into the middle bit-planes of the integer wavelet transform (IDWT) coefficients and applies histogram modification as a preprocessing step. This is done to prevent the overflow and underflow problems that modification of the wavelet coefficients would otherwise cause. The middle and high bit-planes of the IDWT coefficients exhibit a larger bias between binary 1s and 0s than the corresponding bit-planes in the spatial domain. Owing to this larger bias, those bit-planes can be losslessly compressed to accommodate more hidden data. The method is able to imperceptibly embed about 5 kbits to 94 kbits into a 512 × 512 × 8 grayscale image, which is much more than existing techniques achieve.
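The capacity argument, that the middle bit-planes of the integer wavelet coefficients are strongly biased and therefore compress well, can be illustrated with a small sketch. This is not the algorithm of [6]: a one-level integer Haar (S) transform stands in for the integer wavelet transform used there, the histogram-modification preprocessing is omitted, and zlib replaces a dedicated lossless bit-plane coder.

```python
import zlib
import numpy as np


def integer_haar_2d(img):
    """One-level 2-D integer Haar (S) transform via lifting; a stand-in for the
    integer wavelet transform of [6]. Returns the LL, LH, HL, HH subbands."""
    x = img.astype(np.int64)

    def step(a, axis):
        even = np.take(a, np.arange(0, a.shape[axis], 2), axis=axis)
        odd = np.take(a, np.arange(1, a.shape[axis], 2), axis=axis)
        detail = odd - even                 # prediction step (integer)
        approx = even + detail // 2         # update step (integer)
        return approx, detail

    low, high = step(x, axis=1)             # transform along rows
    ll, lh = step(low, axis=0)              # then along columns
    hl, hh = step(high, axis=0)
    return ll, lh, hl, hh


def bit_plane_capacity(subband, plane):
    """Measure the 0/1 bias of one bit-plane and estimate how many bits of
    hidden data its lossless compression would leave room for."""
    bits = ((np.abs(subband) >> plane) & 1).astype(np.uint8).ravel()
    bias = abs(float(bits.mean()) - 0.5)              # 0.5 = maximally biased
    compressed_bits = len(zlib.compress(np.packbits(bits).tobytes(), 9)) * 8
    spare_bits = max(bits.size - compressed_bits, 0)  # room left for payload
    return bias, spare_bits


# Example: the 2nd bit-plane of the HH subband of a smooth synthetic image.
img = (np.add.outer(np.arange(256), np.arange(256)) // 2).astype(np.uint8)
_, _, _, hh = integer_haar_2d(img)
print(bit_plane_capacity(hh, plane=2))
```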
Lossless RS Data-Embedding Method
Goljan et al. [7] presented the first lossless marking technique suitable for data embedding. They generated losslessly compressible bit-streams using the concept of invertible noise adding, or flipping, together with special discrimination (prediction) functions applied to small groups of pixels. The new approach is much more efficient, allowing a large payload with minimal, invertible distortion.
The details are as follows. The pixels of an M × N image are partitioned into non-overlapping groups of n adjacent pixels (x_1, x_2, ..., x_n); for instance, a group could be a horizontal block of four consecutive pixels. A discrimination function f is established that assigns a real number f(x_1, x_2, ..., x_n) ∈ R to each pixel group G = (x_1, x_2, ..., x_n).
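To make the grouping and the discrimination function concrete, a small sketch follows. The particular choice of f (the sum of absolute differences between neighbouring pixels), the LSB-flipping operation F, and the classification of groups as regular (R), singular (S), or unusable (U) follow the usual presentation of the RS method; the pixel values and function names are illustrative.

```python
def discrimination(group):
    """Discrimination function f: sum of absolute differences between
    neighbouring pixels; a larger f indicates a 'noisier' group."""
    return sum(abs(b - a) for a, b in zip(group, group[1:]))


def flip(pixel):
    """Invertible flipping F, here the LSB flip 0<->1, 2<->3, ...;
    applying it twice restores the original pixel value."""
    return pixel ^ 1


def classify(group):
    """Classify a group as regular (R), singular (S), or unusable (U) by
    comparing f before and after flipping every pixel in the group."""
    before = discrimination(group)
    after = discrimination([flip(p) for p in group])
    if after > before:
        return "R"
    if after < before:
        return "S"
    return "U"


def partition(pixels, n=4):
    """Partition a pixel sequence into non-overlapping groups of n pixels."""
    return [pixels[i:i + n] for i in range(0, len(pixels) - n + 1, n)]


# Example: two horizontal groups of four consecutive pixels.
row = [165, 168, 167, 167, 100, 103, 102, 105]
print([classify(g) for g in partition(row)])   # -> ['R', 'S']
```

Because F is a permutation of the pixel values, applying it a second time undoes the embedding distortion, which is what makes the flipping-based scheme lossless.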