f^{(4i)}\left( U_1^{(4i)}(p), \ldots, U_4^{(4i)}(p) \right) =
\begin{bmatrix}
U_1^{(4i)}(p) \\
U_2^{(4i)}(p) + U_4^{(4i)}(p)\, U_1^{(4i)}(p + q_i) + U_4^{(4i)}(p - q_i)\, U_1^{(4i)}(p - q_i)\, [1 - \delta(q_i)] \\
U_3^{(4i)}(p) + U_4^{(4i)}(p) + U_4^{(4i)}(p - q_i)\, [1 - \delta(q_i)] \\
U_4^{(4i)}(p)
\end{bmatrix}
(12)
with δ(·) the Dirac delta function. The Dirac delta function is needed here to prevent the weight w(p, p) from being counted twice. In the last pass, the intensities in the image accumulation buffer are again divided by the accumulated weights, which gives:
f^{(I)}\left( U_1^{(I)}(p), \ldots, U_4^{(I)}(p) \right) =
\left[ \frac{U_2^{(I)}(p)}{U_3^{(I)}(p)},\; 0,\; 0,\; 0 \right]^T
(13)
with I = 4(|δ| + 1)/2 + 1 = 2|δ| + 3. The output of the NLMeans algorithm is then X̂(p) = U_2^{(I)}(p) / U_3^{(I)}(p). Consequently, the complete NLMeans algorithm comprises the passes i = 1, ..., I defined by steps (10)-(13).
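To make the accumulation scheme concrete, the following NumPy sketch mimics the symmetric accumulation of (12) and the normalization of (13) on the CPU, for a single grayscale frame and purely spatial displacements. The function name, the exponential choice for the weighting kernel g and the separable box filter used for the patch distance are illustrative assumptions, not the paper's code; in the GPU algorithm each of these operations is realized as one of the four passes per displacement vector q_i.

```python
# A minimal CPU sketch (NumPy) of the symmetric accumulation in (12) and the
# normalization in (13). The function name, the exponential kernel for g and
# the box-filtered patch distance are illustrative assumptions.
import numpy as np

def nlmeans_denoise(Y, displacements, h, B):
    """Y: noisy grayscale frame; displacements: list of (dx, dy) containing
    only one of each {q, -q} pair; h: filtering parameter; B: half patch size."""
    U2 = np.zeros_like(Y, dtype=np.float64)   # accumulated weighted intensities
    U3 = np.zeros_like(Y, dtype=np.float64)   # accumulated weights
    k = np.ones(2 * B + 1) / (2 * B + 1)      # separable box filter

    for dx, dy in displacements:              # one q_i per group of four passes
        Y_fwd = np.roll(Y, shift=(-dy, -dx), axis=(0, 1))          # Y(p + q_i)
        d2 = (Y - Y_fwd) ** 2                                      # squared differences
        # box filtering gives the patch distance (np.roll wraps at the borders;
        # a real implementation would clamp or mirror instead)
        d2 = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 0, d2)
        d2 = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, d2)
        w = np.exp(-d2 / (h * h))             # U4: weight w(p, p + q_i)

        # accumulation pass (12): reuse each weight via w(p, p+q) = w(p+q, p),
        # except when q_i = 0 (the [1 - delta(q_i)] factor)
        U2 += w * Y_fwd
        U3 += w
        if (dx, dy) != (0, 0):
            w_back = np.roll(w, shift=(dy, dx), axis=(0, 1))        # w(p - q_i, p)
            U2 += w_back * np.roll(Y, shift=(dy, dx), axis=(0, 1))  # Y(p - q_i)
            U3 += w_back

    return U2 / np.maximum(U3, 1e-12)         # last pass (13): X_hat = U2 / U3
```

Calling nlmeans_denoise with one representative of each {q, -q} pair (plus q = 0) mirrors the reuse of every weight for both w(p, p + q_i) and w(p - q_i, p), which is exactly what the [1 - δ(q_i)] factor in (12) guards against double counting.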
2.4 Extension to Noise Correlated across Color Channels
In this Section, we briefly explain how our GPU-NLMeans algorithm can be
extended to deal with Gaussian noise that is correlated across color channels.
Our main goal here is to show that our video algorithm is not restricted to white
Gaussian noise. Because of space limitations, visual and quantitative results for
color images and color video will be reported in later publications. As we pointed
out in [4, p. 6], the algorithm can be extended to spatially correlated noise by
using a Mahalanobis distance based on the noise covariance matrix instead of the
Euclidean distance similarity metric. When dealing with noise which is correlated
across color channels, we need to replace (5) by:
w(p, p + q) = g\left( \sum_{(\Delta x, \Delta y) \in [-B, \ldots, B]^2} r_{p,q}^{T}(\Delta x, \Delta y)\, C^{-1}\, r_{p,q}(\Delta x, \Delta y) \right)
with C the noise covariance matrix. In practice, the matrix C can be estimated from flat regions in the video sequence, or based on an EM-algorithm as in [27]. Now, by introducing the decorrelating color transform G = C^{-1/2}, and by defining:
r_{p,q}(\Delta x, \Delta y) = G Y(p_x + q_x + \Delta x,\, p_y + q_y + \Delta y,\, p_t + q_t) - G Y(p_x + \Delta x,\, p_y + \Delta y,\, p_t),
the weighting function can again be expressed in terms of the Euclidean distance \| r_{p,q}(\Delta x, \Delta y) \|^2. Hence, removing correlated noise from video sequences solely requires a color transform G applied as pre-processing to the video sequence. Furthermore, this technique can be combined with our previous approach from [4, p. 6] in order to remove Gaussian noise which is both spatially correlated and correlated across color channels.
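As a sketch of this pre-processing step, the NumPy fragment below estimates C from a manually selected flat region, builds G = C^{-1/2} through an eigendecomposition, and applies it to every frame. All function names are illustrative assumptions; the EM-based estimation of [27] could replace estimate_noise_covariance, and the rest of the GPU pipeline stays unchanged.

```python
# A minimal sketch of the decorrelating color pre-processing, assuming C is
# estimated from a manually chosen flat region; names are illustrative only.
import numpy as np

def estimate_noise_covariance(flat_patch):
    """flat_patch: (H, W, 3) crop of a flat region of the noisy video.
    Returns the 3x3 noise covariance matrix C across the color channels."""
    samples = flat_patch.reshape(-1, 3)
    return np.cov(samples, rowvar=False)

def decorrelating_transform(C):
    """G = C^(-1/2), computed from the eigendecomposition of the symmetric C."""
    eigvals, eigvecs = np.linalg.eigh(C)
    return eigvecs @ np.diag(1.0 / np.sqrt(eigvals)) @ eigvecs.T

def preprocess_video(frames, G):
    """Apply G to every pixel of every (H, W, 3) frame so that the noise
    becomes white across color channels."""
    return [frame @ G.T for frame in frames]
```

Computing G from the eigendecomposition keeps the transform symmetric, so the whitened noise covariance G C Gᵀ equals the identity and the Euclidean patch distance of the white-noise algorithm applies directly to the transformed frames.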