proved performance we can run them in parallel. To implement and execute these algorithms,
we used a Lenovo W530 running Windows 8 Pro 64-bit, equipped with an i7 CPU at 2.6 GHz
and 8 GB of memory. The GPU is an NVIDIA Quadro K1000M with 192 CUDA cores.
Code 1
__global__ void Parallel_kernel(uchar4 *ptr, uchar4 *result,
                                const int filterWidth, const int filterHeight) {
    // N, M (the image width and height), T (truncates to [0, 255]) and
    // filterConst (the filter weights, in constant memory) are defined elsewhere.
    // map from threadIdx/blockIdx to pixel position
    int x = threadIdx.x + blockIdx.x * blockDim.x;
    int y = threadIdx.y + blockIdx.y * blockDim.y;
    int offset = x + y * blockDim.x * gridDim.x;
    if (offset >= N*M) return; // in case we launched more threads than we need
    int w = N;
    int h = M;
    float red = 0.0f, green = 0.0f, blue = 0.0f;
    // Multiply every value of the filter with the corresponding image pixel.
    // Note: filter dimensions are very small compared to the dimensions of an image.
    for (int filterY = 0; filterY < filterHeight; filterY++) {
        for (int filterX = 0; filterX < filterWidth; filterX++) {
            // Wrap coordinates at the image borders so the filter never reads out of bounds.
            int imageX = ((offset % w) - filterWidth / 2 + filterX + w) % w;
            int imageY = ((offset / w) - filterHeight / 2 + filterY + h) % h;
            red   += ptr[imageX + imageY*w].x * filterConst[filterX + filterY*filterWidth];
            green += ptr[imageX + imageY*w].y * filterConst[filterX + filterY*filterWidth];
            blue  += ptr[imageX + imageY*w].z * filterConst[filterX + filterY*filterWidth];
        }
    }
    // Truncate values smaller than zero and larger than 255, and store the result.
    result[offset].x = T(int(red));
    result[offset].y = T(int(green));
    result[offset].z = T(int(blue));
}
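A kernel of this shape is typically launched with a two-dimensional grid of blocks sized to cover the whole frame, using ceiling division so partial blocks at the image edges are still launched; the kernel's bounds check then skips the excess threads. The sketch below shows only that launch arithmetic; the 16×16 block shape and 640×480 frame size are illustrative assumptions, not values from the text, and Dim2 stands in for CUDA's dim3 so the arithmetic can run on the host.

```cpp
#include <cassert>

// Host-side stand-in for CUDA's dim3; x and y grid/block extents.
struct Dim2 { unsigned x, y; };

// Compute the grid size for a frame of frameWidth x frameHeight pixels.
// Ceiling division launches enough blocks to cover every pixel, possibly
// with a few extra threads that the kernel's bounds check returns from.
Dim2 gridFor(unsigned frameWidth, unsigned frameHeight, Dim2 block) {
    return { (frameWidth  + block.x - 1) / block.x,
             (frameHeight + block.y - 1) / block.y };
}
```

For a 640×480 frame and 16×16 blocks this yields a 40×30 grid; a 650-pixel-wide frame needs 41 blocks per row, with the rightmost column of blocks only partially used.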
For a camera, we used an Axis P13 series network camera attached to a Power over Ethernet
100 Gbit switch. We wrote software that communicates with the camera using the HTTP pro-
tocol, to instruct the camera when to begin and end the video feed as well as to change the
resolution of the feed. The camera transmits the video in Motion JPEG format. Each frame is
delimited with special tags, and the header for each image frame contains the length of the
data frame. We used the std_image ( htp:// ) package to decode each JPEG image before
passing the image frame to our processing algorithms.
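Each JPEG image in such a stream begins with the SOI marker bytes FF D8 and ends with the EOI marker FF D9. As a minimal sketch of pulling one frame out of a received byte buffer, the function below scans for those markers; the function name and buffer handling are our assumptions, and production code should prefer the length field from the part header, as described above, since marker bytes can also occur inside the compressed data.

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Naive sketch: extract the first complete JPEG frame from an MJPEG byte
// buffer by scanning for the SOI (FF D8) and EOI (FF D9) markers.
// Returns an empty vector if no complete frame is present yet.
std::vector<uint8_t> extractFrame(const std::vector<uint8_t>& buf) {
    std::size_t start = 0;
    bool haveStart = false;
    for (std::size_t i = 0; i + 1 < buf.size(); ++i) {
        if (!haveStart && buf[i] == 0xFF && buf[i + 1] == 0xD8) {
            start = i;            // start of image
            haveStart = true;
        } else if (haveStart && buf[i] == 0xFF && buf[i + 1] == 0xD9) {
            // End of image: include the EOI marker itself.
            return std::vector<uint8_t>(buf.begin() + start,
                                        buf.begin() + i + 2);
        }
    }
    return {};                    // no complete frame in the buffer yet
}
```

The extracted bytes are what would be handed to the JPEG decoder before the frame reaches the processing algorithms.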
4.1 Motion Detector
The first algorithm detects motion in the field of view of the camera. To implement this
algorithm, we need to keep track of the previous frame to see whether the current and the
previous frames differ and by how much. We first calculate the squared difference of the
previous and the current frame; this operation is done for every pixel.
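A minimal CPU sketch of that per-pixel squared difference, assuming single-channel (grayscale) frames stored as byte arrays; the function name, the grayscale representation, and the summed return value are our choices, not details given in the text:

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Sum of per-pixel squared differences between two grayscale frames.
// A larger sum means the frames differ more, i.e. more motion.
long long squaredDiff(const std::vector<uint8_t>& prev,
                      const std::vector<uint8_t>& curr) {
    long long sum = 0;
    std::size_t n = prev.size() < curr.size() ? prev.size() : curr.size();
    for (std::size_t i = 0; i < n; ++i) {
        int d = int(curr[i]) - int(prev[i]);
        sum += static_cast<long long>(d) * d;  // squared difference for this pixel
    }
    return sum;
}
```

Motion would then be reported when this score (or the count of pixels whose individual difference exceeds some threshold) rises above a chosen limit; each pixel's difference is independent, which is what makes the operation a natural fit for the GPU.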