FIGURE 6.8: Conversion of a normal loop condition to an OpenCL kernel.
6.2.2.1 CUDA
CUDA is a proprietary set of APIs and language extensions for GPU programming
that runs only on NVIDIA GPUs. CUDA can be used via the runtime API or the
lower-level driver API [82]. The runtime API provides C-like routines and
extensions for application development. The driver API provides more
flexibility, in that it offers low-level control of the hardware, but it
requires more code and greater programming effort. In both CUDA and OpenCL,
the piece of code that runs on the GPU is known as a kernel. CUDA programs can
be written in high-level programming languages such as C, C++, and Fortran.
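As an illustration (not taken from the text), the following minimal sketch shows a CUDA kernel written with the C language extensions described above, together with a runtime-API host program that launches it; the kernel name vecAdd and the problem size are hypothetical.

#include <cuda_runtime.h>

/* Kernel: each GPU thread adds one pair of elements. */
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;  /* global thread index */
    if (i < n)                                      /* guard against overrun */
        c[i] = a[i] + b[i];
}

int main(void) {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    /* Runtime-API calls: allocate memory visible to host and device. */
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(a, b, c, n);  /* launch syntax is a CUDA extension */
    cudaDeviceSynchronize();                  /* wait for the kernel to finish */

    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}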
6.2.2.2 OpenCL
Open Computing Language (OpenCL) [83] is a GPU programming model that can be
used across multiple platforms. OpenCL provides a C-like language for writing
the programs that run on compute devices. The key feature of OpenCL is
portability. Unlike a CUDA kernel, an OpenCL kernel can be compiled at run
time, which adds to an OpenCL application's running time. Because OpenCL
targets a range of GPU platforms, a kernel can be tailored to the specific
platform on which it will run. Figure 6.8 shows the conversion of a sequential
program to an OpenCL kernel.
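Since Figure 6.8 itself is not reproduced here, the following sketch illustrates the same idea under the assumption of a simple element-wise vector addition; the kernel name vec_add is a hypothetical example. The sequential loop counter is replaced by the work-item's global ID, so each work-item computes one element.

/* Sequential C loop: one iteration per element. */
for (int i = 0; i < n; ++i)
    c[i] = a[i] + b[i];

/* Equivalent OpenCL C kernel: the loop disappears; each work-item
   handles the element whose index matches its global ID. */
__kernel void vec_add(__global const float *a,
                      __global const float *b,
                      __global float *c,
                      const int n) {
    int i = get_global_id(0);   /* replaces the loop counter */
    if (i < n)                  /* guard when more work-items than elements */
        c[i] = a[i] + b[i];
}

On the host side, such kernel source is typically passed as a string to clCreateProgramWithSource and compiled at run time with clBuildProgram, which is the runtime-compilation overhead mentioned above.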
6.3 From Coding to Applications
In this chapter, we have discussed system models and programming languages
that can be used to develop Internet-scale applications. By harnessing the
tremendous potential of the distributed system architecture, which ranges from
high-speed Internet connectivity to inter-process communication in GPU
computing, we are able to develop scalable systems that can adapt