Friday, December 9, 2011


Do you have a computer with an NVIDIA graphics card? If by any chance the answer is yes, then you can probably use that machine to do some cool parallel computation tasks in areas like these:
Computational Structural Mechanics
Bio-Informatics and Life Sciences
Medical Imaging
Weather and Space
Data Mining and Analytics
Imaging and Computer Vision
Computational Finance
Computational Fluid Dynamics
Electromagnetics and Electrodynamics
Molecular Dynamics

Yeah, if you bought an NVIDIA-powered machine this year or last year, your graphics card probably supports parallel computing using CUDA.

CUDA™ is a parallel computing platform and programming model invented by NVIDIA. It enables dramatic increases in computing performance by harnessing the power of the graphics processing unit (GPU).

With millions of CUDA-enabled GPUs sold to date, software developers, scientists and researchers are finding broad-ranging uses for CUDA, including image and video processing, computational biology and chemistry, fluid dynamics simulation, CT image reconstruction, seismic analysis, ray tracing, and much more.

GPU Computing: The Revolution

It's hard to believe that twenty years ago we were stuck on machines with no GUI and no multitasking, or at least machines where multitasking was a rare thing. And about ten years ago processor clock speeds stopped climbing at around 3 GHz: push much past that and we either burn up the computer or need a tow truck to move the PC, since the conductors can't get any smaller. Developers shifted their focus to multicore machines and clusters instead.

You're faced with imperatives: Improve performance. Solve a problem more quickly. Parallel processing would be faster, but the learning curve is steep – isn't it?

Not anymore. With CUDA, you can send C, C++ and Fortran code straight to the GPU, no assembly language required.
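To give a taste of what that looks like, here is a minimal CUDA C sketch (my own illustrative example, not from NVIDIA's materials) that adds two vectors on the GPU. The kernel name `vecAdd` and the sizes are just placeholders; the point is that the kernel body is plain C, and the `<<<blocks, threads>>>` launch syntax is the only CUDA-specific extension:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each GPU thread adds one pair of elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];
}

int main() {
    const int n = 1024;
    const size_t bytes = n * sizeof(float);

    // Allocate and fill host buffers.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) {
        ha[i] = (float)i;
        hb[i] = 2.0f * i;
    }

    // Allocate device buffers and copy the inputs over.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes);
    cudaMalloc(&db, bytes);
    cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    vecAdd<<<(n + 255) / 256, 256>>>(da, db, dc, n);

    // Copy the result back and spot-check one element.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("hc[10] = %.1f\n", hc[10]);  // 10 + 20

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Notice there is no graphics API in sight: no triangles, no textures, no shader pipeline. That is exactly the shift from the old GPGPU days described below.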

GPU computing is possible because today's GPU does much more than render graphics: It sizzles with a teraflop of floating point performance and crunches application tasks designed for anything from finance to medicine.

History of GPU Computing

The first GPUs were designed as graphics accelerators, supporting only specific fixed-function pipelines. Starting in the late 1990s, the hardware became increasingly programmable, culminating in NVIDIA's first GPU in 1999. Less than a year after NVIDIA coined the term GPU, artists and game developers weren't the only ones doing ground-breaking work with the technology: Researchers were tapping its excellent floating point performance. The General Purpose GPU (GPGPU) movement had dawned.

But GPGPU was far from easy back then, even for those who knew graphics programming languages such as OpenGL. Developers had to map scientific calculations onto problems that could be represented by triangles and polygons. GPGPU was practically off-limits to those who hadn't memorized the latest graphics APIs until a group of Stanford University researchers set out to reimagine the GPU as a "streaming coprocessor."

In 2003, a team of researchers led by Ian Buck unveiled Brook, the first widely adopted programming model to extend C with data-parallel constructs. Using concepts such as streams, kernels and reduction operators, the Brook compiler and runtime system exposed the GPU as a general-purpose processor in a high-level language. Most importantly, Brook programs were not only easier to write than hand-tuned GPU code, they were seven times faster than similar existing code.

NVIDIA knew that blazingly fast hardware had to be coupled with intuitive software and hardware tools, and invited Ian Buck to join the company and start evolving a solution to seamlessly run C on the GPU. Putting the software and hardware together, NVIDIA unveiled CUDA in 2006, the world's first solution for general-computing on GPUs.