Tuesday, June 9, 2009

June 4: Journée Jeunes Chercheurs sur les Multiprocesseurs et Multicoeurs (Overview)

Developing on GPUs is a hot topic in the parallel programming world. Here I summarize the main talks on this subject presented at the Journée Jeunes Chercheurs sur les Multiprocesseurs et Multicoeurs (Young Researchers' Day on Multiprocessors and Multicores), held on June 4 in Paris.

  1. Sylvain Contassot-Vivier, "Iterative Asynchronous Algorithms on GPU Cluster"
  2. Thomas Jost, "Adaptation of Iterative Asynchronous Algorithms on GPU Cluster"
  3. Matthieu Ospici, "GPU Exploration and Sharing on Hybrid Computing Clusters"
  4. Florent Calvayrac, "Precision and Performance Comparison on GPU Clusters for Different Algorithms in Physical-Chemical Numerical Computation"

Iterative Asynchronous Algorithms on GPU Cluster
Mr. Contassot-Vivier spoke about GPU clusters and asynchronous algorithms. The GPELEC cluster is a 16-node GPU cluster designed for computer science experiments; it was funded and purchased by SUPÉLEC. Each node is a PC hosting a dual-core CPU and a GPU card: an NVIDIA GeForce 8800 GT with 512 MiB of RAM on the card. The 16 nodes are interconnected through a dedicated Gigabit Ethernet switch, and an InfiniBand network is also available on half of the cluster (8 nodes). Wattmeters have been installed on the nodes and switches of GPELEC in order to measure and analyse energy consumption as a function of the computations being run. The development environment available on GPELEC consists mainly of the gcc suite with its OpenMP support, OpenMPI, and NVIDIA's CUDA environment (the nvcc compiler).

The objective of the GPELEC platform was to quickly provide an experimental GPU cluster to researchers at SUPÉLEC and AlGorille, so that they could experiment with scientific programming on GPUs ("GPGPU") and track both computing and energy performance. In 2008, GPELEC made it possible to test the compatibility of the MPI and CUDA frameworks and to develop fast Monte Carlo simulations for an option-pricing problem. Further developments and experiments are planned for 2009 in collaboration with EDF researchers and with colleagues from CERMICS and the MathFi INRIA team.
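
To give an idea of what "compatibility of MPI and CUDA" looks like in practice, here is a minimal sketch (my own illustration, not the GPELEC code) of the usual pattern on such a cluster: one MPI process per node, each driving its local GPU, with MPI used to exchange results between nodes.

    // Hypothetical sketch: one MPI process per node, each driving its local GPU.
    // This is my own illustration of combining MPI and CUDA, not the GPELEC code.
    #include <mpi.h>
    #include <cuda_runtime.h>
    #include <stdio.h>
    #include <stdlib.h>

    __global__ void scale(float *v, float a, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) v[i] *= a;                     // simple data-parallel update
    }

    int main(int argc, char **argv) {
        MPI_Init(&argc, &argv);
        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        cudaSetDevice(0);                         // one GPU per node on this cluster

        const int n = 1 << 20;
        float *h = (float *)malloc(n * sizeof(float));
        for (int i = 0; i < n; ++i) h[i] = (float)rank;

        float *d;
        cudaMalloc((void **)&d, n * sizeof(float));
        cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
        scale<<<(n + 255) / 256, 256>>>(d, 2.0f, n);
        cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);

        // Exchange results between nodes with MPI (here: a global sum).
        float local = h[0], global = 0.0f;
        MPI_Allreduce(&local, &global, 1, MPI_FLOAT, MPI_SUM, MPI_COMM_WORLD);
        if (rank == 0) printf("sum over all nodes = %f\n", global);

        cudaFree(d);
        free(h);
        MPI_Finalize();
        return 0;
    }

Building such a file usually means compiling with nvcc while pointing it at the MPI headers and libraries (for instance by using mpicc as the host compiler); the details depend on the cluster setup.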

Adaptation of Iterative Asynchronous Algorithms on GPU Cluster
Their experiments focus on increasing performance through better memory access patterns. They obtained high performance by exploiting the cache and aligning memory accesses, and they reported a gain compared to CNC.
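
For context, here is a hypothetical illustration (not the speakers' code) of the kind of memory-access optimization involved: on a GPU, consecutive threads should read consecutive, properly aligned addresses so that the hardware can coalesce the loads into a few memory transactions.

    // Hypothetical illustration (not the speakers' code) of coalesced versus
    // scattered global-memory access on a GPU.
    __global__ void copy_coalesced(const float *in, float *out, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[i];                        // consecutive threads read consecutive words
    }

    __global__ void copy_strided(const float *in, float *out, int n, int stride) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n)
            out[i] = in[((size_t)i * stride) % n]; // scattered reads, poorly coalesced
    }

In the first kernel, the threads of a warp touch one contiguous, aligned segment of memory, so their loads are served by a few wide transactions; in the second, the reads are spread out and each may cost a separate transaction, which is exactly the kind of difference such optimizations exploit.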

Precision and Performance Comparison on GPU Clusters for Different Algorithms in Physical-Chemical Numerical Computation
Most of the methods used to compute on GPUs in this field are based on direct and iterative methods for solving systems of linear equations. This talk presented benchmarks of already well-known methods.
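
As a concrete (and purely illustrative, not taken from the talk) example of such an iterative method, one Jacobi sweep for a dense system Ax = b can be written as a data-parallel kernel:

    // Hypothetical sketch of one Jacobi sweep for a dense system Ax = b
    // (not from the talk): x_new[i] = (b[i] - sum_{j != i} A[i][j] * x[j]) / A[i][i].
    __global__ void jacobi_sweep(const float *A, const float *b,
                                 const float *x, float *x_new, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;
        float sigma = 0.0f;
        for (int j = 0; j < n; ++j)
            if (j != i) sigma += A[i * n + j] * x[j];
        x_new[i] = (b[i] - sigma) / A[i * n + i];
    }
    // The host launches this kernel repeatedly, swapping x and x_new,
    // until the residual of Ax - b drops below a chosen tolerance.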

If you want more information, contact me or the authors directly.

Monday, June 1, 2009

Overview of OpenCL and Code Generation

My research project involves code generation for heterogeneous parallel platforms, more precisely GPU architectures. Below is a short presentation covering OpenCL, its main aspects, and how to model data and task parallelism in order to generate optimized code.

OpenCL.pdf
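
As a rough illustration of the two forms of parallelism the slides discuss, here is a small sketch of my own (written in CUDA, whose execution model is close to OpenCL's): data parallelism inside a kernel, where each thread handles one element, and task parallelism between independent kernels launched on separate streams, the analogue of independent OpenCL command queues.

    // My own rough sketch, not part of the slides: data parallelism inside a
    // kernel, and task parallelism between kernels launched on separate streams
    // (the CUDA analogue of independent OpenCL command queues).
    #include <cuda_runtime.h>

    __global__ void vec_add(const float *a, const float *b, float *c, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;  // one thread per element
        if (i < n) c[i] = a[i] + b[i];
    }

    __global__ void vec_scale(float *a, float s, int n) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) a[i] *= s;
    }

    void run_tasks(float *d_a, float *d_b, float *d_c, float *d_x, int n) {
        cudaStream_t s1, s2;
        cudaStreamCreate(&s1);
        cudaStreamCreate(&s2);

        int threads = 256, blocks = (n + threads - 1) / threads;
        // Two independent tasks, each internally data-parallel, may overlap.
        vec_add<<<blocks, threads, 0, s1>>>(d_a, d_b, d_c, n);
        vec_scale<<<blocks, threads, 0, s2>>>(d_x, 2.0f, n);

        cudaStreamSynchronize(s1);
        cudaStreamSynchronize(s2);
        cudaStreamDestroy(s1);
        cudaStreamDestroy(s2);
    }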