by Alessandro Curioni and Ray Walshe

Recently, when speaking at Teratec, the conference on High Performance Computing (HPC) in Paris, the European Commissioner for the Information Society, Viviane Reding, said: "Supercomputers are the 'cathedrals' of modern science, essential tools to push forward the frontiers of research at the service of Europe's prosperity and growth." This statement underlines the strategic emphasis being placed on High Performance Computing across the European Union, where investment in selected HPC centres aims to develop many key areas of research and industry.

Massively parallel simulations of aircraft wake instabilities
An aircraft wake consists of powerful, long-lasting trailing vortices. This potential hazard imposes safety distances, and thus limits airport traffic. High-resolution simulations on thousands of processors can accurately capture the fast-growing instabilities that perturb the vortices and can therefore accelerate their decay. The figure (also on the cover) shows the volume rendering of vorticity in the case of a fast-growing instability. Secondary vortices generated by the stabilizer reconnect with those of the wing, resulting in a disturbance that propagates along and inside the vortex cores. Credits: Philippe Chatelain, Michael Bergdorf, Diego Rossinelli, Petros Koumoutsakos, ETH Zurich, Switzerland; Alessandro Curioni, Wanda Andreoni, IBM Zurich Research Laboratory, Switzerland. Acknowledgments: IBM T.J. Watson Research Center, Swiss Supercomputing Center.

by Klaus Johannsen, Andreas Kopp, Olli Tourunen and Josva Kleist

Computational problems related to the safe storage of CO2 have become a major focus in environmental science. The computational power required to solve these problems is in principle available at various computing centres. The CO2 Community Grid project now enables scientists to start large-scale computations with literally eighteen keystrokes. In this way, it brings the power of computing centres to the desks of researchers.

by Maciej Szpindler and Maciej Cytowski

Numerical weather forecasting represents one of the great mathematical modelling challenges with direct application to real-life processes. The methods used to solve the corresponding mathematical systems are usually well suited to parallel vector computers. The Interdisciplinary Centre for Mathematical and Computational Modelling (ICM) provides a set of freely accessible weather services dedicated to the Polish and wider European communities. Recently ICM opened a new weather service based on the British Unified Model, implemented on an extremely high-resolution grid. In order to achieve a fully operational service we had to optimize the application to fit a specific supercomputer architecture. The result was a success, with an overall speedup of almost three achieved over the initial implementation.
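The abstract does not detail which optimizations were applied; purely as an illustration of the kind of restructuring that architecture-specific tuning involves, the sketch below contrasts a pointwise stencil loop with an equivalent whole-array formulation that maps naturally onto vector and SIMD hardware (the five-point smoother and function names are our own, not taken from the Unified Model):

```python
import numpy as np

def smooth_naive(field):
    """Pointwise 5-point smoothing written as explicit Python loops."""
    ny, nx = field.shape
    out = field.copy()
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            out[j, i] = 0.25 * (field[j - 1, i] + field[j + 1, i]
                                + field[j, i - 1] + field[j, i + 1])
    return out

def smooth_vectorized(field):
    """The same stencil expressed as whole-array operations, the form
    that maps well onto vector units and SIMD hardware."""
    out = field.copy()
    out[1:-1, 1:-1] = 0.25 * (field[:-2, 1:-1] + field[2:, 1:-1]
                              + field[1:-1, :-2] + field[1:-1, 2:])
    return out
```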

by Domenico Laforenza, Franco Maria Nardini, Fabrizio Silvestri and Gabriele Tolomei

The HPC-Lab at ISTI-CNR is investigating the topic of service discovery with the aim of supporting service-oriented architectures (SOAs) on Grid computing infrastructures. We present a brief overview of SPRanker (Service Provider Ranker), a discovery tool that, unlike the usual service discovery components, retrieves provider information rather than service descriptions. At its core, SPRanker exploits a scoring formula, based on information-retrieval techniques, that takes into account judgments expressed collaboratively by past users of each service.
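SPRanker's actual formula is not reproduced here; the sketch below merely illustrates the general shape of such a score, linearly combining a hypothetical text-relevance value with averaged user judgments (the function, the weight alpha and the sample data are illustrative assumptions, not SPRanker's API):

```python
def provider_score(text_relevance, ratings, alpha=0.7):
    """Blend an IR-style relevance score for a provider's service
    descriptions with the average of past users' judgments;
    alpha weights textual relevance against collaborative feedback."""
    feedback = sum(ratings) / len(ratings) if ratings else 0.0
    return alpha * text_relevance + (1.0 - alpha) * feedback

# Rank providers for a query: higher combined score comes first.
providers = {
    "provider-A": provider_score(0.82, [4/5, 5/5, 3/5]),
    "provider-B": provider_score(0.91, [2/5, 1/5]),
}
ranking = sorted(providers, key=providers.get, reverse=True)
```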

by Juan Antonio Ortega, Jorge Cantón, Ana Silva, David Bosque and Francisco Velasco

In 2006 the Junta de Andalucía created the Andalusian Supercomputing Network (RASCI). RASCI consists of supercomputing nodes distributed geographically throughout Andalusia, providing the region with a large pool of computing resources. Increased network bandwidth, more powerful computers and the widespread acceptance of the Internet have driven growing demand for new and better ways to utilize high-performance technical computing (HPTC) resources.

by Zoltán Nagy, László Kék, Zoltán Kincses, András Kiss and Péter Szolgay

Array computers can be useful in the numerical solution of spatio-temporal problems. IBM has recently introduced a topographic array processor called the Cell Broadband Engine (Cell BE). Researchers at the Cellular Sensory Wave Computers Laboratory of SZTAKI in collaboration with the Department of Information Technology, Pázmány Péter Catholic University, Budapest and the Department of Image Processing and Neurocomputing, Pannon University, Veszprém, have implemented a Cellular Neural Network (CNN) simulation kernel on the Cell BE. The CNN simulation is optimized according to the special requirements of the Cell BE and implements both linear and nonlinear (piecewise linear) templates. We have used the CNN simulation kernel to solve Navier-Stokes partial differential equations (PDEs) on the Cell architecture.
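The production kernel is Cell-specific SIMD code, but the underlying dynamics are the standard CNN state equation x' = -x + A*y + B*u + z with a piecewise-linear output. A minimal reference version in Python, with forward-Euler time stepping and illustrative parameter values of our own choosing, might look as follows:

```python
import numpy as np

def output(x):
    """Standard piecewise-linear CNN output function."""
    return 0.5 * (np.abs(x + 1.0) - np.abs(x - 1.0))

def correlate3x3(f, t):
    """Apply a 3x3 template t to field f with zero boundaries."""
    p = np.pad(f, 1)
    out = np.zeros_like(f)
    for dj in range(3):
        for di in range(3):
            out += t[dj, di] * p[dj:dj + f.shape[0], di:di + f.shape[1]]
    return out

def cnn_step(x, u, A, B, z, dt=0.1):
    """One forward-Euler step of the CNN state equation
    x' = -x + A*y + B*u + z."""
    y = output(x)
    return x + dt * (-x + correlate3x3(y, A) + correlate3x3(u, B) + z)

# Illustrative run with arbitrary template values (not a named template).
A = np.array([[0.0, 0.1, 0.0], [0.1, 1.0, 0.1], [0.0, 0.1, 0.0]])
B = np.zeros((3, 3))
x = u = np.random.rand(32, 32) * 2 - 1
for _ in range(100):
    x = cnn_step(x, u, A, B, z=0.0)
```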

by Philipp Wieder, Wolfgang Ziegler and Vincent Keller

IANOS, the Intelligent ApplicatioN-Oriented Scheduling framework, is a Grid scheduling system that provides generic job submission with automatic, optimal placement and scheduling of high-performance computing applications.

by Olaf Schenk, Helmar Burkhart and Hema Reddy

Personalized medicine based on high-performance computing (HPC) has the potential to transform health care and to dramatically improve clinical outcomes. This new approach to medicine provides clinical researchers with fast access to information about individual patients. A promising exploitation of the Cell architecture for personalized medicine in biomedical life science applications is the subject of a joint HPC research project between the University of Basel and IBM.

by Peter Arbenz and Ralph Müller

According to the World Health Organization (WHO), the lifetime risk of an osteoporotic fracture is close to 40% for women and about 13% for men. Osteoporosis is second only to cardiovascular disease as a leading health-care problem. It is therefore of paramount importance to predict osteoporosis as early as possible, so that effective measures can be taken to prevent its progression. This project aims to locate positions in the bone that are subject to excessive stress and are thus prone to failure. Supercomputing offers novel approaches to simulating and visualizing larger pieces of bone at unprecedented resolution. Simulations of bones that have been strengthened by implants have also been successfully conducted.
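The abstract does not specify the failure criterion used in the project; one common way to screen a computed stress field for failure-prone sites is a von Mises threshold, sketched below (the function names and tensor layout are our own, not the authors'):

```python
import numpy as np

def von_mises(s):
    """Von Mises equivalent stress from a symmetric stress tensor
    given as the components (sxx, syy, szz, sxy, syz, szx)."""
    sxx, syy, szz, sxy, syz, szx = s
    return np.sqrt(0.5 * ((sxx - syy) ** 2 + (syy - szz) ** 2
                          + (szz - sxx) ** 2)
                   + 3.0 * (sxy ** 2 + syz ** 2 + szx ** 2))

def flag_failure_prone(element_stresses, yield_stress):
    """Return indices of elements whose equivalent stress exceeds
    a threshold; these are candidate failure sites."""
    return [i for i, s in enumerate(element_stresses)
            if von_mises(s) > yield_stress]
```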

by Jesus Luna, Manolis Marazakis and Marios D. Dikaiakos

A collaboration between CoreGRID partners FORTH (Foundation for Research and Technology - Hellas) and UCY (University of Cyprus) is investigating the design of Secure Desktop Grids that are compliant with the strict EU data protection legislation set for e-Health applications. The objective of this effort is to explore the use of a desktop Grid to store the data generated by the Intensive Care Grid System (ICGrid), developed by UCY and the Nicosia General Hospital (Intensive Care Unit).

by Stefan Zasada, C.V. Gale, Steven Manos and Peter Coveney

We introduce the GENIUS project, part of the revolution in on-demand access to large-scale computing resources for cerebrovascular modelling, surgical planning and intervention.

by James T. Murphy, Ray Walshe and Marc Devocelle

Micro-Gen is a tool for modelling the population dynamics of bacterial colonies by taking into account the unique characteristics of each individual bacterial cell and its local environmental conditions. This level of granularity requires significant computing resources when modelling large numbers of agents. However, by implementing an efficient parallel algorithm, Micro-Gen can take advantage of high-performance computing resources to scale up to biologically realistic numbers of bacterial agents while maintaining information about each individual cell.
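Micro-Gen's actual parallel algorithm is not described here; the toy sketch below only illustrates the general pattern, partitioning the colony spatially so that each partition's agents can be updated independently on its own processor (the growth rules and names are invented for illustration; in an MPI setting each partition would map onto one rank):

```python
from concurrent.futures import ProcessPoolExecutor

def update_agent(agent):
    """Grow an agent against its local nutrient level and divide
    when a size threshold is reached (toy rules, not Micro-Gen's)."""
    size, nutrient = agent
    size += 0.1 * nutrient
    if size >= 2.0:
        return [(size / 2, nutrient), (size / 2, nutrient)]
    return [(size, nutrient)]

def update_partition(agents):
    """Update every agent in one spatial partition independently."""
    return [child for a in agents for child in update_agent(a)]

if __name__ == "__main__":
    # Split the colony into spatial partitions, one per worker.
    partitions = [[(1.0, 0.5)] * 1000 for _ in range(4)]
    with ProcessPoolExecutor(max_workers=4) as pool:
        partitions = list(pool.map(update_partition, partitions))
```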

by Richard Blake

The explosive growth in computing power has brought computers into the heart of most scientific disciplines. While servers and workstations serve this mainstream, it is parallel supercomputers capable of harnessing thousands of processors for a single calculation that are redefining the boundaries of computational science. The Science and Technology Facilities Council's (STFC) Computational Science and Engineering Department (CSED) is at the forefront of efforts to use supercomputing to tackle problems across a range of scientific disciplines.

by Petros Koumoutsakos

A century of advances in numerical methods, integrated with advances in software and hardware, provides us today with unprecedented tools to study and control flow as it pertains to key problems facing our society. These include energy (wind turbines, aircraft wakes), health (microfluidics, hematodynamics) and nanotechnology (nanofluidics, nanomedicine).

by Sándor Kocsárdi, Zoltán Nagy, Árpád Csík and Péter Szolgay

In the areas of mechanical, aerospace, chemical and civil engineering, the solution of partial differential equations (PDEs) has long been a primary mathematical problem. One of the most exciting areas of development in this field is the simulation of fluid flow, which involves, for example, problems of air, sea and land vehicle motion. The governing equations are derived from the Navier-Stokes equations and solved using first- and second-order numerical methods. In collaboration with the Department of Mathematics and Computational Sciences, Széchenyi István University, and the Department of Image Processing and Neurocomputing at the University of Pannonia, researchers at the Cellular Sensory and Wave Computing Laboratory of SZTAKI are working to find an optimal computational architecture that satisfies the functional requirements with the minimal required precision, while driving computing power toward its maximum level.
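As a flavour of the first-order schemes mentioned, and without claiming to reproduce the authors' solver, the sketch below applies a first-order upwind step to the one-dimensional linear advection equation, a standard model problem for this class of methods:

```python
import numpy as np

def upwind_step(u, c, dx, dt):
    """One first-order upwind step for the linear advection equation
    u_t + c u_x = 0 (c > 0), with periodic boundaries. Stable under
    the CFL condition c*dt/dx <= 1."""
    return u - (c * dt / dx) * (u - np.roll(u, 1))

# Advect a smooth bump across a periodic domain (CFL number 0.8).
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.exp(-200.0 * (x - 0.3) ** 2)
for _ in range(100):
    u = upwind_step(u, c=1.0, dx=x[1] - x[0], dt=0.004)
```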

by Bipin Kumar, Yan Delaure and Martin Crane

A proper understanding of the behaviour of multiphase environmental flows requires CPU-intensive computational modelling. Large systems of algebraic equations must be solved iteratively, and this has motivated the development and implementation of parallel iterative algorithms.
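The abstract does not name the algorithms used; as a minimal illustration of why iterative methods distribute well, consider Jacobi iteration, in which every component update within a sweep is independent of the others (the code below is a generic sketch, not the authors' solver):

```python
import numpy as np

def jacobi(A, b, iters=100):
    """Jacobi iteration for Ax = b. Every component update uses only
    the previous iterate, so all updates within a sweep can proceed
    in parallel, which makes the method attractive for distribution."""
    D = np.diag(A)
    R = A - np.diagflat(D)
    x = np.zeros_like(b)
    for _ in range(iters):
        x = (b - R @ x) / D
    return x

# Diagonally dominant test system (Jacobi converges for such matrices).
A = np.array([[4.0, -1.0, 0.0],
              [-1.0, 4.0, -1.0],
              [0.0, -1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(jacobi(A, b))
```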

by Dimitrios S. Nikolopoulos

Exponential improvements in the performance of computer systems have brought supercomputing to the desks of users. A modern processor has computing capacity equivalent to that of a massively parallel computer system of a decade ago. The history of the Top500, the list of the 500 most powerful computing systems on the planet, indicates that our 2008 laptops would have ranked among those systems ten to fifteen years ago.
