by Frédéric Desprez, Ottmar Krämer-Fuhrmann and Ramin Yahyapour

The fast evolution of hardware capabilities, in conjunction with fast wide-area communication and the availability of virtualization solutions, is enabling new operational models for information technology. The advent of Cloud computing has resulted in access to a wide range of services. Infrastructure-as-a-Service (IaaS) provides access to large-scale resources such as computational power or storage. These large-scale platforms, based on huge datacentres, are available to industrial users as well as to scientific communities. IaaS allows the use of a large number of bare machines on which any software stack can be installed. The Platform-as-a-Service (PaaS) model provides the programmer with sets of software elements that can be combined in a scalable way to build large-scale applications. Finally, Software-as-a-Service (SaaS) simplifies access to large applications in a remote and seamless way.

by Ignacio M. Llorente and Rubén S. Montero (invited article)

OpenNebula is the result of many years of research and development in efficient and scalable management of virtual machines on large-scale distributed infrastructures. Its innovative features have been developed to address the requirements of business use cases from leading companies in the context of flagship European projects in cloud computing. OpenNebula is being used as an open platform for innovation in several international projects to research the challenges that arise in cloud management, and also as a production-ready tool in both academia and industry to manage clouds.

by Wolfgang Theilmann and Ramin Yahyapour (invited article)

IT-supported service provisioning has become of major relevance in all industries and domains. The research project SLA@SOI provides a major milestone for the further evolution towards a service-oriented economy, where IT-based services can be flexibly traded as economic goods, ie, under well-defined and dependable conditions and with clearly associated costs. Eventually, this will allow for dynamic value networks that can be flexibly instantiated, thus driving innovation and competitiveness. SLA@SOI created a holistic view for the management of service level agreements (SLAs) and provides an SLA management framework that can be easily integrated into a service-oriented infrastructure.

by Daniel Field (invited article)

One of FP6’s largest projects recently came to a successful conclusion. Over the last four years the On-Demand IT services sector has transformed beyond recognition, both in commercial and research spheres. Here’s how BEinGRID’s legacy lives on in today’s cloud environment.

by Christine Morin, Yvon Jégou and Guillaume Pierre (invited article)

XtreemOS is an open-source distributed operating system for large-scale dynamic Grids. It has been developed in the framework of the XtreemOS European project funded by the European Commission under FP6.

by Attila Marosi, Miklós Kozlovszky and Péter Kacsuk

Following on from previous successful grid-related work, the Laboratory of Parallel and Distributed Systems (LPDS) of SZTAKI is now focusing on Grid-Cloud interoperability.

by Andy Edmonds, Thijs Metsch, Alexander Papaspyrou and Alexis Richardson

The Open Cloud Computing Interface (OCCI) comprises a set of open community-led specifications delivered through the Open Grid Forum, which define how infrastructure service providers can deliver their compute, data, and network resource offerings through a standardized interface. OCCI has a set of implementations that act as its proving-ground. It builds upon the fundamentals of the World Wide Web by endorsing the proven REST (Representational State Transfer) approach for interaction and delivers an extensible model for interacting with “as-a-Service” services.
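To give a flavour of OCCI's RESTful text rendering, the sketch below builds the HTTP headers a client might send to create a compute resource. The category scheme follows the OCCI infrastructure specification; the attribute values and the helper function itself are illustrative assumptions, not part of any particular implementation.

```python
def render_create_compute(cores, memory_gb):
    """Build illustrative HTTP headers for an OCCI 'create compute' request.

    The Category header identifies the resource kind; X-OCCI-Attribute
    carries the desired resource attributes as key=value pairs.
    """
    return {
        "Category": ('compute; '
                     'scheme="http://schemas.ogf.org/occi/infrastructure#"; '
                     'class="kind"'),
        "X-OCCI-Attribute": (f"occi.compute.cores={cores}, "
                             f"occi.compute.memory={memory_gb}"),
    }

headers = render_create_compute(cores=2, memory_gb=4.0)
```

A client would POST these headers to a provider's compute collection; the extensible model means providers can add their own categories (mixins) without breaking the core rendering.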

by Frédéric Desprez, Luis Rodero-Merino, Eddy Caron and Adrian Muresan

The Distributed Interactive Engineering Toolkit, or DIET, project started with the goal of implementing distributed scheduling strategies on compute Grids. More recently, the Cloud phenomenon has gained momentum with its on-demand resource provisioning and pay-as-you-go billing approach. This led to a natural step forward in the evolution of DIET: the inclusion of Cloud platforms in resource provisioning. DIET will be used to test resource provisioning heuristics and to port new applications that mix Grids and Clouds.

by Orlando Cassano and Stéphane Mouton

Utility grids are generating increasingly huge amounts of metering information. Grid operators face rising costs and technical hurdles to aggregate and process data. Can Cloud Computing tools, developed notably by Web companies to deal with large data sets, also be used for power grid management?

by Máté J. Csorba and Poul E. Heegaard

Large-scale computing platforms will soon become a pervasive technology available to companies of all sizes. They will serve thousands, or even millions of users through the Internet. However, existing technologies are based on a hierarchically managed approach that does not possess the required scaling properties. Moreover, existing systems are not equipped to handle the dynamism caused by severe failures or load surges. We conjecture that using self-organizing techniques for system (re)configuration can improve both the scalability properties of such systems as well as their ability to tolerate variations in load and increased failure rates. Specifically, we focus on the deployment of virtual machine images onto physical machines that reside in different parts of the network. Our objective is to construct balanced and dependable deployment configurations that are resilient and support elasticity. To accomplish this, a method based on a variant of Ant Colony Optimization is used to find efficient deployment mappings for a large number of replicated virtual machine images that are deployed concurrently. The method is completely decentralized; ants communicate indirectly through pheromone tables located in the nodes.

by Eduard Ayguadé and Jordi Torres

For a more sustainable Cloud Computing scenario the paradigm must shift from “time to solution” to “kWh to the solution”. This requires a holistic approach to the cloud computing stack in which each level cooperates with the other levels through a vertical dialogue.
Due to the escalating price of power, energy-related costs have become a major economic factor for Cloud Computing infrastructures. Our research community is therefore being challenged to rethink resource management strategies, adding energy efficiency to a list of critical operating parameters that already includes service performance and reliability.
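The shift from "time to solution" to "kWh to the solution" can be made concrete with a toy comparison (all numbers hypothetical): a slower configuration can still win once energy, rather than runtime, is the metric being minimized.

```python
def kwh_to_solution(power_watts, runtime_hours):
    """Energy consumed to complete a job, in kilowatt-hours."""
    return power_watts * runtime_hours / 1000.0

# Hypothetical configurations: one fast and power-hungry, one slow and frugal.
fast = kwh_to_solution(power_watts=400, runtime_hours=2.0)    # 0.8 kWh
frugal = kwh_to_solution(power_watts=120, runtime_hours=5.0)  # 0.6 kWh
```

Under a pure time-to-solution policy the scheduler always picks the first configuration; under a kWh-to-solution policy it picks the second whenever the deadline allows, which is exactly the kind of trade-off the vertical dialogue between stack levels must negotiate.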

by András Micsik, Jorge Ejarque and Rosa M. Badia

Delivering a good quality of service is crucial for service providers in cloud computing. Planning the schedule of resource allocations and adapting the schedule to unforeseen events are the primary means of achieving this goal. Within the EU-funded IST project BREIN (Business objective driven reliable and intelligent Grids for real business), semantic and agent technologies have been applied to implement a platform with scheduling, monitoring and adaptation to ensure the agreed quality of service during service provision. The Department of Distributed Systems at SZTAKI and the Barcelona Supercomputing Centre have developed the novel semantic techniques applied in the platform, namely the prediction of quality of service based on historical data and the allocation of licenses.

by Leonardo Candela, Donatella Castelli and Pasquale Pagano

In recent years, scientists have been rethinking research workflows in favour of innovative paradigms to support multidisciplinary, computationally-heavy and data-intensive collaborative activities. In this context, e-Infrastructures can play a crucial role in supporting not only data capture and curation but also data analysis and visualization. Their implementation demands seamless and on-demand access to computational, content, and application services such as those typified by the Grid and Cloud Computing paradigms. gCube is a software framework designed to build e-Infrastructures supporting Virtual Research Environments, ie, on-demand research environments conceived to realise the new science paradigms.

by Matthias Meier, Joachim Seidelmann and István Mezgár

The objective of the ManuCloud project is the development of a service-oriented IT environment as a basis for the next level of manufacturing networks by enabling production-related inter-enterprise integration down to shop floor level. Industrial relevance is guaranteed by involving industrial partners from the photovoltaic, organic lighting and automotive supply industries.

by Syed Naqvi and Philippe Massonet

The RESERVOIR project is developing breakthrough system and service technologies that will provide Infrastructure as a Service (IaaS) using Cloud computing. The project is taking virtualization forward to the next level in order to allow efficient migration of resources across geographies and administrative domains, maximizing resource exploitation and minimizing utilization costs.

by Jérôme Gallard and Adrien Lèbre

Virtualization technologies have been a key element in the adoption of Infrastructure-as-a-Service (IaaS) cloud computing platforms, as they radically changed the way in which distributed architectures are exploited. However, a closer look suggests that the management of virtual and physical resources remains relatively static.

by Denis Caromel, Cédric Dalmasso, Christian Delbe, Fabrice Fontenoy and Oleg Smirnov

The ProActive Parallel Suite features Infrastructure as a Service (IaaS) capabilities together with an innovative parallel programming model and a distributed workflow environment. It involves the OASIS team from INRIA Sophia Antipolis, which initiated the development in early 2000, and ActiveEon, an INRIA spin-off created in 2007; together they co-develop ProActive and provide users with professional services.

by Vincent C. Emeakaroha, Michael Maurer, Ivona Brandic and Schahram Dustdar

The DSG Group at Vienna University of Technology is investigating self-governing Cloud Computing infrastructures necessary for the attainment of established Service Level Agreements (SLAs). Timely prevention of SLA violations requires advanced resource monitoring and knowledge management. In particular, we develop novel techniques for mapping low-level resource metrics to high-level SLAs, monitoring resources at execution time, and applying Case Based Reasoning for the prevention of SLA violations before they occur while reducing energy consumption, ie, increasing energy efficiency.
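The idea of mapping low-level metrics to high-level SLA parameters, combined with early warning before a violation occurs, can be sketched as follows. The mapping rule, the SLA threshold and the "threat" margin are all hypothetical values chosen for illustration, not the DSG Group's actual technique.

```python
def availability(uptime_s, downtime_s):
    """Map raw low-level up/downtime counters to the SLA-level
    'availability' parameter, expressed as a percentage."""
    return uptime_s / (uptime_s + downtime_s) * 100.0

def check(value, sla_threshold=99.0, threat_margin=0.5):
    """Classify the current SLA parameter value.

    'violation' means the SLA is already breached; 'threat' is an early
    warning raised while there is still time to react (eg by
    reallocating resources); 'ok' needs no action.
    """
    if value < sla_threshold:
        return "violation"
    if value < sla_threshold + threat_margin:
        return "threat"
    return "ok"

# 600 s of downtime out of 100,000 s observed -> 99.4% availability,
# which is above the SLA threshold but inside the threat margin.
status = check(availability(uptime_s=99_400, downtime_s=600))
```

Raising the "threat" state before the threshold is actually crossed is what makes timely prevention possible; a knowledge-management component can then decide on a counter-measure instead of merely logging the breach.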

by Pierre Riteau, Maurício Tsugawa, Andréa Matsunaga, José Fortes and Kate Keahey

How can researchers study large-scale cloud platforms and develop innovative software that takes advantage of these infrastructures? Using two experimental testbeds, FutureGrid in the United States and Grid’5000 in France, we study Sky Computing, or the federation of multiple clouds.

by Claudio Cacciari, Daniel Mallmann, Csilla Zsigri, Francesco D’Andria, Björn Hagemeier, Angela Rumpl, Wolfgang Ziegler and Josep Martrat

One of the major obstacles to using commercial applications in Distributed Computing Infrastructures like Grids or Clouds is the current technology that relies on controlling the use of these applications with software licenses. “Software licensing practices are limiting the acceleration of grid adoption” was one of the findings of a survey by the 451 Group in 2005. Recently, the 451 Group published a similar report on obstacles to the broad adoption of Cloud Computing, and again licensing practices were listed among the top five obstacles. elasticLM overcomes the limitations of existing licensing technologies, allowing license-protected applications to run seamlessly in computing environments ranging from local infrastructures to external Grids and Clouds.

by Fabrizio Marozzo, Domenico Talia and Paolo Trunfio

MapReduce is a parallel programming model for large-scale data processing that is widely used in Cloud computing environments. Current MapReduce implementations are based on master-slave architectures that do not cope well with dynamic Cloud infrastructures, in which nodes join and leave the network at high rates. We have designed a MapReduce architecture that uses a peer-to-peer approach to manage node churn and failures in a decentralized way, so as to provide a more reliable MapReduce middleware that can be effectively exploited in dynamic Cloud infrastructures.
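For readers unfamiliar with the programming model itself, the classic word-count example below sketches the map and reduce phases in a single process. It illustrates only the MapReduce abstraction; the peer-to-peer layer that handles node churn and master failures in the architecture described above is not shown.

```python
from collections import defaultdict

def map_fn(document):
    """Map phase: emit a (word, 1) pair for every word in a document."""
    for word in document.split():
        yield word, 1

def reduce_fn(word, counts):
    """Reduce phase: sum all partial counts for one word."""
    return word, sum(counts)

def mapreduce(documents):
    """Run map over all inputs, group intermediate pairs by key,
    then reduce each group."""
    groups = defaultdict(list)
    for doc in documents:
        for key, value in map_fn(doc):
            groups[key].append(value)
    return dict(reduce_fn(k, v) for k, v in groups.items())

result = mapreduce(["to be or not to be", "be"])
```

In a master-slave implementation the grouping step is coordinated by a single master, which is exactly the single point of failure a decentralized peer-to-peer design seeks to remove.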

by Rainer Schmidt and Matthias Rella

Researchers at the Austrian Institute of Technology (AIT) are exploring ways to utilize cloud technology for the processing of large media archives. The work is motivated by a strong demand for scalable methods that support the processing of media content such as that found in the archives of broadcasting or memory institutions.

by Radu Prodan, Vlad Nae and Thomas Fahringer

Computational Grids and Clouds remain highly specialized technologies that are only used by scientists and large commercial organizations. To bridge this gap, the University of Innsbruck is conducting basic research that is unusual compared with previous academic research projects in that it addresses a new class of application that appeals to the public for leisure reasons: Massively Multiplayer Online Games (MMOGs). Online games have the potential to raise strong interest, providing societal benefits through increased technological awareness and engagement. By standardizing on a Cloud-based platform and removing the need for large investments in hosting facilities, this research may remove the technical barrier and the costs of hosting MMOGs, and thus significantly increase the number of players while maintaining the high responsiveness of action games.

by Nikos Karacapilidis, Stefan Rüping and Isabel Drost

Collaboration and decision making settings are often associated with huge, ever-increasing amounts of multiple types of data, obtained from diverse sources, which often have a low signal-to-noise ratio for addressing the problem at hand. In many cases, the raw information is so overwhelming that stakeholders are often at a loss to know even where to begin to make sense of it. In addition, these data may vary in terms of subjectivity and importance, ranging from individual opinions and estimations to broadly accepted practices and indisputable measurements and scientific results. They also differ widely in how easily they can be understood by humans and interpreted by machines.

by Rafael Accorsi and Lutz Lowis

A key obstacle to the development of large-scale, reliable Cloud Computing is the difficulty of timely compliance certification of business processes operating in rapidly changing Clouds. Standard audit procedures are hard to conduct for Cloud-based processes. ComCert is a novel, well-founded approach to enable automatic compliance certification of business processes with regulatory requirements.

Next issue: July 2021
Special theme:
"Privacy-Preserving Computation"