by Anja Feldmann, Mario Kind, Olaf Maennel, Gregor Schaffrath and Christoph Werle

While the Internet is widely viewed as a success by some of its participants, namely users and service providers such as Google, it also suffers from ossification in its underlying infrastructure. This ossification has multiple causes, among them the fact that, since the Internet works quite well as it is, Internet Service Providers (ISPs) have little incentive to change their ways. Moreover, ISPs suffer from a lack of business perspective due to the predominant charging models for Internet access: flat rates for users, and a combined price model consisting of a base rate and a usage-based component for content providers. An additional complication is that traffic grows faster than network equipment costs decrease.

As a consequence, there is debate within the ISP community over whether ISPs should become pure bit-pipe providers or should offer value-added services. In addition, some service providers, such as Google, have found that some application support inside the network can help their applications, and are therefore considering investing in network infrastructure.

Indeed, over the last twenty years almost all innovation, eg novel applications, has taken place at the edge of the network, while the core remains almost untouched. However, the time has come to support novel applications with services inside the network (eg via network-based enablers), and to revisit the Internet architecture to add native support for security, mobility and manageability.

To circumvent the difficulty of changing successful networks, the concept of overlays has proven very useful. The Internet itself, for example, started as an overlay on top of the phone network. A key insight is that each overlay can be considered a virtual network.

Virtualization is an old but very successful technique for CPUs, memory, storage and almost all other system resources. Fundamentally, virtualization is an abstraction concept that hides hardware details, eg to cope with heterogeneity. It effectively offers a level of indirection as well as resource sharing. The former improves flexibility, while the latter enables partitioning as well as reuse of resources, resulting in higher efficiency. Achieving this, however, requires resource separation with a sufficient level of isolation.

Within the last five years, end-system virtualization, eg via Xen or VMware, has revamped the server business. Router vendors such as Cisco and Juniper offer router virtualization, and existing techniques such as MPLS (Multiprotocol Label Switching), GMPLS (Generalized MPLS) and VPNs (Virtual Private Networks) offer some coarse-grained link virtualization. Overlays such as peer-to-peer (P2P) networks over the Internet (eg BitTorrent) can also be seen as virtual networks, but they suffer from a lack of sufficient isolation. VPNs (eg realized via MPLS) can likewise be seen as virtual networks; however, they suffer from a lack of node programmability.

Indeed, a significant part of the current Internet infrastructure either already supports or has the potential to support a basic form of network virtualization. Moreover, due to high operational costs, some sharing of network resources among network operators already exists. For example, T-Mobile UK and 3UK share network sites. As such we need to explore the technical feasibility and potential business opportunities that virtualization can offer while overcoming Internet ossification.

The first observation is that it is possible to treat the current Internet as one future virtual network, which implies that one does not have to ‘change the running system’. The next observation is that service providers can potentially operate their own virtual network according to their needs, eg, to offer a value-added service. A virtual network here may imply operating a non-IP network that may require low-level access to each ‘slice’ of each network device. We also point out that each of these virtual networks can be built and operated according to different design criteria; for instance, they could optimize a specific network metric like throughput, latency, or security. This is possible as long as the virtual networks are properly isolated, which requires (among other things) corresponding Quality-of-Service (QoS) support in the underlying network. For example, one network might be optimized for anonymity while another is optimized for accountability. Virtual networks offer the added benefit to the service provider that their resources (eg node or link resources as well as the topology) can be increased or decreased gradually in line with the popularity of the service. They offer the benefit of resource migration and resource aggregation to the network infrastructure provider.

Nevertheless, network virtualization creates a tussle among service and infrastructure providers over who should operate and who should manage such virtual networks. Hence, there is a need for additional players besides physical infrastructure providers (PIPs) and service providers (SPs): virtual network providers (VNPs), for assembling virtual resources from one or multiple PIPs into a virtual network, and virtual network operators (VNOs), for the installation and operation of the VNet provided by the VNP according to the needs of the SP.

In terms of business relationships, the VNP buys its bit pipes from one or several PIPs. The VNO uses the resources assembled by the VNP to operate a virtual network according to the needs of an SP. Note that virtualization enables a VNP to act as a PIP to another VNP.
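The role chain described above can be illustrated with a small sketch. All class and field names here are hypothetical, chosen for illustration only; the article does not define concrete interfaces. The sketch models a VNP leasing link slices from several PIPs, and expresses the recursion noted above (a VNP acting as a PIP to another VNP) via subclassing:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class VirtualResource:
    """A slice of a physical node or link, leased from a provider."""
    kind: str          # "node" or "link"
    capacity: float    # eg Gbit/s for links, CPU share for nodes
    owner: str         # identifier of the provider leasing it out

class PIP:
    """Physical infrastructure provider: owns substrate resources."""
    def __init__(self, name: str):
        self.name = name

    def lease(self, kind: str, capacity: float) -> VirtualResource:
        # A real PIP would perform admission control and set up
        # isolation/QoS here; this sketch just hands out a slice.
        return VirtualResource(kind, capacity, self.name)

class VNP(PIP):
    """Virtual network provider: assembles resources from one or more
    PIPs into a virtual network. Subclassing PIP reflects that a VNP
    can in turn act as a PIP towards another VNP."""
    def __init__(self, name: str, suppliers: List[PIP]):
        super().__init__(name)
        self.suppliers = suppliers

    def assemble(self, link_demands: List[float]) -> List[VirtualResource]:
        # Naive round-robin assembly across suppliers; a real VNP would
        # run a resource-embedding algorithm against PIP offers.
        return [self.suppliers[i % len(self.suppliers)].lease("link", d)
                for i, d in enumerate(link_demands)]
```

For example, `VNP("vnp1", [PIP("pip-a"), PIP("pip-b")]).assemble([10.0, 1.0, 2.5])` yields three link slices spread across the two PIPs; because `VNP` is itself a `PIP`, it can appear in another VNP's supplier list.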

Requirements and Conclusion
To realize the benefits of virtualization, we need an architecture for network virtualization that encompasses the players PIP, VNP, VNO and SP. On the technical side, we need standardized interfaces between the players to automate the setup of virtual networks, ie, a common control plane. Moreover, we need ways in which each player can check if it is being provided with the service it is paying for (eg in terms of QoS). Furthermore, it must be possible for the PIP to render/delegate low-level management of the virtualized pieces of the network components via the VNP to the VNO. Of course, for virtualization to succeed it must be accepted by each player that some information is hidden; for example, the SP should not be able to know exactly which link within the PIP is being used by a certain connection. However, not all information can be hidden; for example, an SP might want to specify the reach of its virtual network. Therefore, we need to explore the trade-offs between the level of specification of the virtual network and the flexibility of optimizing the resource usage. Another challenge associated with information hiding is 'debuggability', eg, figuring out who is responsible if something does not work as it is supposed to. Still, the unique opportunity is that when business processes are being redesigned, it will be possible to completely trace a link in a virtual network as used by the SP to a link in a PIP.

On the business side, we need agreements on the kinds of contract and their prices. Moreover, we need support for accountability, audits and security.
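The interplay of a common control plane and information hiding can be sketched as a request/response exchange. All names and fields below are hypothetical illustrations, not part of any standard: the requesting side specifies only the reach and QoS of the desired virtual network, and the offer exposes only virtual links, keeping the mapping onto physical paths hidden inside the PIP:

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class VNetRequest:
    """What an SP/VNO exposes on the control plane: desired reach and
    QoS targets, but no physical details."""
    endpoints: List[str]          # eg PoP locations the VNet must reach
    min_bandwidth_gbps: float
    max_latency_ms: float

@dataclass
class VNetOffer:
    """What the VNP/PIP returns: virtual links plus measurable SLA
    figures the VNO can audit. The physical embedding stays hidden."""
    virtual_links: List[Tuple[str, str]]
    sla: Dict[str, float]

def negotiate(req: VNetRequest) -> VNetOffer:
    # Toy response: offer a full mesh over the requested endpoints.
    # A real VNP would run embedding and admission control here.
    links = [(a, b) for i, a in enumerate(req.endpoints)
                    for b in req.endpoints[i + 1:]]
    return VNetOffer(links, {"bandwidth_gbps": req.min_bandwidth_gbps,
                             "latency_ms": req.max_latency_ms})
```

The `sla` dictionary is the hook for the verification requirement above: each player can measure the delivered QoS against these figures without ever seeing the supplier's internal topology.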

Virtualization shows a lot of promise. Nevertheless, each player must determine for itself the role it wants to take. For this, it must analyse its business position and determine the potential benefits and dangers of each role. As such, further work is needed on both the technical and the business aspects of network virtualization.

Link:
4WARD Project: http://www.4ward-project.eu

Please contact:
Anja Feldmann, Olaf Maennel,
Gregor Schaffrath
TU Berlin/Deutsche Telekom Laboratories, Germany
E-mail: {anja, olaf, grsch}@net.t-labs.tu-berlin.de

Mario Kind
Deutsche Telekom Laboratories, Germany
E-mail: mario.kind@telekom.de

Christoph Werle
Karlsruhe University, Germany
E-mail: werle@tm.uka.de
