Introduction to the Special Theme

by Lynda Hardman and Steven Pemberton

The Sapir-Whorf Hypothesis postulates a link between thought and language: if you haven't got a word for a concept, you can't think about it; and if you don't think about it, you won't invent a word for it. The term "Web 2.0" is a case in point. It was invented by a book publisher as a term around which to build a series of conferences, and it conceptualises the idea of Web sites that gain value from their users adding data to them. Yet the concept existed before the term: eBay was already Web 2.0 in the era of Web 1.0. Now that we have the term, however, we can talk about the concept; it becomes a structure in our minds, and in this case a movement has built up around it.

An interview with Frank van Harmelen

The semantic Web will be a considerable part of the future Web. What is the difference between the semantic Web and artificial intelligence? And what about Web 2.0? Frank van Harmelen, a computer scientist based in the Netherlands and a specialist in the semantic Web, answers some questions.

by Boris Motik

Scalability of ontology reasoning is a key factor in the practical adoption of ontology technologies. The KAON2 ontology reasoner has been designed to improve scalability in the case of reasoning over large data sets. It is based on a novel reasoning algorithm that builds upon extensive research in relational and deductive databases.

by Lars Bröcker

The semantic Web offers exciting opportunities for scientific communities: knowledge bases with underlying ontologies promote the process of collaborative knowledge creation and facilitate research based on the published body of work. However, since most scientific communities do not possess the knowledge required to build an ontology, these opportunities tend not to be taken up. The WIKINGER project aims to provide tools that largely automate the creation of an ontology from a domain-specific document collection.

by Kees van der Sluijs and Geert-Jan Houben

While it is generally desirable that huge cultural collections should be opened up to the public, the paucity of available metadata makes this a difficult task. Researchers from the Eindhoven University of Technology in the Netherlands have built a Web application framework for opening up digital versions of these multimedia documents with the help of the public. This leads to a win-win situation for users and content providers.

by Peter Haase, Enrico Motta and Rudi Studer

The NeOn project is investigating the entire life cycle of networked ontologies that enable complex semantic applications. As the amount of semantic information available online grows, semantic applications are becoming increasingly widespread, Web-centric and complex. The NeOn Toolkit and the NeOn methodology lie at the core of the NeOn vision, defining the standard reference infrastructure and the standard development process for creating and maintaining large-scale semantic applications. The first version of the NeOn Toolkit for ontology engineering has just been released as open source by the NeOn Consortium.

by Ben Adida

RDFa, a W3C proposal about to enter Last Call, will help bridge the clickable and semantic Webs. Publishers of current HTML Web pages can augment their output with interoperable, structured data. Web users can extract and process this data straight from its pleasant visual rendering within their browsers. RDFa will enable semantic Web data to progressively emerge from the existing, visual Web.
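The idea behind RDFa is that structured data rides along in attributes of ordinary HTML, from which tools can extract RDF triples. As a rough illustration only (not a conforming RDFa processor, and using invented FOAF example data), the following Python sketch pulls simple (subject, property, value) triples out of RDFa-style `about` and `property` attributes with the standard library's HTML parser:

```python
# Illustrative sketch: extract simple (subject, property, value) triples
# from RDFa-style 'about'/'property' attributes. A real deployment would
# use a conforming RDFa parser; this only shows the markup idea.
from html.parser import HTMLParser

RDFA_HTML = """
<div about="http://example.org/people/alice">
  <span property="foaf:name">Alice</span>
  <span property="foaf:mbox">alice@example.org</span>
</div>
"""

class SimpleRDFaExtractor(HTMLParser):
    def __init__(self):
        super().__init__()
        self.subject = None      # current 'about' subject URI
        self.pending = None      # property awaiting its text value
        self.triples = []        # collected (subject, property, value)

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "about" in attrs:
            self.subject = attrs["about"]
        if "property" in attrs:
            self.pending = attrs["property"]

    def handle_data(self, data):
        # The element's text content becomes the property's value.
        if self.pending and data.strip():
            self.triples.append((self.subject, self.pending, data.strip()))
            self.pending = None

extractor = SimpleRDFaExtractor()
extractor.feed(RDFA_HTML)
for triple in extractor.triples:
    print(triple)
```

The point of the exercise is that the same `<span>` elements a browser renders visually also carry machine-readable statements, which is exactly the bridge between the clickable and semantic Webs that the abstract describes.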

by Pablo Cesar, Dick Bulterman and Jack Jansen

Making multimedia a first-class citizen on the Web requires major efforts across the community. European projects such as Passepartout (ITEA) and SPICE (IST IP) show the need for a standardized mechanism that provides rich interaction with continuous media content. CWI is helping to build a framework that adds a temporal dimension to existing atemporal Web browsers.

by Raphaël Troncy

Creating, organising and publishing findable media on the Web raises unsolved problems. While most applications that process multimedia assets use some form of metadata to describe the content, they are often based on diverse metadata standards. In the W3C Multimedia Semantics Incubator Group (XG), we have demonstrated, for various use cases, the added value of combining several of these standards in a single application. We have also shown how semantic Web technology can help to make these standards interoperable.

by David Lewis and Kevin Feeney

The World Wide Web is witnessing an explosion in new forms of online community. These are built on advances in Web content posting, social networks and the wisdom of crowds. However, we are still largely ignorant about the factors that enable online communities to react successfully to change. Change can involve a number of things: swings in membership and levels of engagement; the emergence of internal conflicts; changing relationships with other online communities and offline organisations; and increasingly, changes caused by new computer-mediated communication technology.

by Pirjo Näkki

For most users, the Web is about communication rather than technology. The rise of so-called social media shows that when people are provided with simple tools and easy access to online content, they find new ways to utilize the Internet. The VTT Open Web Lab (Owela) is studying how social media tools can also be utilized in innovation and product design processes.

by Pär J. Ågerfalk and Jonas Sjöström

Web 2.0 and the commercial interest in open-source software both reflect a current trend towards increased user involvement in product and service development. To stay competitive in this era of open innovation, companies must learn to trust users as co-developers and to make use of the Web as an instrument for identity cultivation.

by Pierre Senellart, Serge Abiteboul and Rémi Gilleron

A large part of the Web is hidden to present-day search engines, because it lies behind forms. Here we present current research (centred around the PhD thesis of the first author) on the fully automatic understanding and use of the services of the so-called hidden Web.

by Pierre Genevès and Nabil Layaïda

Static analysers for programs that manipulate Extensible Markup Language (XML) data have been successfully designed and implemented, based on a new tree logic, by the WAM (Web, Adaptation and Multimedia) research team, a joint lab of INRIA and the Laboratoire d'Informatique de Grenoble. The logic is capable of handling the XML Path Language (XPath) and XML types such as Document Type Definitions (DTDs) and XML Schemas.

by Alexandre Bergel, Stéphane Ducasse and Lukas Renggli

Page-centric Web application frameworks fail to offer adequate solutions to model composition and control flow. Seaside allows Web applications to be developed in the same way as desktop applications. Control flow is modelled as a continuous piece of code, and components may be composed, configured and nested as one would expect from traditional user interface frameworks.
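Seaside is written in Smalltalk and models a multi-page dialogue as one continuous piece of code using continuations. As a loose analogy only (not Seaside's actual API), a Python generator can sketch the same idea: each `yield` suspends the flow at a "page", and sending the user's form input resumes it, so the whole dialogue reads top to bottom like a desktop program. The checkout scenario below is invented for illustration.

```python
# Rough analogy to continuation-based Web flow: one generator function
# plays the role of a multi-page dialogue. Each yield suspends at a
# page prompt; send() resumes with the user's input.

def checkout_flow():
    name = yield "Please enter your name"
    quantity = yield f"Hello {name}, how many items?"
    confirmed = yield f"Confirm order of {quantity} item(s)? (yes/no)"
    return "Order placed" if confirmed == "yes" else "Order cancelled"

flow = checkout_flow()
print(next(flow))          # show first page
print(flow.send("Alice"))  # resume with the submitted form value
print(flow.send(3))
try:
    flow.send("yes")
except StopIteration as done:
    print(done.value)      # final page of the flow
```

The contrast with page-centric frameworks is that no per-page handler or explicit session state machine appears anywhere: the local variables of the single function carry the state between requests.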

by Constantina Doulgeraki, Alexandros Mourouzis and Constantine Stephanidis

EAGER is an advanced toolkit that helps Web developers to embed accessibility and usability for all in their artefacts. Web applications developed by means of EAGER have the ability to adapt to the interaction modalities, metaphors and user-interface elements most appropriate to each individual user and context of use.

by Fabio Paternò, Carmen Santoro and Antonio Scorcia

An environment developed at the Human Interfaces in Information Systems (HIIS) Laboratory of ISTI-CNR supports the migration of Web user interfaces across different devices. The goal is to provide user interfaces that are able to move across different devices, even offering different interaction modalities, in such a way as to support task continuity for the mobile user. This is achieved through a number of transformations that exploit logical descriptions of the relevant user interfaces.

by Adrian Stanciulescu, Jean Vanderdonckt and Benoit Macq

Multimodal Web applications provide end-users with a flexible user interface (UI) that allows graphical, vocal and tactile interaction. As experience in developing such multimodal applications grows, the need arises to identify and define the major design options for these applications, in order to guide designers towards a structured development life cycle.

by Lora Aroyo

Personalization and user experience are key challenges for the effective use of current consumer electronics. VU University Amsterdam has demonstrated the use of semantic Web technology for personalization in two projects: CHIP, which combines the experience of a physical museum with mobile devices and the Web, and iFanzy, which offers a personalized selection of digital TV content in a cross-media environment.

by Dimitrios Skoutas, Alkis Simitsis and Timos Sellis

Scientists from the National Technical University of Athens and the IBM Almaden Research Center are proposing a novel infrastructure for ranking and selecting Web services using semantic Web technology. This approach uses the measures of recall and precision to evaluate the similarity between requested and provided services, and expresses that similarity as a continuous value in the range of 0 to 1.
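To make the recall/precision idea concrete, here is a hedged sketch of one plausible scoring scheme: each service is reduced to a set of capability terms, and precision and recall over the matched terms are combined into a single score in [0, 1] via their harmonic mean (the F-measure). The actual infrastructure matches ontology concepts rather than plain strings, so the set-based view and the travel-service terms below are illustrative simplifications.

```python
# Sketch: similarity between a requested and a provided service as the
# harmonic mean (F-measure) of precision and recall over matched terms.

def similarity(requested: set, provided: set) -> float:
    matched = requested & provided
    if not matched:
        return 0.0
    precision = len(matched) / len(provided)  # how much of the offer is relevant
    recall = len(matched) / len(requested)    # how much of the request is covered
    return 2 * precision * recall / (precision + recall)

requested = {"book_flight", "reserve_hotel", "rent_car"}
provided = {"book_flight", "reserve_hotel", "travel_insurance"}
print(round(similarity(requested, provided), 3))  # prints 0.667
```

A continuous score like this lets candidate services be ranked rather than merely accepted or rejected, which is the point of expressing similarity as a value between 0 and 1.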

by Kyriakos Kritikos and Dimitris Plexousakis

The success of the Web service paradigm has led to a proliferation of available services. While sophisticated semantic (functional) discovery mechanisms have been invented to overcome UDDI's syntactic solution, the number of functionally equivalent Web services returned is still large. The solution to this problem is the description of the non-functional aspect of Web services, in particular quality of service (QoS), which is directly related to their performance. We are currently developing a semantic framework, which includes ontologies and matchmaking algorithms, for the semantic QoS-based description and discovery of Web services.
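As a rough illustration of QoS-based filtering (a simplification, not the framework's ontology-based matchmaking), a request can state bounds on QoS metrics, with each metric marked as "higher is better" or "lower is better", and a service matches only if every offered value satisfies the corresponding bound. The metric names and the conformance rule below are illustrative assumptions.

```python
# Sketch of QoS-based matchmaking: an offer conforms to a request if
# every requested metric is advertised and its value meets the bound.
# direction "max": offered value must be >= the requested bound;
# direction "min": offered value must be <= the requested bound.
QOS_DIRECTIONS = {"availability": "max", "throughput": "max",
                  "latency_ms": "min", "cost": "min"}

def qos_match(request: dict, offer: dict) -> bool:
    for metric, bound in request.items():
        offered = offer.get(metric)
        if offered is None:
            return False  # metric not advertised at all
        if QOS_DIRECTIONS[metric] == "max" and offered < bound:
            return False
        if QOS_DIRECTIONS[metric] == "min" and offered > bound:
            return False
    return True

request = {"availability": 0.99, "latency_ms": 200}
offers = {
    "svcA": {"availability": 0.995, "latency_ms": 150, "cost": 5},
    "svcB": {"availability": 0.97, "latency_ms": 90, "cost": 1},
}
print([name for name, offer in offers.items() if qos_match(request, offer)])
```

Filtering functionally equivalent candidates on such non-functional constraints is what narrows the large result sets that purely functional discovery leaves behind.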

by Walter Binder, Ion Constantinescu, Boi Faltings and Radu Jurca

The creation of compound, service-oriented Web applications is a tedious, manual task. The software designer must search for relevant services, study their application programming interfaces (APIs) and integrate them into the desired application, while also taking into account non-functional aspects such as service cost or reliability. We have been investigating models and algorithms to automate the service integration process, resulting in novel service composition algorithms that combine artificial intelligence (AI) planning techniques with advanced dynamic matchmaking in large-scale Web service directories.
