The Path to Web n+1
Introduction to the Special Theme
by Lynda Hardman and Steven Pemberton
The Sapir-Whorf Hypothesis postulates a link between thought and language: if you haven't got a word for a concept, you can't think about it; if you don't think about it, you won't invent a word for it. The term "Web 2.0" is a case in point. It was invented by a book publisher as a term to build a series of conferences around, and it conceptualises the idea of Web sites that gain value when their users add data to them. The concept existed before the term: eBay was already Web 2.0 in the era of Web 1.0. But now that we have the term we can talk about it, it becomes a structure in our minds, and in this case a movement has built up around it.
There are inherent dangers for users of Web 2.0. For a start, by putting a lot of work into a Web site, you commit yourself to it and lock yourself into its data formats. This is similar to the data lock-in you experience when you use a proprietary program: you commit yourself, you lock yourself in, and moving comes at great cost. This was one of the justifications for creating the Extensible Markup Language (XML): it reduces the possibility of data lock-in, and having a standard representation for data also helps in using the same data in different ways.
As an example, suppose you commit to a particular photo-sharing Web site, upload thousands of photos and tag them extensively, and then a better site comes along. What do you do? What if the site you have chosen closes down (as has happened with some Web 2.0 music sites)? All your work is lost. How do you decide which social networking site to join? Do you join several and repeat the work? How about genealogy sites, and school-friend sites? These are all examples of Metcalfe's law, which postulates that the value of a network is proportional to the square of the number of nodes in the network. Simple maths shows that if you split a network into two, its value is halved. This is why it is good that there is a single email network, and bad that there are many instant-messenger networks. It is why it is good that there is only one World Wide Web.
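The arithmetic behind the halving claim is quick to verify: a network of n nodes has value proportional to n squared, so two separate networks of n/2 nodes each are worth 2 × (n/2)² = n²/2, half the value of the undivided network. A minimal sketch (the proportionality constant is arbitrarily taken as 1 for illustration):

```python
def metcalfe_value(nodes: int) -> int:
    """Metcalfe's law: a network's value is proportional to the
    square of its node count (constant of proportionality taken as 1)."""
    return nodes ** 2

whole = metcalfe_value(1000)      # one network of 1000 users
split = 2 * metcalfe_value(500)   # the same 1000 users split across two networks

print(whole, split, split / whole)  # 1000000 500000 0.5
```

The ratio is 0.5 regardless of network size, which is the sense in which partitioning the Web into disconnected sub-Webs destroys value.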
Web 2.0 partitions the Web into a number of topical sub-Webs, and locks you in, thereby reducing the value of the network as a whole.
The advantage of the semantic Web, however, is that it allows your data to be distributed over the Web rather than centralised. By adding metadata to your data, it allows aggregation services to offer the same functionality as Web 2.0, without the lock-in and without the potential loss of data should the service die. To enable the inclusion of non-textual media in the semantic Web we need to extend existing languages, and to be aware that existing interfaces are based on assumptions that may no longer apply. Articles in this special issue describe emerging technologies that will help make use of modalities beyond the visual and audible, for the whole, mobile, Web population.
The Web 2.0 phenomenon could not have been predicted when the first Hypertext Transfer Protocol (HTTP) was being developed. Similarly, as we go beyond human-created links to machine-integrated data, we cannot foresee the applications and social phenomena that may emerge, even in the near future.
In this special issue we are not seeking to predict where the Web will go along its apparently accelerating path, but to give insights into the emerging core technologies that will enable the continuing revolution in our species' relationship to creating, using and maintaining information.
Lynda Hardman, CWI
and Eindhoven University of Technology, The Netherlands
Steven Pemberton, CWI, The Netherlands and W3C