"This article may be confusing for some readers, and should be edited to enhance clarity." That's the editorial foreword at the top of Wikipedia's entry for Web 2.0, an entry which says straight off, "a consensus upon [the term's] exact meaning has not yet been reached" (all quotes as of the time of writing, but subject to change). I discovered, after I started writing this post, that searches for Web 2.0 were briefly in the top ten on Technorati's blog index, so bloggers at least are keen to know more. I found all of that strangely reassuring, because it meant it's not just my lack of brainpower to blame for my failure to get a firm handle on the concept.
But a presentation at this week's User-Generated Content seminar from Colin Donald of Futurescape gave me some more insight through reasonably concrete examples — the kind of use cases that I was missing when I wrote about Digital Lifestyle Aggregation (a very Web 2.0 technology).
Colin has now put his notes and slides online. He spoke of 'Second Generation' blogs, which would be easier to create, would pull content from other sites, incorporate more multimedia, and use permission-based access so that different users see different things (only family and friends seeing personal content, for example).
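The permission-based access idea amounts to filtering posts by audience before they're served. Here's a minimal sketch, with made-up post titles and audience labels, of how different readers could end up seeing different things:

```python
# A sketch of permission-based access on a 'second generation' blog:
# each post carries an audience label, and readers see only what their
# relationship to the author allows. All names here are invented.
posts = [
    {"title": "Seminar write-up", "audience": "public"},
    {"title": "Holiday photos",   "audience": "family-and-friends"},
]

def visible_to(posts, reader_groups):
    """Return the posts a reader may see, given the groups they belong to."""
    return [p for p in posts if p["audience"] in reader_groups]

# A stranger sees only the public post; family and friends see both.
print([p["title"] for p in visible_to(posts, {"public"})])
print([p["title"] for p in visible_to(posts, {"public", "family-and-friends"})])
```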
Some examples of this approach are already available.
Pulling data from Upcoming.org, here are some events I'll be at.

From Flickr, here are some photos of the areas around where I live.

And I've previously included data from some of my playlists on gofish (happy birthday for last Saturday, Mr Young).
These snippets of syndicated content from other sites are referred to as badges, and one of the things I feel faintly uneasy about is the whiff of Boy Scouts you get from some blogs that have masses of different colours and sizes of badges all down their 'blogroll'.
The screenshot of a second generation blog that Colin Donald showed (see left, originally from SiliconBeat, and click the thumbnail for a full-screen 960KB version) was visually much more seamless and uniform than the hodge-podge of different styles you see above. It will need to be genuinely easy both to achieve this seamlessness and to allow some individuality of design. Although I'm no coding geek, I'm probably in the top 5-10% of the population when it comes to ability to deal with mark-up, and I hate having to tweak CSS to any degree. It was also a bugger trying to get this page to be valid XHTML with all the scripts that make the badges above.
One thing I would appreciate in second generation blogs is the ability selectively to syndicate content from a main blog (e.g. this one) to secondary blogs I have (e.g. my Ecademy one) — I used to do that manually, but it was too much hassle.
But these are the superficial elements of Web 2.0 use cases. It has to be about more than just adding dynamic versions of personal ephemera to blog sites. Colin persuasively described a coming era of 'participatory consumers' contributing to, and being informed by, 'mass criticism'. For this to happen, there has to be an enabling layer of what Colin referred to as 'structured data'. This includes machine-interpretable metadata that tells other computers that a particular web page is a review, that it's a review of War of the Worlds, and that this version of the War of the Worlds is the 02005 film, not the 01953 film, the Orson Welles radio play version, or the H.G.Wells book. Then a blogger or webmaster would be able to aggregate all these reviews, in a form of Electronic Press Kit. See Colin's notes for screenshots of how this might look.
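To make the idea concrete, here is a sketch of what that machine-interpretable review metadata might look like once parsed, loosely modelled on the microformats hReview draft. The field names and the matching function are illustrative, not any standard:

```python
# A sketch of structured review metadata: enough for another computer to
# tell that this is a review, what it reviews, and which version of the
# work is meant. Field names here are made up for illustration.
review = {
    "type": "review",
    "item": {
        "title": "War of the Worlds",
        "medium": "film",
        "year": 2005,  # distinguishes this from the 1953 film, the radio play, or the book
    },
    "rating": 4,
    "summary": "Spielberg's take on the Wells story.",
}

def matches(review, title, medium=None, year=None):
    """Would this review belong in an aggregated 'press kit' for the given work?"""
    item = review["item"]
    return (item["title"] == title
            and (medium is None or item["medium"] == medium)
            and (year is None or item["year"] == year))

# An aggregator could then collect only reviews of the 02005 film:
print(matches(review, "War of the Worlds", medium="film", year=2005))  # True
print(matches(review, "War of the Worlds", year=1953))                 # False
```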
Reviews in this context are not limited to films and albums, but could extend to mobile phones, washing machines, conferences and seminars, buildings and locations (linked to GPS systems) — more or less anything you can think of, including people (reputation and testimonials become very important).
What I'm still not clear about is how structured data might mesh with folksonomy-derived metadata. I imagine structured data have to be, well, structured and more-or-less standardised. Yet it's precisely the log-jam of standardisation in a world of competing technical and business interests that led to the dam-burst of more-or-less unstructured folksonomies in the first place. Can these two approaches meet in the middle? I feel the need to revisit John Seely Brown and Paul Duguid's book The Social Life of Information.
Posted by David Jennings in section(s) Human-Computer Interaction, Social Software on 15 November 02005

Thanks for the write-up, David!
Regarding the standardisation issue, it's significant that the Microformats group (Tantek Celik of Technorati et al) has steered clear of attempting to set standards for exactly the log-jam reason you mention, favouring the adaptation of existing standards. From their wiki:
"The hCard format is a 1:1 representation of the vCard standard, in semantic XHTML."
http://www.microformats.org/wiki/hcard
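That 1:1 mapping means you can generate hCard markup mechanically from vCard-style fields. A rough sketch: the class names `vcard`, `fn`, `org` and `url` are real hCard classes, but the generator itself (and the example person and URL) is just an illustration:

```python
# A sketch of the hCard idea: each vCard property maps onto a class name
# in semantic XHTML. The example data below is made up.
def hcard(fn, org=None, url=None):
    parts = ['<div class="vcard">']
    if url:
        parts.append('<a class="url fn" href="%s">%s</a>' % (url, fn))
    else:
        parts.append('<span class="fn">%s</span>' % fn)
    if org:
        parts.append('<span class="org">%s</span>' % org)
    parts.append('</div>')
    return "".join(parts)

print(hcard("A. Blogger", org="Example Corp", url="http://example.com/"))
```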
It seems to me that this may give their approach a relatively fast uptake.
As for the structured content standard vs folksonomy, I think that the two will co-exist, much as Google Base allows, with a set of fields that "ought" to be there (date, time, location etc for an event) and more fields for free-form tags. Then it will be up to the end user - individual, aggregator, search engine - how to search for and present it.
Maybe it's going to be more like mashing than meshing, with individuals (enabled by aggregators and SEs) taking what they want from the structured content and what they want from the tagged content.
For instance, a user searches a music review aggregator for all the reviews of heavy metal gigs in Liverpool in 2005 and automatically also receives a tag cloud with the results. Then they use the tag cloud as a basis for making a new view of the reviews on their personal blog.
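That scenario might look something like this in a rough Python sketch: search on the structured fields, then build the tag cloud from the free-form tags on whatever matches. All the review data here is made up for illustration:

```python
# Structured fields and folksonomy tags co-existing: filter on the fields,
# then summarise the tags across the result set. The tag cloud is just
# tag frequencies; a blog would render higher counts in bigger type.
from collections import Counter

reviews = [
    {"genre": "heavy metal", "city": "Liverpool", "year": 2005,
     "tags": ["loud", "encore", "cavern"]},
    {"genre": "heavy metal", "city": "Liverpool", "year": 2005,
     "tags": ["loud", "support-band"]},
    {"genre": "folk", "city": "Liverpool", "year": 2005,
     "tags": ["acoustic"]},
]

hits = [r for r in reviews
        if r["genre"] == "heavy metal"
        and r["city"] == "Liverpool"
        and r["year"] == 2005]

cloud = Counter(tag for r in hits for tag in r["tags"])
print(cloud.most_common())  # [('loud', 2), ...]
```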
Posted by: Colin Donald on 27 November 02005 at 2:04 PM