

W3C

Use Cases and Requirements for Ontology and API for Media Object 1.0

W3C Working Draft @@ April 2009

This version:
http://www.w3.org/TR/2009/WD-media-annot-reqs-200904@@
Latest version:
http://www.w3.org/TR/media-annot-reqs
Editors:
WonSuk Lee, Electronics and Telecommunications Research Institute (ETRI)
Tobias Bürger, University of Innsbruck
Felix Sasaki, W3C Invited Expert
Véronique Malaisé, VU University of Amsterdam

Abstract

This document specifies use cases and requirements as an input for the development of the "Ontology for Media Object 1.0" and the "API for Media Object 1.0". The ontology will be a simple ontology to support cross-community data integration of information related to media objects on the Web. The API will provide read access and potentially write access to media objects, relying on the definitions from the ontology.

The main scope of this document is videos. Metadata for other media objects like audio or images will be taken into account if it is applicable to videos as well.

Status of this Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

This is an updated Working Draft of the Use Cases and Requirements for Ontology and API for Media Object 1.0 specification. It has been produced by the Media Annotations Working Group, which is part of the W3C Video on the Web Activity. The purpose of this publication is to reflect the progress of the Working Group. There are still topics, e.g. in the area of terminology, about which the Working Group has not reached consensus.

A list of changes and a diff-marked version against the previous version of this document are available.

Please send comments about this document to the public-media-annotation@w3.org mailing list (public archive).

Publication as a Working Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. The group does not expect this document to become a W3C Recommendation. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

Table of Contents

1 Introduction
2 Purpose of this draft publication
3 Purpose of the Ontology and the API
4 Terminology
5 Use Cases
    5.1 Interoperability between Media resources across Cultural Heritage Institutions
    5.2 Recommendation across different media types
    5.3 Life Log
    5.4 Access via web client to metadata in heterogeneous formats
    5.5 User generated Metadata
    5.6 Use cases: to be done
6 Requirements
    6.1 Requirement r01: Providing methods for getting structured or unstructured metadata out of media objects in different formats
    6.2 Requirement r02: Providing methods for setting metadata in media objects in different formats
    6.3 Requirement r03: Providing in the API a means for supporting structured annotations
    6.4 Requirement r04: Providing a means to access user-defined metadata
    6.5 Requirement r05: Providing the ontology as a simple set of properties
    6.6 Requirement r06: Specifying an internal or external format for the ontology
    6.7 Requirement r07: Introducing several abstraction levels in the ontology
    6.8 Requirement r08: Being able to apply the ontology / API for collections of metadata
    6.9 Requirement r09: Taking different roles in metadata processing into account
    6.10 Requirement r10: Being able to describe fragments of media objects
    6.11 Requirement r11: Providing the ontology in slices of conformance
    6.12 Requirement r12: Provide support for controlled vocabularies for the values of different properties
    6.13 Requirement r13: Allow for different return types for the same property

Appendices

A References
B References (Non-Normative)
C Change Log (Non-Normative)
D Acknowledgements (Non-Normative)


1 Introduction

Anticipating the increase in online video and audio in the upcoming years, we can foresee that it will become progressively more difficult for viewers to find content using current search tools. In addition, video services on the web that allow for the upload of video need to display selected information about the media documents, which could be facilitated by uniform access to selected metadata across a variety of file formats.

Unlike hypertext documents, it is more complex and sometimes impossible to deduce meta information about a medium, such as its title, author, or creation date, from its content. There has been a proliferation of media metadata formats with which document authors can express this information. For example, an image could potentially contain [EXIF], [IPTC] and [XMP] information. There are also several metadata solutions for media related content, including [MPEG-7], Yahoo! [MEDIA RSS], Google [Videositemaps], [VODCSV], [TVAnytime] and [EBU P-Meta]. Many of these formats have been extensively discussed in the deliverables [XGR Vocabularies] and [XGR Image Annotation] of the W3C Multimedia Semantics Incubator Group, which provide a major input to this Working Group.

The "Ontology for Media Object 1.0" will address the intercompatiblity problem by providing a common set of properties to define the basic metadata needed for media objects and the semantic links between their values in different existing vocabularies. It will help circumventing the current proliferation of video metadata formats by providing full or partial translation and mapping between the existing formats. The ontology will be accompanied by an API that provides uniform access to all elements defined by the ontology, which are selected elements from different formats.

This document specifies the use cases and requirements that motivate the development of the "Ontology for Media Object 1.0". The scope is mainly video media objects, but other media objects are also taken into account if their metadata information is related to video.

The development of the requirements has three major inputs: Use cases, analysis of existing standards, and a description of canonical media processes.

2 Purpose of this draft publication

This initial version of this document contains only a small set of use cases and requirements. Nevertheless it is being published to gather wide feedback on the general direction of the Working Group. Hence, we would especially like to encourage feedback on 6 Requirements, both on the requirements which we are planning to implement and on those which we are not planning to take into account.

Currently, there is an additional section under development, describing a top-down modeling approach to the media annotation problem. The Working Group is considering publishing that section in an updated version of this document.

3 Purpose of the Ontology and the API

The following figure visualizes the purpose of the ontology, the purpose of the API, and their relation to applications.

Purpose of the ontology and the API

The ontology will define mappings from properties in formats to a common set of properties. The API then will define methods to access heterogeneous metadata, using such mappings. An example: the property createDate from XMP [XMP] can be mapped to the property DateCreated from IPTC [IPTC]. The API will then define a method getCreateDate that will return values either from XMP or IPTC metadata.
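
As an illustration of how such a mapping could be surfaced to applications, the following TypeScript sketch maps the two format-specific date properties to one common property and reads the value through a single accessor. The interface, the mapping table and the getCreateDate signature are assumptions made for this example only; the actual API is still to be defined.

// Illustrative sketch only: the interface, the mapping table and the function
// below are assumptions for this example, not part of the Ontology or API drafts.

// A common property together with the format-specific properties mapped to it.
interface PropertyMapping {
  common: string;
  sources: { format: string; property: string }[];
}

const createDateMapping: PropertyMapping = {
  common: "createDate",
  sources: [
    { format: "XMP",  property: "createDate" },
    { format: "IPTC", property: "DateCreated" },
  ],
};

// Raw metadata extracted from a media object, keyed by "format/property".
type RawMetadata = Record<string, string>;

// Returns the creation date from whichever mapped source property is present.
function getCreateDate(metadata: RawMetadata): string | undefined {
  for (const src of createDateMapping.sources) {
    const value = metadata[src.format + "/" + src.property];
    if (value !== undefined) {
      return value;
    }
  }
  return undefined;
}

// The same call works for an object carrying XMP metadata and for one
// carrying IPTC metadata.
getCreateDate({ "XMP/createDate": "2009-04-01T10:00:00Z" });
getCreateDate({ "IPTC/DateCreated": "20090401" });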

An important aspect of the above figure is that everything visualized above the API is left to applications, for example:

  • languages for simple or complex queries

  • analysis of user preferences (like "preferring movies with actor X and suitable for children")

  • other mechanisms for accessing metadata

The ontology and the API provide merely a basic, simple means of interoperability for such applications.

4 Terminology

The keywords MUST, MUST NOT, SHOULD and SHOULD NOT are to be interpreted as defined in [RFC 2119].

5 Use Cases

5.1 Interoperability between Media resources across Cultural Heritage Institutions

Summary: Accessing media collections of different cultural heritage institutions (libraries, museums, archives, etc.) on the Web.

Related requirements:

Description / Example:

The collections of cultural heritage institutions (libraries, museums, archives, etc.) are increasingly digitised and made available on the Web. These collections range from text to image, video and audio (music and radio collections, for example). A comprehensive, professionally created documentation is usually available, however, often using domain specific or even proprietary metadata models. This hinders accessing these collections in a homogeneous or centralized way and linking them across collections.

For example, Jane is a TV journalist searching for material about some event in contemporary history. She is interested in television clips and radio broadcasts from this event, along with photos and newspaper articles. All these resources come from different collections, and some are in different languages. A homogeneous way of accessing them across the Web would improve her work.

5.2 Recommendation across different media types

Summary: Accessing metadata of heterogeneous media objects as input for creating recommendations based on user preferences.

Related requirements:

Description / Example:

People nowadays are able to enjoy a large number of programs from different content providers (broadcasting companies, Internet video websites, etc.). To achieve a better user experience, reduce the user's experience of being overloaded, and hence retain users, some systems provide recommendations based on the user's history, ratings, or stated preferences. However, different content providers usually have their specific or proprietary metadata models, which is one of the key problems faced by recommendation service providers. A common ontology spanning different metadata sets can allow recommendation systems to return a better, larger, and more relevant selection than when the metadata systems are unrelated.

Company A is an IPTV value-added service provider. One of their services is to recommend programs that users might like, based on their watching history or explicit rating of programs. In this system, users are able to watch regular TV programs with electronic program guide (EPG) format metadata, videos from websites such as YouTube with website-specific metadata, etc. In order to perform uniform and effective recommendation in the absence of a common set of vocabularies, they would need to design their own integrated media annotation model.

5.3 Life Log

Use case summary: combining heterogeneous metadata from life logs, to allow searching personal life log information, potentially enriched with geolocation information.

Related requirements:

Description / Example:

With modern devices, a person can capture his or her experience, including all sorts of daily events, by creating image, audio and video files, and publish them on the Web. These are called "Life Logs". These Life Logs contain various information such as time, location, creator's profile, relations between different people, and even emotion. If accessed via an ontology providing links between the different metadata used to describe this information, a user could easily and efficiently search his or her personal Life Log information, including emotional information (this type of information can be described using a vocabulary like [Emotions ML 1.0]) or geolocation information on the Web (which can be described using the [Geolocation API] specification). Other people's Life Log contents could also be searched and accessed via this ontology.

5.4 Access via web client to metadata in heterogeneous formats

Use case summary: Accessing metadata in heterogeneous formats for web developers

Related requirements:

Description / Example:

John is developing a JavaScript library for accessing metadata of media objects (e.g. video) in various formats. These objects are available within a database, such as that of a search engine indexing the internet or other web-accessible content (e.g. a corporate repository, library, etc.). His library can be used to make queries about the media objects such as the following (a sketch of such query helpers is given after the list):

  • "Find me all media objects which have been created by a specified person"

  • "Find me all media objects which have been created this year"

  • "Find me all videos which are not longer than a specified time"

  • "Extract all user added tags from all media objects available"

This use case is related to many other use cases. Nevertheless it is mentioned separately since, in contrast to other use cases, its implementation requires only a small set of requirements. Also, the difference from the Cultural Heritage use case is that this use case does not require or propose developing an ontology language, but is very strongly tied to the requirement of a read-only client side API.

5.5 User generated Metadata

Use case summary: Adding or linking to external metadata by different users.

Related requirements:

Description / Example:

John wants to publish comments on the last movies he has seen on http://example.cheap-vod.com/. For each movie, he uses the description metadata field to provide a personal summary of the movie (with an incentive to see or avoid the movie according to his own opinions), and the ranking metadata. John is also not satisfied with the genre classification of the website, so he uses the genre metadata field to provide his appreciation of the genre with regard to a better scheme. He then publishes these metadata on his blog (maybe in the form of a podcast), but only links to the videos themselves.

Jane, a friend of John's and another cheap-vod customer, can now configure her cheap-vod account or her browser, to have John's metadata added to or replacing the original metadata embedded in each file.

Now Jane wants to study more particularly the characters of the movie. For making this easier, she defines one custom metadata field for each of the main characters, and sets these fields to "yes" or "no" for each sequence, to indicate if they contain that character or not. For example:

@prefix dc: <http://purl.org/dc/elements/1.1/> .
# "custom" is a user-defined vocabulary; this namespace URI is only an example.
@prefix custom: <http://example.org/custom#> .

<http://example.library.myschool.edu/rose.ogv#some_fragment_identifier>
dc:title "Meeting Tom Baxter" ;
dc:description "Cecilia sees the movie several times when...." ;
custom:cecilia "yes" ;
custom:tom "yes" ;
custom:gil "no" ;
custom:monk "no" .

In this context, the ontology would enhance the interoperability between different users.

5.6 Use cases: to be done

Editorial note 
In a future draft of this document, the following use cases will be spelled out separately, integrated into existing use cases or dropped.

6 Requirements

This section describes requirements for the ontology and the API. The Working Group has agreed to implement the following requirements. For the other requirements, there is no agreement yet, and the Working Group is asking reviewers of this document for feedback about their implementation.

The requirements which the Working Group has currently not agreed to take into account are the following:

6.4 Requirement r04: Providing a means to access user-defined metadata

Description: It MUST be possible to access user-defined metadata attached to media objects. "User-defined metadata" means metadata that is not defined in a standardized format, but is created entirely by the user.

Rationale: The ability to access user-defined metadata is necessary for the use case user generated metadata.

Target (API and / or ontology): the API, which needs to provide a method to add user-defined metadata, and the ontology, which needs to provide an extensibility mechanism.

Note:

"Accessing user-defined metadata" may mean setting or getting such metadata. We have not decided whether we will be able to support the process of setting metadata, see issues mentioned at Requirement r02: Providing methods for setting metadata in media objects in different formats.

6.7 Requirement r07: Introducing several abstraction levels in the ontology

Description: The ontology MUST provide several abstraction levels.

Rationale: Several metadata standards like [FRBR] or [CIDOC] allow referring to multimedia objects on several abstraction levels, in order to separate e.g. a movie, a DVD which contains the movie and a specific copy of the DVD. Especially for collections of multimedia objects, knowledge about such abstraction levels is helpful, as a means for accessing the objects on each level.

Target (API and / or ontology): ontology and potentially API, if we want to provide access to metadata and multimedia objects on several abstraction levels.
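
As an illustration of what such abstraction levels could look like, the TypeScript sketch below uses FRBR-like level names; both the level names and the linking property are assumptions made for this example, not decisions of the Working Group.

// Illustrative FRBR-like levels; the ontology has not yet chosen its levels.
type AbstractionLevel = "work" | "expression" | "manifestation" | "item";

interface LeveledResource {
  level: AbstractionLevel;
  uri: string;
  realizationOf?: string;  // link to the resource one abstraction level up
}

// The movie as an abstract work, a DVD edition containing it, and one copy.
const movie: LeveledResource = { level: "work", uri: "urn:example:movie" };
const dvdEdition: LeveledResource = {
  level: "manifestation",
  uri: "urn:example:dvd-edition",
  realizationOf: movie.uri,
};
const myCopy: LeveledResource = {
  level: "item",
  uri: "urn:example:dvd-copy-42",
  realizationOf: dvdEdition.uri,
};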

A References

RFC 2119
S. Bradner. Key Words for use in RFCs to Indicate Requirement Levels. IETF RFC 2119, March 1997. Available at http://www.ietf.org/rfc/rfc2119.txt.

B References (Non-Normative)

CIDOC
N. Crofts, M. Doerr, T. Gill, S. Stead, M. Stiff. Definition of the CIDOC Conceptual Reference Model, Version 5.0. Technical specification December 2008. Available at http://cidoc.ics.forth.gr/docs/cidoc_crm_version_5.0_Dec08.pdf.
EBU P-Meta
EBU Tech 3295: The EBU Metadata Release. European Broadcasting Union specification, 2007.
EBU Core
EBU Core Metadata Set. European Broadcasting Union specification, 2008.
Emotions ML 1.0
P. Baggia, F. Burkhardt. J. C. Martin, C. Pelachaud, C. Peter, B. Schuller, I. Wilson and E. Zovato. Elements of an EmotionML 1.0 . W3C Incubator Group Report 20 November 2008 . Available at http://www.w3.org/2005/Incubator/emotion/XGR-emotionml-20081120/.
EXIF
Exchangeable image file format for digital still cameras: Exif Version 2.2. JEITA Technical specification August 2002. Available at http://www.digicamsoft.com/exif22/exif22/html/exif22_1.htm.
FRBR
Functional Requirements for Bibliographic Records (FRBR). IFLA technical specification, 1998.
Geolocation API
A. Popescu. Geolocation API Specification. W3C Working Draft 22 December 2008. Available at http://www.w3.org/TR/2008/WD-geolocation-API-20081222/. The latest version of the Geolocation API specification is available at http://www.w3.org/TR/geolocation-API/ .
IPTC
IPTC Standard Photo Metadata 2008. IPTC Core Specification Version 1.1, IPTC Extension Specification Version 1.0, Document Revision 2, June 2008. Available at http://www.iptc.org/std/photometadata/2008/specification/IPTC-PhotoMetadata-2008.pdf
MEDIA RSS
Yahoo! Media RSS Module - RSS 2.0 Module. Technical specification March 2008. Available at http://search.yahoo.com/mrss.
MPEG-7
Information Technology - Multimedia Content Description Interface (MPEG-7). Standard No. ISO/IEC 15938:2001, International Organization for Standardization (ISO), 2001.
TVAnytime
TV-Anytime Metadata. The TV-Anytime specifications and schemas can be downloaded free of charge from http://www.tv-anytime.org/workinggroups/wg-md.html#docs.
Videositemaps
Google Video Sitemap. Example available at http://www.google.com/support/webmasters/bin/answer.py?answer=80472&topic=10079 .
VODCSV
Video-On-Demand Content Specification Version 2.0. CableLabs technical specification January 2007. Available at http://www.cablelabs.com/specifications/MD-SP-VOD-CONTENT2.0-I02-070105.pdf.
XGR Image Annotation
R. Troncy, J. v. Ossenbruggen, J. Z. Pan and G. Stamou. Image Annotation on the Semantic Web. W3C Incubator Group Report 14 August 2007. Available at http://www.w3.org/2005/Incubator/mmsem/XGR-image-annotation-20070814/.
XGR Vocabularies
M. Hausenblas. Multimedia Vocabularies on the Semantic Web. W3C Incubator Group Report 24 July 2007. Available at http://www.w3.org/2005/Incubator/mmsem/XGR-vocabularies-20070724/.
XMLTV
XMLTV Project. Available at http://wiki.xmltv.org/index.php/XMLTVProject.
XMP
XMP Specification Part 2 - Standard Schemas. Technical specification, Adobe 2008. Available at http://www.adobe.com/devnet/xmp/pdfs/XMPSpecificationPart2.pdf .

C Change Log (Non-Normative)

Date        Change
2009-01-19  Initial publication.
2009-03-16  Integrated comments from the Media Fragments Working Group, and Raphaël Troncy. See editing summary.
2009-03-16  Editing of the Cultural Heritage Institutions use case.
2009-03-19  Integrated comments from Jean-Pierre Evain.
2009-04-02  Removed the mobile use case.
2009-04-29  Integrated comments from David Singer, except the "More structural comments".
2009-04-29  Added a health warning to the status section about ongoing terminology discussions.

D Acknowledgements (Non-Normative)

This document is the work of the W3C Media Annotations Working Group.

Members of the Working Group are (at the time of writing, and by alphabetical order): Werner Bailer (K-Space), Tobias Bürger (University of Innsbruck), Eric Carlson (Apple, Inc.), Pierre-Antoine Champin ((public) Invited expert), Jaime Delgado (Universitat Politècnica de Catalunya), Jean-Pierre EVAIN ((public) Invited expert), Ralf Klamma ((public) Invited expert), WonSuk Lee (Electronics and Telecommunications Research Institute (ETRI)), Véronique Malaisé (Vrije Universiteit), Erik Mannens (IBBT), Hui Miao (Samsung Electronics Co., Ltd.), Thierry Michel (W3C/ERCIM), Frank Nack (University of Amsterdam), Soohong Daniel Park (Samsung Electronics Co., Ltd.), Silvia Pfeiffer (W3C Invited Experts), Chris Poppe (IBBT), Víctor Rodríguez (Universitat Politècnica de Catalunya), Felix Sasaki (W3C Invited Experts), David Singer (Apple, Inc.), Joakim Söderberg (ERICSSON), Thai Wey Then (Apple, Inc.), Ruben Tous (Universitat Politècnica de Catalunya), Raphaël Troncy (CWI), Vassilis Tzouvaras (K-Space), Davy Van Deursen (IBBT).

The people who have contributed to discussions on public-media-annotation@w3.org are also gratefully acknowledged.