An impression of the OPDS/OpenPub catalog data model

27 May 2010 at 00:05 · 7 comments

A few days ago Ed Summers pointed me to the specification of the Open Publication Distribution System (OPDS), which has just been released as version 0.9. OpenPub (an alias for OPDS) is part of the Internet Archive’s BookServer project to build an architecture for vending and lending digital books over the Internet. I wonder why I have not heard more about BookServer and OpenPub at recent library conferences, on discussion lists, and in journals – but maybe current libraries prefer to stay in the physical world and become museums and archives. Anyway, I had a look at OpenPub, so here are my public notes on first impressions – and my answer to the call for comments. Please comment if you have corrections or additions (or create an issue in the tracker)!

OPDS is a syndication format for electronic publications based on Atom (RFC 4287). It is therefore fully based on HTTP and the Web (this place that current libraries are still about to discover). Conceptually OPDS is somewhat related to OAI(-ORE) and DAIA, but it is purely based on XML, which makes it difficult to compare with RDF-based approaches. I tried to reengineer the conceptual data model to better separate model and serialization, like I did with DAIA. The goal of OPDS catalogs is „to make Publications both discoverable and straightforward to acquire on a range of devices and platforms“.

OPDS uses a mix of DCMI Metadata Terms (DC) elements and Atom elements, enriched with some new OPDS elements. Furthermore it interprets some DC and Atom elements in a special way (this is common in many data formats, although frequently forgotten).

Core concepts

The core concepts of OPDS are Catalogs, which are provided as Atom Feeds (like in Jangle, which should fit nicely for library resources), Catalog Entries, each referring to one publication, and Acquisition Links. There are two disjoint types of Catalogs: Navigation Feeds provide a browsable hierarchy and Acquisition Feeds contain a list of Publication Entries. I will skip the details of Navigation Feeds and search facilities (possible via OpenSearch) and focus on Catalog Entries and Acquisition.

Catalog Entries

The specification distinguishes between Partial and Complete Catalog Entries, but this is not relevant on the conceptual level. There we have two concepts that are not clearly separated in the XML serialization: the Catalog Record and the Publication that the Catalog Record describes are mixed in one Catalog Entry (see the sketch after the property lists). The properties of a Catalog Record are:

atom:id
identifier of the catalog entry (MANDATORY)
atom:updated
modification timestamp of the catalog entry (MANDATORY)
atom:published
timestamp of when the catalog entry was first accessible

The properties of a Publication are:

dc:identifier
identifier of the publication
atom:title
title of the publication (MANDATORY)
atom:author
creator of the publication (possibly with sub-properties)
atom:contributor
additional contributors to the publication (ditto)
atom:category
publication’s category, keywords, classification codes etc. (with sub-properties scheme, term, and label)
dc:issued
first publication date of the publication
atom:rights
rights held in and over the publications
atom:summary and atom:content
description of the publication (as plain text or some other format for atom:content)
dc:language
language(s) of the publication (any format?)
dc:extent
size or duration of the publication (?)
dc:publisher
publisher of the publication
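To illustrate how the two concepts are intertwined in the serialization, here is a minimal sketch of a Catalog Entry (all identifiers and values are invented, and the dc prefix is assumed to be bound to DCMI Metadata Terms):

```xml
<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:dc="http://purl.org/dc/terms/">
  <!-- Catalog Record properties -->
  <id>urn:uuid:6409aa2e-0c8a-4bd5-b4b2-0b0e0f0a0001</id>
  <updated>2010-05-20T12:00:00Z</updated>
  <!-- Publication properties, mixed into the same element -->
  <title>An Example Publication</title>
  <author><name>Jane Doe</name></author>
  <dc:identifier>urn:isbn:9780000000000</dc:identifier>
  <dc:issued>2010</dc:issued>
  <dc:language>en</dc:language>
  <summary>A short description of the publication.</summary>
</entry>
```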

Moreover, each publication may link to related resources. Unfortunately you cannot just use arbitrary RDF properties, but only the following relations (from this draft):

alternate
alternative description of the publication
copyright
copyright statement that applies to the catalog entry
latest-version
more recent version of the publication
license
license associated with the catalog entry
replies
comment on or discussion of the catalog entry

I consider these relation types one of the weakest points of OPDS. The domain and range of the links are not clear, and there are much better vocabularies for links between publications, for instance in FRBR, the Bibliographic Ontology, the Citation Typing Ontology, Memento, and SIOC (which also overlaps with OPDS in other places).

In addition, each publication must contain at least one atom:link element, which is used to encode an Acquisition Link.

Acquisition Links

OPDS defines two Acquisition types: „Direct Acquisition“ and „Indirect Acquisition“. Direct Acquisition links must lead directly to the publication (in some format) without any login, meta, or catalog page in front of it (!), while Indirect Acquisition links lead to such portal pages, which then link to the publications. There are five Acquisition types (called „Acquisition Relations“) similar to DAIA Service types:

opds:acquisition
a complete representation of the publication that may be retrieved without payment
opds:acquisition/borrow
a complete representation of the publication that may be retrieved as part of a lending transaction
opds:acquisition/buy
a complete representation of the publication that may be retrieved as part of a purchase
opds:acquisition/sample
a representation of a subset of the publication
opds:acquisition/subscribe
a complete representation of the publication that may be retrieved as part of a subscription

opds:acquisition can be mapped to daia:Service/Openaccess and opds:acquisition/borrow can be mapped to daia:Service/Loan (and vice versa). opds:acquisition/buy is not defined in DAIA but could easily be added, while daia:Service/Presentation and daia:Service/Interloan are not defined in OPDS. At least the first should be added to OPDS to indicate publications that require you to become a member and log in, or to physically walk into an institution, to get a publication (strictly limiting OPDS to purely digital publications accessible via HTTP is stupid if you allow indirect acquisition).

The remaining two acquisition types somehow do not fit in with the others: opds:acquisition/sample and opds:acquisition/subscribe should be orthogonal to the other relations. For instance, you could subscribe to a paid or to a free publication, and you could buy a subset of a publication.

In addition, Acquisition Links may or must contain some other properties, such as opds:price (consisting of a currency code from ISO 4217 and a value).
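A hedged sketch of such an Acquisition Link for a purchase (the full relation URI and the opds namespace URI follow the conventions of the spec drafts, but both should be checked against the version you implement):

```xml
<link rel="http://opds-spec.org/acquisition/buy"
      type="application/epub+zip"
      href="http://example.org/book.epub">
  <!-- price: a value plus a currency code from ISO 4217 -->
  <opds:price xmlns:opds="http://opds-spec.org/2010/catalog"
              currencycode="EUR">9.99</opds:price>
</link>
```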

Cover and artwork links

Besides Acquisition Links, the relations opds:cover and opds:thumbnail can be used to relate a Publication to its cover or some other visual representation. The thumbnail should not exceed 120 pixels in height or width, and images must be either GIF, JPEG, or PNG. Thumbnails may also be embedded directly via the „data“ URL scheme from RFC 2397.
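Both variants in a sketch (the relation URIs and the truncated base64 payload are placeholders):

```xml
<!-- thumbnail as an ordinary link -->
<link rel="http://opds-spec.org/thumbnail" type="image/png"
      href="http://example.org/covers/thumb.png"/>

<!-- thumbnail embedded directly as a data URL (RFC 2397) -->
<link rel="http://opds-spec.org/thumbnail" type="image/png"
      href="data:image/png;base64,iVBORw0KGgo..."/>
```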

Final thoughts

OPDS looks very promising and is already being used to good effect in practice. There are some minor issues that can easily be fixed. The arbitrary selection of relation types is surely a flaw that can be repaired by allowing arbitrary RDF properties (come on, XML fanboys, you should notice that RDF is good at least at link types!), and the list of acquisition types should be cleaned up and extended, at least to support „presentation“ without lending like DAIA does. A typical use case for this are national licenses that require you to register in order to access the publications. For more details I would like to compare OPDS in more depth with models like DAIA, FRBR, SIOC, OAI-ORE, Europeana etc. – but not now.

Working group on digital library APIs and possible outcomes

13 April 2008 at 14:48 · 3 comments

Last year the Digital Library Federation (DLF) formed the „ILS Discovery Interface Task Force“, a working group on APIs for digital libraries. See their agenda and the current draft recommendation (February 15th) for details [via Panlibus]. I’d like to comment briefly on the essential functions they agreed on at a meeting with major library system (ILS) vendors. Peter Murray summarized the functions as „automated interfaces for offloading records from the ILS, a mechanism for determining the availability of an item, and a scheme for creating persistent links to records.“

On the one hand I welcome it when vendors try to agree on (open) standards and service-oriented architecture. On the other hand, the working group is yet another top-down effort to discuss things that simply have to be implemented based on existing Internet standards.

1. Harvesting: In the library world this is mainly done via OAI-PMH. I’d also consider RSS and Atom. To fetch single records, there is unAPI – which the DLF group does not mention. There is no need for any other harvesting API – missing features (if any) should be integrated into extensions and/or the next versions of OAI-PMH and ATOM instead of inventing something new. P.S.: Google Wave shows what to expect in the coming years.
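As a reminder of how simple this already is: an OAI-PMH harvesting request is a plain HTTP GET (repository URL invented), and the response comes in chunks connected by resumption tokens:

```xml
<!-- GET http://example.org/oai?verb=ListRecords&metadataPrefix=oai_dc&from=2008-04-01 -->
<OAI-PMH xmlns="http://www.openarchives.org/OAI/2.0/">
  <responseDate>2008-04-13T14:48:00Z</responseDate>
  <request verb="ListRecords" metadataPrefix="oai_dc">http://example.org/oai</request>
  <ListRecords>
    <record>
      <header>
        <identifier>oai:example.org:record-1</identifier>
        <datestamp>2008-04-02</datestamp>
      </header>
      <metadata><!-- record in the requested format (here: oai_dc) --></metadata>
    </record>
    <!-- hand this token back to get the next chunk -->
    <resumptionToken>chunk-2</resumptionToken>
  </ListRecords>
</OAI-PMH>
```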

2. Search: There is still good old overblown Z39.50. The near future is (slightly overblown) SRU/SRW and (simple) OpenSearch. There is no need for discussion but for open implementations of SRU (I am still waiting for a full client implementation in Perl). I suppose that next-generation search interfaces will be based on SPARQL or other RDF stuff.
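Likewise, an SRU searchRetrieve request is just a URL with a CQL query (endpoint invented for illustration):

```xml
<!-- GET http://example.org/sru?version=1.1&operation=searchRetrieve
         &query=dc.title%3D%22metadata%22&maximumRecords=10 -->
<searchRetrieveResponse xmlns="http://www.loc.gov/zing/srw/">
  <version>1.1</version>
  <numberOfRecords>42</numberOfRecords>
  <records>
    <record>
      <recordSchema>info:srw/schema/1/dc-v1.1</recordSchema>
      <recordData><!-- one result record --></recordData>
    </record>
  </records>
</searchRetrieveResponse>
```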

3. Availability: The announcement says: „This functionality will be implemented through a simple REST interface to be specified by the ILS-DI task group“. Yes, there is definitely a need (in December I wrote about such an API in German). However, the main point is not the API but to define what „availability“ means. Please focus on this. P.S.: DAIA is now available.
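To give an idea of what „availability“ could mean in practice, here is a rough sketch along the lines of DAIA (identifiers invented; not authoritative DAIA syntax): per document and item, the response states which services are available right now:

```xml
<daia xmlns="http://ws.gbv.de/daia/">
  <document id="urn:isbn:9780000000000" href="http://example.org/opac/record-1">
    <item>
      <!-- the copy can be borrowed right now -->
      <available service="loan"/>
      <!-- but not requested via interlibrary loan -->
      <unavailable service="interloan"/>
    </item>
  </document>
</daia>
```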

4. Linking: For „Linking in a stable manner to any item in an OPAC in a way that allows services to be invoked on it“ (announcement) there is no need to create new APIs. Add and propagate clean URIs for your items and point to your APIs via autodiscovery (HTML link element), as sketched below. That’s all. Really. To query and distribute general links for a given identifier, I created the SeeAlso API, which is used more and more in our libraries.
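Autodiscovery simply means putting link elements into the head of each record page, for example (URLs are illustrative; the rel conventions for unAPI and OpenSearch are the ones defined by those specs):

```xml
<!-- in the <head> of an OPAC record page -->
<link rel="unapi-server" type="application/xml"
      title="unAPI" href="http://example.org/unapi"/>
<link rel="search" type="application/opensearchdescription+xml"
      title="Catalog search" href="http://example.org/opensearch.xml"/>
<link rel="alternate" type="application/rdf+xml"
      href="http://example.org/record/123.rdf"/>
```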

Furthermore, the draft contains a section on „Patron functionality“, which is going to be based on NCIP and SIP2. Both are dead ends from my point of view. You should rather look at projects outside the library world and try to define schemas/ontologies for patrons and patron data (hint: patrons are also called „customers“ and „users“). Again: it is not the API that is underdefined – it is the data that we need to agree on.

First draft of OAI-ORE

30 December 2007 at 18:06 · no comments

„Web 3.0“ (or „Semantic Web“ – use the buzzword of your choice) is slowly on the rise. Two weeks ago the first public draft of OAI-ORE was published, and Mike Giarlo published an OAI-ORE plugin for WordPress – I have not actually tried it, but as far as I understand one could add RFC 5005 to OAI-ORE to support large resource sets. Or is OAI-PMH enough? Well, in the end it depends on the availability of software libraries and clients and on the ease of connecting it with other services. In my opinion there are still too many generalized data models; what we need are concrete implementations – it was not RDF and OWL but Microformats that got the Web of data started (yes, we’re in it: the next hype after „Web 2.0“). For 2008 I wish for less abstract meta-meta-meta stuff and more small, usable applications and services that can be combined.

Relevant APIs for (digital) libraries

30 November 2007 at 14:50 · 5 comments

My current impression is that the OCLC/WorldCat Service Grid is still far too abstract – instead of creating a framework, we (libraries and library associations) should agree upon some open protocols and (metadata) formats. To start with, here is a list of relevant, existing open standard APIs from my point of view:

Search: SRU/SRW (including CQL), OpenSearch, Z39.50

Harvest/Syndicate: OAI-PMH, RSS, Atom Syndication (also with ATOM Extensions)

Copy/Provide: unAPI, COinS, Microformats (not a real API but a way to provide data)

Upload/Edit: SRU Update, Atom Publishing Protocol

Identity Management: Shibboleth (and other SAML-based protocols), OpenID (see also OSIS)

For more complex applications, additional (REST) APIs and common metadata standards need to be found (or defined) – but only if the application is not just another kind of search, harvest/syndicate, copy/provide, upload/edit, or identity management.

P.S.: I forgot NCIP, a „standard for the exchange of circulation data“. Frankly, I don’t fully understand the meaning and importance of „circulation data“, and the standard looks more complex than needed. More on APIs for libraries can be found in the WorldCat Developer Network, in the Jangle project, and in a DLF working group on digital library APIs. For staying within the limited world of libraries this may suffice, but on the web simplicity and availability of implementations matter – that’s why I am working on the SeeAlso linkserver protocol and now on a simple API to query availability information (more in August/September 2008).

P.P.S.: A more detailed list of concrete library-related APIs was published by Roy Tennant based on a list by Owen Stephens.

P.P.P.S.: And another list by Stephen Abram (SirsiDynix) from September 1st, 2009.

Archiving Weblogs with ATOM and RFC 5005: An alternative to OAI-PMH

19 October 2007 at 11:34 · 1 comment

Following up on my recent post (in German), I had a conversation with my colleague about harvesting and archiving blogs and about ATOM vs. OAI-PMH. In my opinion, with the recent RFC 5005 on Feed Paging and Archiving and its proposed extension for Archived Feeds, ATOM can be an alternative to OAI-PMH. Instead of arguing about which is better, digital libraries should support both for harvesting and providing archived publications such as preprints and weblog entries (scientific communication and publication already take place in both).

Instead of having every project implement both protocols, you could create a wrapper from ATOM with archived feeds to OAI-PMH and vice versa. The mapping from OAI-PMH to ATOM is probably the easier part: you partition the repository into chunks as defined in RFC 5005 using the from and until arguments of OAI-PMH. The mapping from ATOM to OAI-PMH is more complicated because you cannot select by timestamps. If you only specify a from argument, the corresponding ATOM feed could be harvested going backwards in time, but if there is an until argument you must harvest the whole archive just to get the first entries and throw away the rest. Luckily the most frequent use case is to get the newest entries only. Anyway: both protocols have their pros and cons, and a two-way wrapper could help both. Of course it should be implemented as open source so anyone can use it (by the way: there seems to be no OAI crawler in Perl yet – sure, there is OAI-Harvester, but for real-world applications you have to deal with unavailable servers, corrupt feeds, and duplicated or deleted entries, and you need a way to save the harvested records, so a whole layer above the harvester is missing).
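To make the mechanics concrete, here is a sketch of an RFC 5005 archived feed chain (URLs invented): the current feed points backwards through archive documents, and a wrapper would map these chunks to OAI-PMH from/until ranges:

```xml
<!-- current feed: http://example.org/feed.atom -->
<feed xmlns="http://www.w3.org/2005/Atom">
  <id>http://example.org/feed.atom</id>
  <title>Example Blog</title>
  <updated>2007-10-19T11:34:00Z</updated>
  <!-- newest archive chunk, going backwards in time -->
  <link rel="prev-archive" href="http://example.org/feed/2007-09.atom"/>
  <!-- ... the newest entries ... -->
</feed>

<!-- archive chunk: http://example.org/feed/2007-09.atom -->
<feed xmlns="http://www.w3.org/2005/Atom"
      xmlns:fh="http://purl.org/syndication/history/1.0">
  <id>http://example.org/feed/2007-09.atom</id>
  <title>Example Blog (September 2007)</title>
  <updated>2007-09-30T23:59:59Z</updated>
  <fh:archive/>
  <link rel="current" href="http://example.org/feed.atom"/>
  <link rel="prev-archive" href="http://example.org/feed/2007-08.atom"/>
  <!-- ... entries from September 2007 ... -->
</feed>
```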

P.S.: At code4lib Ed Summers pointed me to Stuart Weibel, who asked the same question about blog archiving, and to a discussion in John Udell’s blog that includes blog archiving (he also mentions BlogML as a possible part of a solution – unluckily BlogML looks very dirty to me; the spec is here). And Daniel Chudnov drafted a blog mirroring architecture.

Collecting, Indexing, Making Available, and Archiving Weblogs

19 October 2007 at 03:03 · 2 comments

For quite a while it has annoyed me that practically no libraries collect and archive weblogs, although this type of media is already partly taking over the function of professional journals. Meanwhile I do notice a growing interest in blogs among colleagues (the next workshop was fully booked within a short time), but it has not really sunk in with the majority that an evolution comparable to the introduction of the printing press or the invention of journals is under way. Otherwise many more libraries would start collecting, indexing, making available, and archiving weblogs.

Instead of first discussing which special MAB fields the data should go into and what type of media weblogs count as, someone would just have to beef up one of the existing open source feed readers so that it runs at large scale on one or more servers and at least collects those feeds that some librarian has classified as worth collecting. Anything that is well-formed XML and equipped with a minimum set of mandatory elements (author [string], title [string], date [ISO 8601], content [string]; see the sketch below) should at least be archivable in such a way that the essential part can be reconstructed – special features such as HTML content, categories, and comments can be added later, once the infrastructure (a harvester for collecting, storage for archiving, an index for access, and a reading facility for making content available) is in place.
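In Atom terms, such a minimal archivable entry needs hardly more than this (values invented):

```xml
<entry xmlns="http://www.w3.org/2005/Atom">
  <id>http://example.org/blog/post-1</id>
  <author><name>Jane Doe</name></author>          <!-- author [string] -->
  <title>An Example Blog Post</title>             <!-- title [string] -->
  <updated>2007-10-19T03:03:00Z</updated>         <!-- date [ISO 8601] -->
  <content type="text">Plain text body.</content> <!-- content [string] -->
</entry>
```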

For the millions of blog articles that have so far been lost (apart from blog search engines like Bloglines, Technorati, Google Blogsearch, Blogdigger etc., which are not available for archiving), there is at least partial hope:

In September, RFC 5005: Feed Paging and Archiving defined an extension of the ATOM format (also possible in RSS) by which the feed of the most recent entries points to the preceding entries and/or an archive. In principle this has been possible for some time and is described here with an example, but now it has been specified somewhat more precisely. This makes ATOM a real alternative to OAI-PMH, which is somewhat closer to the library world but unfortunately still treated rather neglectfully.

Be that as it may: so far blogs are not being collected systematically and permanently for posterity, and if libraries have a future at all, they are the only institutions that really come into question for this task. To that end, however, the „acquisition“ of a blog for the library collection should become as familiar over the next few years as the purchase of a book or a journal. For all I care, DFG grant proposals for the „collection and archiving of the cultural heritage available in the form of weblogs“ may be filed for this purpose, although I am rather skeptical about this kind of project culture: the continuous development of applications as open source achieves more, and the wheel gets reinvented less often.

P.S.: The DNB’s information page on the collection of online publications does not say anything about weblogs yet – so it is up to each individual library to start thinking about collecting the weblogs relevant to it.

Syndication and Harvesting with RSS, ATOM, OAI-PMH and Sitemaps

28 September 2007 at 12:32 · no comments

On my quest for metadata formats and APIs I found that ATOM is not just another RSS but more like a simple database language. Google’s Data API GData strongly pushes ATOM forward (but may also introduce some problems). Jim Downing wrote about ATOM, OAI-PMH, and Sitemaps – three different ways to provide a list of all the resources in a collection, and to incrementally discover changes. OAI-PMH is much less prominent, but why?
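For comparison, a Sitemap achieves incremental discovery with nothing more than URLs and modification dates (values invented):

```xml
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url>
    <loc>http://example.org/resource/1</loc>
    <!-- lastmod lets crawlers fetch only what changed -->
    <lastmod>2007-09-27</lastmod>
  </url>
  <url>
    <loc>http://example.org/resource/2</loc>
    <lastmod>2007-08-14</lastmod>
  </url>
</urlset>
```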

Andy Powell started a very enlightening discussion with his talk at the JISC Digital Repositories Conference 2007. He complains that repositories are partly missing the web – popular we-could-also-call-them-repositories like Flickr, Slideshare, YouTube, Scribd etc. don’t use OAI-PMH, nor does Google support it. Following the discussion, I ask myself what the differences are between scholarly communication and people uploading and mixing any popular content. And do the differences justify different methods of syndication and harvesting? Have a look at the comments by Herbert van de Sompel and Erik Hetzner!