Working group on digital library APIs and possible outcomes

13 April 2008 at 14:48 · 3 comments

Last year the Digital Library Federation (DLF) formed the „ILS Discovery Interface Task Force“, a working group on APIs for digital libraries. See their agenda and the current draft recommendation (February 15th) for details [via Panlibus]. I'd like to comment briefly on the essential functions they agreed on at a meeting with major library system (ILS) vendors. Peter Murray summarized the functions as „automated interfaces for offloading records from the ILS, a mechanism for determining the availability of an item, and a scheme for creating persistent links to records.“

On the one hand, I welcome vendors trying to agree on (open) standards and service-oriented architecture. On the other hand, the working group is yet another top-down effort to discuss things that simply need to be implemented on top of existing Internet standards.

1. Harvesting: In the library world this is mainly done via OAI-PMH; I'd also consider RSS and Atom. To fetch single records there is unAPI – which the DLF group does not mention. There is no need for any other harvesting API – missing features (if any) should be added in extensions and/or future versions of OAI-PMH and Atom instead of inventing something new. P.S.: Google Wave shows what to expect in the coming years.
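
To illustrate how little is needed on the client side, here is a minimal OAI-PMH harvester sketched in Python. The endpoint URL is a placeholder, and the example only pulls out Dublin Core titles; any OAI-PMH 2.0 repository should respond to these standard requests.

```python
# Minimal OAI-PMH ListRecords harvest (sketch, placeholder endpoint).
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

OAI = "{http://www.openarchives.org/OAI/2.0/}"
DC = "{http://purl.org/dc/elements/1.1/}"

def harvest(base_url, metadata_prefix="oai_dc"):
    """Yield (identifier, title) pairs, following resumption tokens."""
    params = {"verb": "ListRecords", "metadataPrefix": metadata_prefix}
    while True:
        xml = urlopen(base_url + "?" + urlencode(params)).read()
        tree = ET.fromstring(xml)
        for record in tree.iter(OAI + "record"):
            header = record.find(OAI + "header")
            identifier = header.findtext(OAI + "identifier")
            title = record.findtext(".//" + DC + "title")
            yield identifier, title
        token = tree.findtext(".//" + OAI + "resumptionToken")
        if not token:            # no (or empty) token means we are done
            break
        params = {"verb": "ListRecords", "resumptionToken": token}

# Example usage (placeholder endpoint):
# for oai_id, title in harvest("http://example.org/oai"):
#     print(oai_id, title)
```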

2. Search: There is still good old overblown Z39.50. The near future is (slightly overblown) SRU/SRW and (simple) OpenSearch. There is no need for discussion but for open implementations of SRU (I am still waiting for a full client implementation in Perl). I suppose that next-generation search interfaces will be based on SPARQL or other RDF technology.
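
For comparison, a plain SRU searchRetrieve request is nothing more than an HTTP GET with a handful of parameters. The following Python sketch uses a placeholder endpoint; the parameter names are the standard SRU 1.1 ones, but the record schema ("dc") is an assumption about what the server offers.

```python
# Minimal SRU searchRetrieve client (sketch), standard library only.
from urllib.parse import urlencode
from urllib.request import urlopen
import xml.etree.ElementTree as ET

def sru_search(base_url, cql_query, maximum_records=10):
    params = {
        "operation": "searchRetrieve",
        "version": "1.1",
        "query": cql_query,            # CQL query string
        "maximumRecords": maximum_records,
        "recordSchema": "dc",          # assumes the server offers Dublin Core
    }
    xml = urlopen(base_url + "?" + urlencode(params)).read()
    tree = ET.fromstring(xml)
    ns = "{http://www.loc.gov/zing/srw/}"
    hits = tree.findtext(".//" + ns + "numberOfRecords")
    records = tree.findall(".//" + ns + "recordData")
    return hits, records

# Example usage (placeholder endpoint and CQL query):
# hits, records = sru_search("http://example.org/sru", 'dc.title = "semantic web"')
```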

3. Availability: The announcement says: „This functionality will be implemented through a simple REST interface to be specified by the ILS-DI task group“. Yes, there is definitely a need (in December I wrote about such an API in German). However, the main point is not the API but defining what „availability“ actually means – please focus on this. P.S.: DAIA is now available.
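
To show why the vocabulary is the hard part: an availability response has to say for which service an item is available (on-site use, loan, interlibrary loan, …) and when. The following Python sketch is purely illustrative – the field names are my assumptions, not the actual DAIA format or the task group's specification.

```python
# Illustrative shape of an availability response; the transport (REST/JSON)
# is trivial compared to agreeing on these distinctions.
availability = {
    "document": "http://example.org/bib/12345",      # placeholder record URI
    "items": [
        {
            "id": "http://example.org/item/12345-1",
            "storage": "Main reading room",
            "available": [
                {"service": "presentation"},          # usable on the premises
                {"service": "loan", "delay": "P2D"},  # lendable after ~2 days (ISO 8601 duration)
            ],
            "unavailable": [
                {"service": "interloan", "expected": "2008-05-01"},
            ],
        }
    ],
}
```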

4. Linking: For „Linking in a stable manner to any item in an OPAC in a way that allows services to be invoked on it“ (announcement) there is no need to create new APIs. Add and propagate clean URIs for your items and point to your APIs via autodiscovery (the HTML link element). That's all. Really. To query and distribute general links for a given identifier, I created the SeeAlso API, which is used more and more in our libraries.
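
A consumer could then discover the APIs directly from a record page. The following Python sketch fetches a (placeholder) item URI and collects the link elements from its HTML head; which rel values to use would of course still have to be agreed on.

```python
# Sketch of API autodiscovery from a record page via HTML link elements.
from html.parser import HTMLParser
from urllib.request import urlopen

class LinkCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []  # collected (rel, type, href) triples

    def handle_starttag(self, tag, attrs):
        if tag == "link":
            a = dict(attrs)
            self.links.append((a.get("rel"), a.get("type"), a.get("href")))

def discover(record_uri):
    """Return the link elements advertised by a record page."""
    html = urlopen(record_uri).read().decode("utf-8", "replace")
    collector = LinkCollector()
    collector.feed(html)
    return collector.links

# Example usage (placeholder URI):
# for rel, mime, href in discover("http://example.org/record/12345"):
#     print(rel, mime, href)
```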

Furthermore, the draft contains a section on „patron functionality“, which is going to be based on NCIP and SIP2. Both are dead ends in my view. You had better look at projects outside the library world and try to define schemas/ontologies for patrons and patron data (hint: patrons are also called „customers“ and „users“). Again: it is not the API that is underdefined – it is the data we need to agree on.
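
What such an agreement could start from is a small shared schema for patron data rather than yet another protocol binding. The following dataclass sketch is purely illustrative; every field name in it is an assumption.

```python
# Illustrative patron data schema (all field names are assumptions).
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Loan:
    item_uri: str        # clean URI of the borrowed item
    due_date: str        # ISO 8601 date
    renewable: bool = True

@dataclass
class Patron:
    uri: str                                   # a patron is a resource with a URI too
    name: str
    email: Optional[str] = None
    loans: List[Loan] = field(default_factory=list)
    reservations: List[str] = field(default_factory=list)  # item URIs
    fees: float = 0.0                          # outstanding charges, local currency
```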

3 Comments


  1. […] More on APIs for libraries can be found in the WorldCat Developer Network, in the Jangle project and a DLF Working group on digital library APIs. For staying in the limited world of libraries, this may suffice, but on the web simplicity and […]

    Pingback by Relevant APIs for (digital) libraries « Jakoblog — Das Weblog von Jakob Voß — 10 July 2008

  2. […] pushed forward by (Talis) and apparently stands both in competition and in cooperation with the DLF working group on digital library APIs. Whether Jangle comes to anything and whether it catches on is still open – at least more technical […]

    Pingback by Jangle-API für Bibliothekssysteme « Jakoblog — Das Weblog von Jakob Voß — 10 July 2008

  3. […] Above all, it is necessary to find a common infrastructure and standards for the various add-on services: instead of several large projects that all want to develop the ultimate catalogue, […]

    Pingback by …und noch ein 2.0-Katalog « Jakoblog — Das Weblog von Jakob Voß — 16 September 2008
