Data models age like parents

15 March 2018 at 21:51 · No comments

Denny Vrandečić, employed as an ontologist at Google, noticed that all six linked data applications linked to eight years ago (IWB, Tabulator, Disko, Marbles, rdfbrowser2, and Zitgist) have disappeared or changed their calling syntax. This reminded me of a proverb about software and data:

software ages like fish, data ages like wine.

The original form of this saying seems to come from James Governor (@monkchips), who in 2007 derived it from an earlier phrase:

Hardware is like fish, operating systems are like wine.

The analogy of fishy applications and delightful data has been repeated, explained, and criticized several times. I fully agree with the part about software rot, but I doubt that data actually ages like wine (I’d prefer whisky anyway). A more accurate simile may be „data ages like things you put into your crowded cellar and then forget about“.

Thinking a lot about data, I have found that data is less interesting than the structures and rules that shape and restrict it: data models, ontologies, schemas, forms, etc. How do they age compared with software and data? I soon realized:

data models age like parents.

First they guide you, give good advice, and support you as best as they can. But at some point data begins to rebel against its models. Sooner or later parents become uncool, disconnected from current trends, outdated, or even embarrassing. Eventually you have to accept their quaint peculiarities and live your own life. That’s how standards proliferate. Both ontologies and parents ultimately become weaker and need support. And in the end you have to let them go, sadly looking back.

(The analogy could be extended further – for instance, data models might be frustrated when confronted with how actual data compares to their ideals – but that’s another story.)

Modeling is difficult

21 September 2011 at 00:33 · 3 comments

Yesterday Pete Johnston wrote a detailed blog article about the difficulties of „the right“ modeling with SKOS and FOAF in general, and about the proposed RDF property foaf:focus in particular. As Dan Brickley wrote in a recent mail: „foaf:focus describes a link from a skos:Concept to ‚the thing itself‘. Not every SKOS concept (in a thesaurus or classification scheme) will have such a direct ‚thing‘, but many do, especially concepts for people and places.“

Several statements in this discussion made me laugh and smile. Don’t get me wrong – I honor Pete, Dan, and the whole Semantic Web community, but there is a regular lack of philosophy and information science. There is no such thing as ‚the thing itself‘ and all SKOS concepts are equal. Even the distinction between an RDF ‚resource‘ and a SKOS ‚concept‘ is artificial. The problem originates not from wrong modeling, which could be solved by the right RDF properties, but from different paradigms and cultures. There will always be different ways to describe the same ideas with RDF, because neither RDF nor any other technology will ever fully capture our ideas. These technologies are not about things but only about data. As William Kent wrote in Data and Reality (1978): „The map is not the territory“ (by the way, last year Chris Rusbridge quoted Kent in the context of linked data). As Erik Wilde and Robert J. Glushko wrote in a great article (2008):

RDF has succeeded beyond the wildest expectations as a convenient format for encoding information in an open and easily computable fashion. But it is just a format, and the difficult work of analysis and modeling information has not and will never go away.

Ok, they actually referred not to „RDF“ but to „XML“, so the quotation above is altered. But the statement holds for both data structuring methods. No matter whether you put your data in XML, in RDF, or carve it in stone – there will never be a final model, because there is more than one way to describe something.

Mapping bibliographic record subfields to JSON

13 April 2011 at 16:26 · 4 comments

The current issue of the Code4Lib Journal contains an article by Luciano Ramalho about mapping a bibliographic record format to JSON. Luciano describes two approaches to express the CDS/ISIS format in a JSON structure to be used in CouchDB. The article has already provoked some comments – that’s how an online journal should work!

The commentators mentioned Ross Singer’s proposal to serialize MARC in JSON and Bill Dueber’s MARC-HASH. There is also a MARC-JSON draft from Andrew Houghton, OCLC. The ISIS format reminded me of the PICA format, which is also based on fields and subfields. As noted by Luciano, you must preserve subfield ordering and allow for repeated subfields. The existing proposals use the following methods for subfields:

Luciano’s ISIS/JSON:

[ ["x","foo"],["a","bar"],["x","doz"] ]

Ross’s MARC/JSON:

"subfields": [ {"x":"foo"},{"a":"bar"},{"x":"doz"} ]

Bill’s MARC-HASH:

[ ["x","foo"],["a","bar"],["x","doz"] ]

Andrew’s MARC/JSON:

"subfield": [
  {"code":"x","data":"foo"},{"code":"a","data":"bar"},
  {"code":"x","data":"doz"} ]

In the end the specific encoding does not matter that much. Selecting the best form depends on what kinds of actions and access are typical for your use case. However, I could not resist throwing the encoding used in luapica into the ring:

{ "foo", "bar", "doz", 
  ["codes"] = { 
    ["x"] = {1,3}
    ["a"] = {2}
}}

I am thinking about further simplifying this to:

{ "foo", "bar", "doz", ["x"] = {1,3}, ["a"] = {2} }

If f is a field then you can access subfield values by position (f[1], f[2], f[3]) or by subfield code (f[f.x[1]], f[f.a[1]], f[f.x[2]]). By overloading the table access method, and with additional functions, you can directly write f.x for f[f.x[1]] to get the first subfield value with code x, and f:all("x") to get a list of all subfield values with that code. The same structure in JSON would be one of:

{ "values":["foo", "bar", "doz"], "x":[1,3], "a":[2] }
{ "values":["foo", "bar", "doz"], "codes":{"x":[1,3], "a":[2]} }

I think a good, compact mapping to JSON that includes an index could be:

[ ["x", "a", "x"], {"x":[1,3], "a":[2] },
  ["foo", "bar", "doz"], {"foo":[1], "bar":[2], "doz":[3] } ]

And, of course, the most compact form is:

["x","foo","a","bar","x","doz"]

Data is not meaning – but a web badge

6 January 2011 at 00:57 · 3 comments

I am sure that Douglas Adams and John Lloyd had a word for it: you know exactly what you mean, but not what to call it. Recently I tried to find information about a particular kind of „web banner“, „button“, or „badge“ with a specific size, border, and two parts. I finally found out that it is an 80×15 web badge as introduced by Antipixel in 2002. A helpful description of the format is given by ZwahlenDesign, who also points to two online badge creation tools: Brilliant Button Maker and Button Maker. Note that the tools use „Button“ instead of „Badge“ to refer to the same thing.

I created a web badge to promote a simple philosophical web standard: data is not meaning*. Here is the data as a 177-byte hexdump:

89 50 4E 47 0D 0A 1A 0A 00 00 00 0D 49 48 44 52 00 00 00 50 00 00 00 0F 01 03 00 00 00 49 07 DA CC 00 00 00 01 73 52 47 42 00 AE CE 1C E9 00 00 00 06 50 4C 54 45 FF FF FF 00 00 00 55 C2 D3 7E 00 00 00 59 49 44 41 54 08 D7 63 F8 0F 07 0C 0D 0C 50 C0 C8 B0 FF FF FF 0F D8 99 0D 10 E6 8E CF 7D 05 2D 7E 86 42 2E 85 0C BB 73 EF 6E 7E 76 C2 73 52 4A 23 C3 EE C4 3B 06 AD 7E 95 41 21 1B C1 A2 4F 3C 3C 8D 7C 26 32 EC 78 7B 77 43 8B 9F A7 90 4B 22 B2 09 D8 AD 40 72 03 C2 65 00 CA 67 45 A7 86 69 B7 81 00 00 00 00 49 45 4E 44 AE 42 60 82

If data were meaning, that would be all there is to say. But data is just a stream of bits, bytes, numbers, characters, strings, nodes, triples, or sometimes even words. You have to make use of it in a meaningful way. For instance, you could feed the data above to a specific piece of software like your web browser. Here comes the data again:

This PNG image was the smallest I was able to create with an optimized palette, maximal compression, etc. There is another piece of data that is only eight bits longer (178 bytes) and looks the same as a GIF image:

I could also express the monochrome badge with one bit per pixel. That makes 80×15 = 1200 bits = 150 bytes uncompressed. The meaning could be the same, but not if you only look at it in your browser (because this piece of software cannot handle my „ad-hoc monochrome 80×15 web badge format“).
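Producing such a format would be trivial, which is part of the point: the data is simple, the meaning sits in whatever software interprets it. A sketch in Lua of my made-up format:

-- pack a monochrome 80×15 badge, one bit per pixel, row by row, into 150 bytes
local WIDTH, HEIGHT = 80, 15
local function pack_badge(pixels)  -- pixels[y][x] == true means a black pixel
  local bytes, byte, nbits = {}, 0, 0
  for y = 1, HEIGHT do
    for x = 1, WIDTH do
      byte = byte * 2 + (pixels[y][x] and 1 or 0)
      nbits = nbits + 1
      if nbits == 8 then
        bytes[#bytes + 1] = string.char(byte)
        byte, nbits = 0, 0
      end
    end
  end
  return table.concat(bytes)  -- 80 × 15 / 8 = 150 bytes
end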

I also created a version with color. Feel free to use and modify as you like. In this case the PNG with 198 bytes is slightly larger than the GIF with 196 bytes.

PNG:

GIF:

* I was surprised that there were only seven Google hits for this phrase, none of them with the same meaning (sic!) that I try to express with this article. The badge was inspired by this important warning sign.

P.S.: Enough data philosophy, time for music. There is so much more than one and zero and one and zero and one!

Is data a language? In search of the new discipline Data Linguistics

13 September 2010 at 01:24 · 33 comments

Yesterday Jindřich asked me for a reason to treat data as a language. I really appreciate these little conversations in data philosophy, but after a while 140 characters get too limiting. Half a year ago a similar discussion with Adrian became a small series of blog articles (in German). I doubt that you can find a simple and final answer to fundamental questions about data and meaning, because these questions touch the human condition. This is also the reason why we should never stop asking, unless we give up being human beings.

Jindřich’s question first made me wonder, because for me data obviously is a language. All data is represented as a sequence of bits, which can easily be defined as a formal language. But this argument is stupid and wrong. Although language can be described by formal languages (as introduced by Noam Chomsky), this description only covers syntax and grammar. Above all, a description of language must not be confounded with language itself: the map is not the territory.

But data is used to communicate just like natural (written) language. The vast amount and heterogeneity of data sometimes makes us forget that all data is created by humans for humans. Let me start with a simple argument against the view of data as language. A lot of data is created by measuring nature. As nature is not language, measured data is not language. This argument is also wrong. We (humans) design measuring devices and define their language (sic!) in terms of units like length, duration, blood pressure, and so forth. These units do not exist independently of language, but are only communicated via it. And many units describe much more complex and fuzzy concepts like „name“ and „money“, which only exist as social constructs. A piece of data is a statement that can be false, true, nonsense, or all of it, depending on context. Just like language.

Nowadays we create a lot of data for machines. Is this an argument against data as language? I don’t think so. We may say that a piece of data made a machine perform some task, but the machine was designed to act in a specific way. Machines do not „understand“ data, they just react. If I use an axe to cut a tree, I do not send the tree a message of data that it interprets to cut itself. Of course computers are much more complex than trees (and much simpler from another point of view). The chain of reaction is much more subtle. And most times there are more participants. If I create some data for a specific program, I do not communicate with the program itself, but with everyone involved in creating the program and its environment. This may sound strange, but compare the situation with legal systems: a law is a piece of language, used to communicate to other people: „don’t step on the grass“. Unfortunately society makes us think that laws are static and independent from us. In the same way people think that data is shaped by computers instead of people. Next time you get angry about a program, think about the vendor and the programmer. Next time you get angry about a law, think about the lawmakers.

What follows from treating data as language? I think we need a new approach to data, a dedicated study of data. I would call this discipline data linguistics. Linguistics has many sub-fields concerned with particular aspects of natural language. The traditional division into syntax, semantics, and pragmatics only describes one way to look at language. Anthropological linguistics and sociolinguistics study the relation between language and society, and historical linguistics studies the history and evolution of languages, to mention only a few disciplines. Surprisingly, the study of data is much more limited – up to now there is no data linguistics that studies data as language. The study of data is mainly focused on its form, for instance the study of formal languages in computer science, the study of digital media in cultural studies and media studies, or the study of forms and questionnaires in graphic design and public administration (forms could be a good starting point for data linguistics).

There are some other fields that combine data and linguistics, but from different viewpoints: computational linguistics studies natural language by computational means, similar to digital humanities in general. In one branch of data analysis, linguistic summaries of data are created based on fuzzy set theory. They provide natural language statements that capture the main characteristics of data sets. Natural language processing analyses textual data by algorithmic methods. But data linguistics that analyses data in general is still waiting to be discovered. We can only conjecture possible reasons for this lack of research:

  • Data is not seen as language.
  • Digital data is a relatively new phenomenon. The creation of data on a large scale only began in the 20th century, so there has not been enough time to explore the topic historically.
  • In contrast to natural language, data is too heterogeneous to justify a combined look at data in general.
  • Data seems to be well-defined, so no research is needed.

What do you think?

An impression of the OPDS/OpenPub catalog data model

27 May 2010 at 00:05 · 7 comments

A few days ago Ed Summers pointed me to the specification of the Open Publication Distribution System (OPDS), which was just released as version 0.9. OpenPub (an alias for OPDS) is part of the Internet Archive’s BookServer project to build an architecture for vending and lending digital books over the Internet. I wonder why I have not heard more about BookServer and OpenPub at recent library conferences, on discussion lists, and in journals – but maybe current libraries prefer to stay in the physical world and become museums and archives. Anyway, I had a look at OpenPub, so here are my public notes of my first impressions – and my answer to the call for comments. Please comment if you have corrections or additions (or create an issue in the tracker)!

OPDS is a syndication format for electronic publications based on Atom (RFC 4287). Therefore it is fully based on HTTP and the Web (this place that current libraries are still about to discover). Conceptually, OPDS is somehow related to OAI(-ORE) and DAIA, but it is purely based on XML, which makes it difficult to compare with RDF-based approaches. I tried to reengineer the conceptual data model to better separate model and serialization, like I did with DAIA. The goal of OPDS catalogs is „to make Publications both discoverable and straightforward to acquire on a range of devices and platforms“.

OPDS uses a mix of DCMI Metadata Terms (DC) and Atom elements, enriched with some new OPDS elements. Furthermore, it interprets some DC and Atom elements in a special way (this is common in many data formats, although frequently forgotten).

Core concepts

The core concepts of OPDS are Catalogs, which are provided as Atom Feeds (like Jangle, which should fit nicely for library resources), Catalog Entries that each refer to one publication, and Acquisition Links. There are two disjoint types of Catalogs: Navigation Feeds provide a browseable hierarchy and Acquisition Feeds contain a list of Publication Entries. I will skip the details on Navigation Feeds and search facilities (possible via OpenSearch) and focus on Elements and Acquisition.

Catalog Elements

The specification distinguishes between Partial and Complete Catalog Entries, but this is not relevant on the conceptual level. There we have two concepts that are not clearly separated in the XML serialization: the Catalog Record and the Publication which it describes are mixed in one Catalog Element. The properties of a Catalog Record are:

atom:id
identifier of the catalog entry (MANDATORY)
atom:updated
modification timestamp of the catalog entry (MANDATORY)
atom:published
timestamp of when the catalog entry was first accessible

The properties of a Publication are:

dc:identifier
identifier of the publication
atom:title
title of the publication (MANDATORY)
atom:author
creator of the publication (possibly with sub-properties)
atom:contributor
additional contributors to the publication (ditto)
atom:category
publication’s category, keywords, classification codes etc. (with sub-properties scheme, term, and label)
dc:issued
first publication date of the publication
atom:rights
rights held in and over the publications
atom:summary and atom:content
description of the publication (as plain text or some other format for atom:content)
dc:language
language(s) of the publication (any format?)
dc:extent
size or duration of the publication (?)
dc:publisher
Publisher of the publication

Moreover, each publication may link to related resources. Unfortunately you cannot just use arbitrary RDF properties, but only the following relations (from this draft):

alternate
alternative description of the publication
copyright
copyright statement that applies to the catalog entry
latest-version
more recent version of the publication
license
license associated with the catalog entry
replies
comment on or discussion of the catalog entry

I consider these relation types one of the weakest points of OPDS. The domain and range of the links are not clear, and there are much better vocabularies for links between publications, for instance in FRBR, the Bibliographic Ontology, the Citation Typing Ontology, Memento, and SIOC (which also overlaps with OPDS in other places).

In addition, each publication must contain at least one atom:link element, which is used to encode an Acquisition Link.

Acquisition Links

OPDS defines two Acquisition types: „Direct Acquisition“ and „Indirect Acquisition“. Direct Acquisition links must lead directly to the publication (in some format) without any login, meta, or catalog page in front of it (!), while Indirect Acquisition links lead to such portal pages, which then link to the publications. There are five Acquisition types (called „Acquisition Relations“), similar to the DAIA Service types:

opds:acquisition
a complete representation of the publication that may be retrieved without payment
opds:acquisition/borrow
a complete representation of the publication that may be retrieved as part of a lending transaction
opds:acquisition/buy
a complete representation of the publication that may be retrieved as part of a purchase
opds:acquisition/sample
a representation of a subset of the publication
opds:acquisition/subscribe
a complete representation of the publication that may be retrieved as part of a subscription

opds:acquisition can be mapped to daia:Service/Openaccess, and opds:acquisition/borrow can be mapped to daia:Service/Loan (and vice versa). opds:acquisition/buy is not defined in DAIA but could easily be added, while daia:Service/Presentation and daia:Service/Interloan are not defined in OPDS. At least the first should be added to OPDS to indicate publications that require you to become a member and log in, or to physically walk into an institution to get a publication (strictly limiting OPDS to purely digital publications accessible via HTTP is stupid if you allow indirect acquisition).
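For illustration, the partial mapping as a Lua table (a sketch; the gaps are exactly the missing counterparts just mentioned):

local opds2daia = {
  ["opds:acquisition"]        = "daia:Service/Openaccess",
  ["opds:acquisition/borrow"] = "daia:Service/Loan",
  -- no DAIA counterpart for opds:acquisition/buy yet
  -- no OPDS counterpart for daia:Service/Presentation or daia:Service/Interloan
}
local daia2opds = {}
for opds, daia in pairs(opds2daia) do daia2opds[daia] = opds end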

The remaining two acquisition types somehow do not fit in with the others: opds:acquisition/sample and opds:acquisition/subscribe should be orthogonal to the other relations. For instance, you could subscribe to a paid or to a free subscription, and you could buy a subset of a publication.

In addition, Acquisition links may or must contain some other properties, such as opds:price (consisting of a currency code from ISO 4217 and a value).
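Put together, a single Publication Entry in an Acquisition Feed might look like the following sketch (identifiers, values, the namespace URI, and the exact opds:price markup are illustrative, not copied from the specification):

<entry xmlns="http://www.w3.org/2005/Atom"
       xmlns:opds="http://opds-spec.org/">
  <id>urn:example:publication:123</id>
  <title>An Example Publication</title>
  <updated>2010-05-27T00:05:00Z</updated>
  <author><name>Jane Doe</name></author>
  <!-- a Direct Acquisition link with a price -->
  <link rel="http://opds-spec.org/acquisition/buy"
        type="application/epub+zip"
        href="http://example.org/123.epub">
    <opds:price currencycode="EUR">9.99</opds:price>
  </link>
</entry>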

Cover and artwork links

Besides Acquisition links, the relations opds:cover and opds:thumbnail can be used to relate a Publication to its cover or some other visual representation. The thumbnail should not exceed 120 pixels in height or width, and images must be either GIF, JPEG, or PNG. Thumbnails may also be directly embedded via the „data“ URL scheme from RFC 2397.

Final thoughts

OPDS looks very promising and it is already being used beneficially in practice. There are some minor issues that can easily be fixed. The random selection of relation types is surely a flaw that could be repaired by allowing arbitrary RDF properties (come on, XML fanboys, you should admit that RDF is good at least at link types!), and the list of acquisition types should be cleaned up and extended, at least to support „presentation“ without lending, like DAIA does. A typical use case for this are National Licenses that require you to register before accessing the publications. For more details I would like to compare OPDS in more depth with models like DAIA, FRBR, SIOC, OAI-ORE, Europeana etc. – but not now.

First complete draft of DAIA Ontology

7 January 2010 at 19:06 · 6 comments

I just finished the first complete draft of an OWL ontology for the DAIA data model. Until the final URI prefix is settled, the ontology is available in the GBV Wiki in Notation3 syntax, but you can also get RDF/XML. There is also a browsable HTML view created with OWLDoc (I only wonder why it does not include URI prefixes like the same view of the Bibliographic Ontology does).

It turned out that mapping the XML format DAIA/XML to RDF is not trivial – although I kept this in mind when I designed DAIA. XML is mostly based on a closed-world tree data model, but RDF is based on an open-world graph model. Last month Mike Bergman wrote a good article about the clash of the Open World Assumption and the Closed World Assumption. I think as long as you only view data in the form of tables, lists, and trees, you will not grasp the concept of the Semantic Web. I don’t know whether I have fully grasped the concept of document availability with DAIA, and the ontology surely needs some further review, but it’s something to start with – just have a look!

Class or Property? Objectification in RDF and data modeling

14 August 2009 at 00:23 · 14 comments

A short Twitter statement, in which Ross Singer asked about encoding MARC relator codes in RDF, reminded me of a basic data modeling question that I have been thinking about for a while: when should you model something as a class and when should you model it as a property? Is there a need to distinguish at all? The question is not limited to RDF but fundamental in data/information modeling. In entity-relationship modeling (Chen 1976) the question is whether to use an entity or a relationship. Let me give an example with two subject-predicate-object statements in RDF Notation3:

:Work dc:creator :Agent .
:Agent rdf:type :Creator .

The first statement says that a specific agent (:Agent) has created (dc:creator) a specific work (:Work). The second statement says that :Agent is a creator (:Creator). In the first statement dc:creator is a property, while in the second :Creator is a class. You could define that the one implies the other, but you still need two different concepts, because classes and properties are disjoint (at least in OWL – I am not sure about plain RDF). In Notation3 the implications may be written as:

@forAll X1, X2. { X1 dc:creator X2 } => { X2 a :Creator }.
@forAll Y1. { Y1 a :Creator } => { @forSome Y2. Y2 dc:creator Y1 }.

If you define two URIs for the class and the property of the same concept (the concept of a creator and of creating something), then the two things are tightly bound together: everyone who ever created something is a creator, and to be a creator you must have created something. This logical rule sounds rather rude if you apply it to other concepts, like to lie and to be a liar, or to sing and to be a singer. Think about it!

Besides the lack of fuzzy logic on the Semantic Web, I miss an easy way to do „reification“ (there is another concept called „reification“ in RDF, but I have never seen it in the wild) or „objectification“: you cannot easily convert between classes and properties. In a closed ontology this is less of a problem, because you can just decide whether to use a class or a property. But the Semantic Web is about sharing and combining data! What if ontology A has defined a „Singer“ class and ontology B a „sings“ property, both referring to the same real-world concept?
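In the style of the Notation3 rules above, bridging the two ontologies would take a pair of rules like these (a sketch with made-up prefixes ont1: and ont2:):

# everyone who sings is a singer, and every singer sings something
@forAll X, Y. { X ont2:sings Y } => { X a ont1:Singer }.
@forAll X. { X a ont1:Singer } => { @forSome Y. X ont2:sings Y }.

But as long as such mappings have to be stated by hand for every pair of vocabularies, the problem only repeats itself.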

Other data modeling languages support objectification (more or less). Terry Halpin, the creator and evangelist of Object-Role Modeling (ORM), wrote a detailed paper about objectification in ORM that does not fail to mention the underlying philosophical questions. My (doubtful) philosophical intuition makes me think that properties are more problematic than classes, because the latter can easily be modeled as sets. I think the need for objectification, and for bringing together classes and properties with similar meaning, will increase the more „semantic“ data we work with. In many natural languages you can use a verb or adjective as a noun by nominalization. The meaning may slightly change, but it is still very useful for communication. Maybe we should rely more on natural language instead of dreaming of defining without ambiguity?