Articles – Ontogenesis: An Ontology Tutorial
http://ontogenesis.knowledgeblog.org

Pub Quiz for people who want to learn about OWL
http://ontogenesis.knowledgeblog.org/1502

Overview

The semantics of OWL can be a little tricksy at times. Having a firm hold of the semantics of axioms, the implications of those axioms, and how the description logics underlying OWL work should help in understanding the behaviour of one’s ontology when it is given to a reasoner. This quiz presents some axioms and then asks some questions that allow one’s knowledge to be tested and possibly enhanced once the answers are shown. You are cordially invited to make up your own examples and your own modifications/enforcements – and to drink beer while doing this quiz.


The Authors

Uli Sattler and Robert Stevens
Information Management and BioHealth Informatics Groups
School of Computer Science
University of Manchester
Oxford Road
Manchester
United Kingdom
M13 9PL
sattler@cs.man.ac.uk and robert.stevens@Manchester.ac.uk


Introduction

There are quite a lot of facts about OWL ontologies to know that may affect our modelling, and these are rarely spelled out completely. Rather than making our “Top 16 things you need to know about OWL ontologies, but were too scared to ask”, we have made a quiz for you to test your understanding of them, together with explanations and examples. While we try to figure out a way to make this quiz interactive, you can try it out as it is, and we hope you enjoy the experience.


Preliminaries

Our example ontology is:

A EquivalentTo P some B
B SubClassOf C
C SubClassOf Q only D
C DisjointFrom D
Dom(Q) = C

b:A

All questions are “True or False” questions.


Questions about models

1) For an axiom to be entailed by an ontology, it has to be true in one of the ontology’s models [1] ?

False: An entailed axiom has to be true in all models of an ontology – and there may be many of these.

2) A model has some minimum size?

True: Sort of – it has to have at least one element in it. In fact, our example ontology has a model of size 1, where b is P-related to itself, and an instance of A, B, and C.

If we want larger models, we need to say so….see below.

3) Different individual names mean different things?

False: If we add c:B to our example ontology, we can still have a model with only 1 element that is both b and c, and P-related to itself, and an instance of A, B, and C.

If we want a form of unique name assumption, we need to say so: b DifferentIndividualFrom c.

4) Different class names denote different sets?

False: Unless we force this to be the case, we can always have a model where two classes have exactly the same instances. In our example ontology, B and C can have exactly the same instances.

An extreme way of forcing classes to be different is a disjointness axiom.

5) A class always has an instance?

False: In our example, we can have models where, e.g., D has no instances at all, like the one sketched under (1).

If we want a class to always have an instance, we need to introduce it: we need to coin a name for an individual and say its type is that class.
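
For instance, to make sure that D always has an instance, we could add something like the following (the individual name d is just an illustrative choice):

Individual: d
    Types: D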

6) (We assume that we now all know the answer to “Does a property always have a pair of elements in it?”) If Dom(Q) = C is in our ontology, then each instance of C has a Q-successor?

False: We can still have a model where not even a single pair of elements is related by Q! The domain axiom only says that if two elements are related by Q, then the first element is an instance of C.

7) If we write C SubClassOf Q only D, this means that all instances of C have a Q-successor, and all their Q-successors are in D?

False: We can have models where instances of C don’t have a single Q-successor. All we know is that, if they have Q-successors, then they are instances of D.

If we want to force that instances of C do indeed have one or more Q-successors, then we need to add something like C SubClassOf Q some Thing.

8) A property P can relate an element to itself, say P(b,b)?

True: It can – unless we prevent it explicitly, e.g., by making P‘s domain and range disjoint, or by saying that P is irreflexive.
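
As a sketch, we could either declare P irreflexive, or give it a domain and range that our example ontology already declares disjoint (C and D):

ObjectProperty: P
    Characteristics: Irreflexive

or

ObjectProperty: P
    Domain: C
    Range: D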

9) The following ontology has a model: A SubClassOf (P some (B and not B)), b:A?

False: It doesn’t have one: A now has an instance, b, which needs a P-successor, which would need to be both a B and not a B. Since the latter is impossible, our model is impossible.

10) The following ontology has a model: A SubClassOf (B and not B), b:B?

True: It has one – just not one with an instance of A.

11) There is an upper bound on the size of a model?

False: Models can be of any size we want – as long as they have at least 1 element.

12) Can a model be infinite in size?

True: It can be. And in it, we can have some classes with no instances, some with finitely many instances, and others with infinitely many.

13) Can an ontology have infinitely many models?

True: It can – our example ontology from the beginning has!

14) Given class names A, B, and C, I can build only finitely many class expressions?

False: Even with a single name A, I can build A, A and A, A and A and A, …. so infinitely many class expressions. Of course these are all equivalent, but they are still syntactically different. Similarly, I could consider A, A and B, A and (B or A), A and (B or (not A)), A and (B or (not (A and B))), A and (B or (not (A and (B or A)))), …. a slightly more diverse infinite sequence of class expressions.

15) Given a class name A and a property P, I can build only finitely many class expressions that are not equivalent?

False: Consider A, P some A, P some (P some A), P some (P some (P some A)),…and we have an infinite sequence of (increasingly long) class expressions, no two of which are equivalent. If you read P as hasChild and A as happy, then this sequence describes happy (people), parents of happy (people), grandparents of happy (people), great grandparents of happy (people), etc., and of course these are all different/non-equivalent concepts.

16) Given an ontology O that has class names A, B, and C, it can entail infinitely many axioms?

True: In fact, every ontology entails infinitely many axioms – but most of them are boring. For example, every ontology entails A SubClassOf A, A SubClassOf (A and A), A SubClassOf (A and A and A), …, so we already have infinitely many entailments (these boring entailments are called tautologies). Now assume our ontology entails A SubClassOf B, then it also entails A SubClassOf (B and B), … and it also entails (A and B) SubClassOf B, so there are many boring variants of originally interesting axioms.

Finally, there are ontologies that have infinitely many interesting entailments: we can easily build an ontology that entails A SubClassOf (P some B), A SubClassOf (P some (P some B)), A SubClassOf (P some (P some (P some B))), and none of these axioms is redundant, for example the following 2-axiom ontology:

A SubClassOf P some B
B SubClassOf P some B

So, when an ontology editor like Protégé shows us the inferred class hierarchy or selected entailments, it only shows us a tiny fraction of the entailments: there are always many more, and it depends on your ontology and your application how many of these are interesting or boring.

References

  1. U. Sattler, "OWL, an ontology language", Ontogenesis, 2010. http://ontogenesis.knowledgeblog.org/55
The Green Green Grass of OWL
http://ontogenesis.knowledgeblog.org/1499

Overview

We are going to discuss various options for modelling properties or qualities of objects, using colour as an example. We try to compare the different options with respect to what they allow us to distinguish and query.


The Authors

Uli Sattler and Robert Stevens
Information Management and BioHealth Informatics Groups
School of Computer Science
University of Manchester
Oxford Road
Manchester
United Kingdom
M13 9PL
sattler@cs.man.ac.uk and robert.stevens@Manchester.ac.uk


Characterising classes

Assume we want to model some concepts, like toys, ideas, or plants, in an OWL ontology, e.g., Ball, Plan, or Rose. Of course we will fix, in this ontology, a vocabulary (a set of terms) for these concepts, and arrange it in a subclass hierarchy using SubClassOf axioms. But we also want to describe their relevant characteristics, i.e., the things for which we use adjectives, as in “red Ball”, “cunning Plan”, or “thorny Rose”.

OWL affords several modelling options for this very common situation. Some work better than others, and comparing the options is a useful way of illustrating how OWL works. In particular, the options differ with respect to the questions we can ask of our ontology – and get answered via a reasoner.


isGreen as an object property

Following a pattern we see frequently, we could use an object property isGreen and restrict its range to true and false: we introduce a class TruthValues and make true and false its only instances. Furthermore, we can restrict the property’s domain to physical objects, and state that isGreen is functional. Thus, a physical object can hold at most one isGreen relationship, filled by either true or false, and this would appear to capture all the situations we need for saying whether or not an object is green.
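
A minimal sketch of this pattern in Manchester syntax (all names are illustrative) might look like this:

Class: TruthValues
    EquivalentTo: {true, false}

ObjectProperty: isGreen
    Characteristics: Functional
    Domain: PhysicalObject
    Range: TruthValues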

While we may not find this style of modelling pretty, it allows us to retrieve all green things by asking for instances of isGreen value True. Or we can introduce a class GreenObjects as equivalent to PhysicalObject and (isGreen value True).

Why would we consider this first approach to be not well modelled?

Firstly, we use abstract objects for the Boolean values true and false. As a consequence, we need to state explicitly that they are different, to ensure, for example, that (isGreen value True) and (isGreen value False), when appearing on the same object, is indeed inconsistent. We would usually do this by making the individuals true and false different, using DifferentFrom.

We would also need to decide where these abstract objects true and false “live”, i.e., do we have a suitable superclass of TruthValues? And should we introduce other ways of writing them, like “0” and “1”?

Of course not: for cases like this, OWL has data properties and XML Schema datatypes: if we really like this modelling style, at least we should use a (functional) data property isGreen whose range is Boolean.
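
A sketch of this improved version, using a functional data property with a Boolean range:

DataProperty: isGreen
    Characteristics: Functional
    Domain: PhysicalObject
    Range: xsd:boolean

Class: GreenObjects
    EquivalentTo: PhysicalObject and (isGreen value true)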

Now is there anything else wrong with this style? Yes: assume we want to introduce another colour (i.e., data property), isLightGreen, and another class LightGreenObjects, following the same scheme. Then we would expect the reasoner to infer that LightGreenObjects is a subclass of GreenObjects, i.e., that everything that is light green is also green – but of course this is not entailed by our ontology! To fix this, we could try to make isLightGreen a sub (data)property of isGreen – but this will not have the desired effect: it will work fine in the case that, say, MyBall isLightGreen true because then our ontology entails that MyBall isGreen true. Unfortunately, an analogous yet unwanted behaviour will take place if MyBall isLightGreen false, i.e., our ontology would then entail that MyBall isGreen false which would lead to a contradiction in the case that MyBall is dark green!
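
To see the unwanted entailment, here is a sketch of the axioms and facts involved (names as above; MyBall is meant to be dark green, so isGreen true, but not light green):

DataProperty: isLightGreen
    SubPropertyOf: isGreen

Individual: MyBall
    Facts: isLightGreen false,
           isGreen true

Because of the sub-property axiom, the fact isLightGreen false also entails isGreen false; together with isGreen true and the functionality of isGreen, this makes the ontology inconsistent.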

So, I think we can safely consider this modelling approach to be unsuitable.


Class: Green as the class of green things

Next, let us consider another very simple approach: let’s introduce a class Green and make MyBall an instance of it, i.e., the class Green really stands for “the green things”. We can then introduce a subclass LightGreen of Green, and entailments work in the right way: light green things will be entailed to be green, whereas green things are simply green (i.e., we don’t know whether they are light green as well).

So, we can clearly say that this approach is superior – but we will soon see its limitations if we consider another colour, say Red. Of course, red isn’t green, and so we should make Red and Green disjoint. But what if MyBall is green with red dots? If I make MyBall an instance of both Red and Green, my ontology will be inconsistent! Similarly, we could not write a definition for things with more than one colour – other than by enumerating all possible colour choices, which clearly is not evolvable at all, since it requires major reworkings each time we add a new colour or change the name of a colour.
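
As a small sketch of the problem:

Class: Green
Class: Red
DisjointClasses: Green, Red

Individual: MyBall
    Types: Green, Red

This little ontology is inconsistent, even though a ball that is green with red dots is a perfectly reasonable thing to want to describe.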


Green as a subclass of colour

What the previous approach has shown us is that we should distinguish between a colour and things that have this colour. That is, let’s introduce a class Colour with subclasses Red and Green, with the latter having a subclass LightGreen. We can again state that Red and Green are disjoint. And let’s introduce a property hasColour with domain PhysicalObject and range Colour. Of course we won’t make this property functional so that MyBall can be an instance of hasColour some Red and hasColour some LightGreen.
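
A minimal sketch of this approach:

Class: Colour
Class: Red SubClassOf: Colour
Class: Green SubClassOf: Colour
Class: LightGreen SubClassOf: Green
DisjointClasses: Red, Green

ObjectProperty: hasColour
    Domain: PhysicalObject
    Range: Colour

Individual: MyBall
    Types: hasColour some Red,
           hasColour some LightGreen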

Things work much better in this modelling approach than they did before:

  • the ontology is consistent,
  • MyBall is now also an instance of hasColour some Green – because LightGreen is a subclass of Green, and
  • MyBall is an instance of multi-coloured things, i.e., of (atleast 2 hasColour Colour) – because Red and Green are disjoint, and
  • we don’t need to change the definition of multi-coloured things when we add a new colour.

So, is this the best we can do?


Green as an instance vs a class

You may wonder whether we shouldn’t make LightGreen an instance of the class Colour? After all, isn’t light green a colour?! We may think so today – but tomorrow we may want to distinguish different kinds of light green, like lime green: if we want lime green things to also be light green things, then we had better make both of them classes, with LimeGreen a subclass of LightGreen (we have written a blog on the question “instances versus classes” earlier).


Defining Colour

As an alternative or extension to the above naive model of colour, we can use well-established models of colour such as RGB or CMYK or such like. Choosing RGB as an example, we can define a colour as a (visual perceptual) property that has three numerical values associated with it, adapting one of the standard coding schemes, e.g.,

Class: Colour
        EquivalentTo:
(hasRValue some nonNegativeInteger) and
(hasGValue some nonNegativeInteger) and
(hasBValue some nonNegativeInteger)

DataProperty:  hasRValue
Domain: Colour
Range: xsd:nonNegativeInteger[<= "255"^^nonNegativeInteger]

DataProperty:  hasGValue
...

In this way, we can specify specific colour instances, e.g.,

Individual: limegreen
Facts:
        hasRValue "50"^^nonNegativeInteger,
        hasGValue "205"^^nonNegativeInteger,
        hasBValue "50"^^nonNegativeInteger

and we can try to (!) define colour classes, e.g.,

Class: Green
        EquivalentTo  :
(hasRValue some nonNegativeInteger[<= "100"^^nonNegativeInteger]) and
(hasGValue some nonNegativeInteger[>= "150"^^nonNegativeInteger]) and
(hasBValue some nonNegativeInteger[<= "100"^^nonNegativeInteger])

although the above is not really a good definition of Green: to do it properly we would need to compare the values of hasRValue, hasGValue, and hasBValue, which requires data range extensions. So, while this approach would allow us to hold, in our ontology, the exact colour(s) of things – which we could then use to, say, render their graphical representation – it has its limitations w.r.t. how we can define classes. We could consider a hybrid approach whereby we keep the RGB values of our objects, but also add suitable symbolic names for their colours; the latter could be done automatically.
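
One sketch of such a hybrid, combining the RGB facts from above with an asserted symbolic colour class:

Individual: limegreen
    Types: LightGreen
    Facts: hasRValue "50"^^nonNegativeInteger,
           hasGValue "205"^^nonNegativeInteger,
           hasBValue "50"^^nonNegativeInteger

Individual: MyBall
    Facts: hasColour limegreen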


Is Colour different from other characteristics?

For this blog, we chose the example of colour. Of course, you may need to model, in your ontology, other characteristics such as texture, shape, taste, etc. While these characteristics differ, the general approach to modelling them, and to evaluating the fitness of a model 😉, is similar: can I refine my model easily? Do I get the expected entailments? Can I define interesting classes or query for instances that share the same property?


Summary

We hope we have (a) shown you various ways in which a property of an entity (a quality) can be modelled, and (b) the effects the different approaches have on what we can define or query, and how robust they are w.r.t. future extensions. The penultimate option (having a Colour class with an object property hasColour whose left-hand side holds the objects that have that colour) is the one usually followed for describing qualities in an OWL ontology; here the dependent thing (the adjective or modifier) is separated from the independent thing (the object to which the characteristic, adjective, quality, … is applied). This is a typical separation-of-concerns approach. Most top ontologies will make this separation in some form. Whilst we’ve not explicitly used a top ontology in the Entity hasColour some Colour pattern above, we have implicitly done so [1]. It should also be pointed out that modelling colour is a tricky business [2], but it is a good example with which to bring up this common modelling situation and some options for handling it in OWL that highlight some aspects of how OWL works.

References

  1. M. Egaña, A. Rector, R. Stevens, and E. Antezana, "Applying Ontology Design Patterns in Bio-ontologies", Knowledge Engineering: Practice and Patterns, pp. 7-16. http://dx.doi.org/10.1007/978-3-540-87696-0_4
  2. P. Lord, and R. Stevens, "Adding a Little Reality to Building Ontologies for Biology", PLoS ONE, vol. 5, pp. e12258, 2010. http://dx.doi.org/10.1371/journal.pone.0012258
How does a reasoner work?
http://ontogenesis.knowledgeblog.org/1486

Summary

Reasoners should play a vital role in developing and using an ontology written in OWL. Automated reasoners such as Pellet, FaCT++, HerMiT, ELK and so on take a collection of axioms written in OWL and offer a set of operations on the ontology’s axioms – the most noticeable of which from a developer’s perspective is the inference of a subsumption hierarchy for the classes described in the ontology. A reasoner does a whole lot more and in this kblog we will examine what actually goes on inside a DL reasoner, what is offered by the OWL API and then what is offered in an environment like Protégé.


Authors

Uli Sattler and Robert Stevens
Information Management and BioHealth Informatics Groups
School of Computer Science
University of Manchester
Oxford Road
Manchester
United Kingdom
M13 9PL
Ulrike.Sattler@Manchester.ac.uk and robert.stevens@Manchester.ac.uk

Phillip Lord
School of Computing Science
Newcastle University
Newcastle
United Kingdom
NE3 7RU
phillip.lord@newcastle.ac.uk


What a reasoner does

One of the most widely known usages of the reasoner is classification – we will first explain what this is, and then sketch other things a reasoner does. There are other (reasoning) services that a reasoner can carry out, such as query answering, but classification is the service that is invoked, for example, when you click “Start Reasoner” in the Protégé 4 ontology editor.

So, assume you have loaded an ontology O (which includes resolving its imports statements and parsing the contents of all relevant OWL files) and determined the axioms in O – then you also know all the class names that occur in axioms in O: let’s call this set N (for names). When asked to classify O, a reasoner does the following three tasks:

First, it checks whether there exists a model [1] of O, that is, whether there exists a (relational) structure that satisfies [2] all axioms in O. For example, the following ontology would fail this test since Bob cannot be an instance of two classes that are said to be disjoint, i.e., each structure would have to violate at least one of these three axioms:

 Class: Human
 Class: Sponge
 Individual: Bob types: Sponge
 Individual: Bob types: Human
 DisjointClasses: Human, Sponge

In case this test is failed, the reasoner returns a warning “this ontology is inconsistent [2] ” which is handled differently in different tools. In case this test is passed, the classification process continues to the next step.

Second, for each class name that occurs in O – i.e., for each element A in N, the reasoner tests whether there exists a model of O in which we find an instance x of A, i.e., whether there exists a (relational) structure that satisfies all the axioms in O and in which A has an instance, say x. For example, the following ontology would pass the first “consistency” test, but still would fail this “satisfiability” [2] test for A (while passing it for the other class names, namely Human and Sponge):

 Class: Human
 Class: Sponge
 DisjointClasses: Human, Sponge
 Class: A SubClassOf: (likes some (Human and Sponge))
[Figure: a model of the ontology described above]

The picture above shows a model of this ontology with x and z being instances of Sponge, y being an instance of Human, and x and y liking each other. Classes that fail this satisfiability test are marked by Protégé in red in the inferred class hierarchy, and the OWL API has them as subclasses of owl:Nothing.

Thirdly, for any two class names, say A and B, that occur in O, the reasoner tests whether A is subsumed by B, i.e., whether in each model of O, each instance of A is also an instance of B. In other words, whether in each (relational) structure that satisfies all axioms in O, every instance of A is also one of B. For example, in the ontology below (where Sponge and Human are no longer disjoint), DL is subsumed by SBL which, in turn, is subsumed by AL.

 Class: Animal
 Class: Human SubClassOf: Animal
 Class: Sponge SubClassOf: Animal
 Class: SBL EquivalentTo: (likes some (Human and Sponge))
 Class: AL  EquivalentTo: (likes some Animal)
 Class: DL SubClassOf: (likes some Human) and (likes only Sponge)
[Figure: a model of the ontology described above]

The picture above shows a model of this ontology with, for example, x being an instance of Sponge, Human, and Animal, and z being an instance of SBL and AL.

The results of these subsumption tests are shown in the Protégé OWL editor in the form of the inferred class hierarchy, where classes can appear deeper than in the (told) class hierarchy.

Alternatively, we can describe the classification service also in terms of entailments: a reasoner determines all entailments of the form

A SubClassOf B

of a given ontology, where

  • A is owl:Thing and B is owl:Nothing – this is the consistency test, or
  • A is a name and B is owl:Nothing – these are the satisfiability tests, or
  • A and B are class names in the ontology – these are the subsumption tests.

When we talk about a “reasoner”, we understand that it is sound, complete, and terminating: i.e., all entailments it finds do indeed hold, it finds all entailments that hold, and it always stops eventually. If an “inference engine” relies on an algorithm that is not sound or complete or terminating, we would not usually call it a reasoner.

Other things we can ask a reasoner to do are to retrieve the instances, subclasses, or superclasses of a given, possibly anonymous, class expression, or to answer a conjunctive query.


How a reasoner works

These days, we usually distinguish between consequence-driven reasoning and tableau-based approaches to reasoning, and both are available in highly optimised implementations, i.e., reasoners.

Consequence-driven approaches have been shown to be very efficient for so-called Horn fragments of OWL (a Horn fragment is one that excludes any form of disjunction, i.e., we cannot use “or” in class expressions, or (not (A and B)), or other constructs that require some form of reasoning by case). In a nutshell, given an ontology O, they apply deduction rules (not to be confused with those in rule-based systems or logic programs) to infer entailments from O and other axioms that have already been inferred. For example, one of these deduction rules would infer

M SubClassOf (p some B)

from

A SubClassOf: B
M SubClassOf (p some A)

For example, ELK employs such a consequence-driven approach for the OWL 2 EL profile.
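
For instance, applied to the ontology used for the subsumption tests above, rules of this kind could derive DL SubClassOf: AL along the following lines (the exact rule set differs between reasoners; this is only a sketch of the idea):

DL SubClassOf: (likes some Human) and (likes only Sponge)     told
DL SubClassOf: likes some Human                               decompose the conjunction
Human SubClassOf: Animal                                      told
DL SubClassOf: likes some Animal                              the rule above
AL EquivalentTo: likes some Animal                            told
DL SubClassOf: AL                                             combine the last two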

Tableau-based reasoners have been developed for more expressive logics, e.g., FaCT++, Pellet, Racer, Konclude. As you can see, the first two tests (consistency and satisfiability) ask for a (!) model of the ontology – one in which a given class name has an instance, in the case of the satisfiability test. A tableau-based reasoner tries to construct such a model using so-called completion rules that work on (an abstraction of) such a model, trying to extend it so that it satisfies all the axioms. For example, assume it is started with the following ontology and asked whether DL is satisfiable in it:

 Class: Animal SubClassOf: (hasParent some Animal)
 Class: Human SubClassOf: Animal
 Class: Sponge SubClassOf: Animal
 Class: SBL EquivalentTo: Animal and (likes some (Human and Sponge))
 Class: AL  EquivalentTo: Animal and  (likes some Animal)
 Class: DL SubClassOf: Animal and  (likes some Human) and (likes only Sponge)

The reasoner will try to generate a model with an instance x of DL. Because of the last axiom, a completion rule will determine that x also needs to be an instance of

 Animal and  (likes some Human) and (likes only Sponge)

A so-called “and” completion rule will take this class expression apart and will explicate that x has to be an instance of DL (from the start), Animal, (likes some Human), and (likes only Sponge). Yet another rule, the “some” rule, will generate another individual, say y, as a likes-filler of x and make y an instance of Human. Now various other rules are applicable: the “only” rule can add the fact that y is an instance of Sponge because x is likes-related to y and x is an instance of (likes only Sponge). The “is-a” rule can spot that y is an instance of Animal (because it is an instance of Human, and because of the second axiom). And we can also have the “some” rule add a hasParent-filler of x, say z, and make z an Animal to satisfy the first axiom. We would also need to do a similar thing to y since y also is an Animal.
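
Summarising that run, the (abstraction of a) model constructed so far might be sketched as follows:

x : DL, Animal, (likes some Human), (likes only Sponge)
y : Human, Sponge, Animal
z : Animal

x  likes      y
x  hasParent  z

with y (and, in turn, z) still needing hasParent-fillers of their own.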

In this way, completion rules “spot” constraints imposed by axioms on a model, and thereby build a model that satisfies our ontology. Two things make this a rather complex task: firstly, we can’t keep creating hasParent-fillers of Animal forever because then we wouldn’t terminate. That is, we have to stop this process while making sure that the result is still a model. This is done by a cycle-detection mechanism called “blocking” (see the Baader and Sattler paper at the end of this kblog). Secondly, our ontology didn’t involve a disjunction: if we were to add the following axiom

Class: Human SubClassOf: Child or Adult

then we would need to guess whether y is a Child or an Adult. In our example, both guesses are ok – but in a more general setting our first guess may fail (i.e., lead to an obvious contradiction like y is an instance of 2 disjoint classes) and we would have to backtrack.

So, in summary, our tableau algorithm tries to build a model of the ontology by applying completion rules to the individuals in that model – possibly generating new ones – to ensure that all of them (and their relations) satisfy all axioms in the ontology. For an inconsistent ontology [2] (or a consistent one where we try to create an instance of an unsatisfiable class), this attempt will fail with an obvious “clash”, i.e., an obvious contradiction like an individual being an instance of 2 disjoint classes. And it will succeed otherwise with (an abstraction of) a model. For a subsumption test to decide whether A is subsumed by B, we try to build a model with an instance of A and not (B): if this fails, then A and not(B) cannot have an instance in any model of the ontology, hence A is subsumed by B!


Last words

We have tried to describe what reasoners do, and how they do this. Of course, this explanation had to be high-level and leave out a lot of detail, and we had to concentrate on the classification reasoning service and thus didn’t really mention other ones such as query answering. Moreover, a lot of interaction with a reasoner is currently being realised via the OWL API, and we haven’t mentioned this at all.

To be at least complete in some aspect, we would like to leave you with a pointer for further reading: the paper by Franz Baader and Ulrike Sattler [3] will be useful.

References

  1. U. Sattler, "OWL, an ontology language", Ontogenesis, 2010. http://ontogenesis.knowledgeblog.org/55
  2. U. Sattler, R. Stevens, and P. Lord, "(I can’t get no) satisfiability", Ontogenesis, 2013. http://ontogenesis.knowledgeblog.org/1329
  3. F. Baader, and U. Sattler, "An Overview of Tableau Algorithms for Description Logics", Studia Logica, vol. 69, pp. 5-40, 2001. http://dx.doi.org/10.1023/A:1013882326814
Ontological commitment: Committing to what an ontology claims
http://ontogenesis.knowledgeblog.org/1468

Summary

To paraphrase Davis, Shrobe & Szolovits [1], an ontological commitment is an answer to the question of “how should I think about the world?”. An ontology is a model or knowledge representation of a field of interest [2]. By using an ontology, you are committing to its world view; if you borrow from an ontology, you should also be borrowing its commitment. That is, an ontology says “there are these objects and this is how they are related”; the ontology has made that commitment and, by dint of using that ontology, so have you. This has consequences for how ontologies are used and re-used. This kblog gives an introduction to ontological commitment and illustrates how it is worth bearing in mind as you use and re-use ontologies.


Author

Robert Stevens
Bio-health Informatics Group
School of Computer Science
University of Manchester
Oxford Road
Manchester
United Kingdom
M13 9PL
robert.stevens@Manchester.ac.uk

Phillip Lord
School of Computing Science
Newcastle University
Newcastle
United Kingdom
NE3 7RU
phillip.lord@newcastle.ac.uk


What is it?

Wikipedia’s summary of ontological commitment [3] goes like this:

[If an] ontology refers to a specific vocabulary and a set of explicit assumptions about the meaning and usage of these words, then an ontological commitment is an agreement to use the shared vocabulary in a coherent and consistent manner within a specific context.

That is, when using an ontology there should be a commitment to its view on the field of interest it describes. This is the sharing or shared view that is often referred to in an information systems version of an ontology. To keep it succinct, when using terms (classes) and axioms from an ontology, one is buying into the world view of that ontology. An ontology is a theory of a field of interest [2]; that theory uses objects in its explanation of that field – those objects are its ontological commitment.

This is not only true when using an ontology for annotation, but when an ontology is authored – either de novo or when re-using another ontology.

The way knowledge is modelled makes a commitment to a world view. When modelling de novo, the modelling style makes a statement about how the entities in the field of interest are arranged. When an existing domain neutral or top ontology [4] is used, a modeller is making a commitment to that upper ontology’s understanding of how entities are arranged. By, for example, using BFO, an ontologist is committing to a view where continuants can have qualities, but processes may not; if DOLCE is used as a domain neutral ontology, a commitment is made, for example, to dividing qualities into qualia and qualities. It has to be added that the user need not believe it in the slightest, but the bottom line is that in re-using another ontology or ontology fragment, that ontology’s interpretation of a field of interest should also be re-used.

These views provided by another’s ontology are not a la carte; they are more like the menu – you take the whole view or you leave it altogether. One shouldn’t pick and choose pieces of viewpoint from a variety of ontological commitments, especially from radically different upper level, domain neutral, ontologies. One can’t just take the partial viewpoint of an ontology with which one agrees (though one can take part of an ontology and not break ontological commitment); it’s the whole commitment or nothing – just as one cannot be a bit pregnant, one cannot be a bit ontology x.

Another way of achieving an ontological commitment is by re-using parts of another ontology. For instance, by re-using relationships from the Relationships Ontology [5] a commitment is made to the ontological view taken on by that ontology fragment. When using those relationships, for instance, a modeller should commit to re-using them in the way intended. For example, part_of should be used for describing the relationships between an entity and its parts, not for describing the relationship between an entity and that in which it is contained. Similarly, taking a class from another’s ontology and changing its definition and assuming it’s the same class doesn’t work; one commits to the donor ontology’s definition or one needs a new class.

By using an ontology I’m committing to its “world view”. When using, for instance, the Gene Ontology [6] to describe the major attributes of a gene product, the annotator should be committing to adopting the GO’s view of the world – making arbitrary decisions like “GO says x means blah, but I’ll interpret x as meaning yadda yadda” leads to confusion and the world will collapse into a maelstrom of sin and corruption. Possible sins include:

  • Answers to queries are different in different uses of the ontology as interpretations change
  • One should be able to re-combine re-used parts of an ontology from various re-using ontologies back into the original and have everything work fine. If interpretations have changed in the re-using ontologies, then bad things will happen.

In re-using a concept, one should look not only at the label of that concept, but its definition and its context to be sure the ontological commitments being made are the ones you want.


Practicalities

When we say that in using an ontology you should effectively commit to it, this does not, of course, mean that you should use all of the ontology, in the sense of referring to all the terms in that ontology. In fact, many ontologies refer to very few of the terms in the domain neutral ontologies that they import.

There are also important practical considerations; reasons why you might not wish to commit to all of an ontology. Consider, for example, an OWL ontology [7]. If we import another ontology into ours, we get all of the axioms from the imported ontology whether we like it or not. This can be extremely problematic if, for instance, the other ontology is very large and we wish to use a single term. An obvious use case is referring to Homo sapiens from the NCBI taxonomy. There are a number of ways to address this problem; tools associated with MIREOT [8], for example, will extract just the terms you need. This is a relatively safe thing to do, but there are still some potential pitfalls. For example, if you just import ViridiPlantae (green plants) and Homo sapiens, and then make one a subclass of the other, your ontology will be consistent. But if anyone else imports both your ontology and the NCBI taxonomy, these statements will be seen to contradict. In this case you’re also breaking the ontological commitment to the NCBI taxonomy, where plants and humans do not have this relationship.
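
As a sketch of the problematic extract (class names shortened here for readability; in practice these would be the imported taxonomy terms):

Class: Viridiplantae
Class: Homo_sapiens SubClassOf: Viridiplantae

On its own this fragment is consistent, but anyone combining it with the NCBI taxonomy will see that it contradicts the taxonomy’s own placement of Homo sapiens.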

There are also reasons why you might wish to add new axioms describing terms from an ontology that you import. For example, say you import the Pizza Ontology [9], and discover that the lack of labels in Italian is problematic; you could add these annotations in a second ontology. Another common reason for adding more axioms to an existing ontology is to switch OWL profile. For example, versions of DOLCE can be found in the EL, QL and DL profiles, all referring to the same terms. The DL version contains axioms which will disallow statements that would be valid against the EL version.
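
For instance, the Italian-label case might be handled by a second ontology containing just annotation axioms along these lines (the pizza: prefix and the Italian labels are purely illustrative):

Class: pizza:VegetarianPizza
    Annotations: rdfs:label "pizza vegetariana"@it

Class: pizza:MushroomTopping
    Annotations: rdfs:label "funghi"@it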

In each of these cases, the practical considerations must be weighed up against the potential consequences. Ignoring axioms will mean that you are also ignoring potential inconsistencies; adding multi-lingual labels may be safe, but may equally break software which is expecting only English; adding axioms to terms from another ontology may cause undesired consequences in a third.

In general, the rule should be that if you ignore part of, or add to another ontology you should be careful to retain as much ontological commitment as you can by not contradicting the original.


Layers of commitment

Davis, Shrobe & Szolovits [1] talk of layers of commitment; what’s gone above is towards a “top” layer. Down at the bottom layer, we have the ontological commitment of the representation language itself. A knowledge representation language makes it possible to say some things about a field of interest and not others. Thus an ontology’s language is itself making an ontological commitment; it takes a view upon what things are important in the world and it is these that can be represented.

Moving up a layer, the modelling choices made in the ontology are making a commitment about the field of interest; by classifying along one primary axis as opposed to another, an ontologist is saying what he or she believes to be important about that aspect of the domain. By using, re-using or extending that ontology a modeller should “buy in” to that commitment, and is implicitly doing so by taking on the task.


Final words

An ontology makes commitments about the world it is representing. When using an ontology one is taking on that ontological commitment – there should be no “ontology x makes interpretation y, but I’ll make interpretation z”. Similarly, an ontology’s commitments are not a la carte: one cannot just leave out the bits of a point of view one doesn’t like and re-use the bits one does like – by committing to a bit one is committing to the whole (though one, of course, doesn’t have to commit to the whole thing in a mechanical sense – that is, import the whole thing). In summary, an ontology is about shared, common understanding, and ontological commitment is one of its central aspects. As ontologies are used and re-used, especially when re-using fragments of an ontology, ontological commitment should be borne in mind.

References

  1. R. Davis, H. Shrobe, and P. Szolovits, "What is a Knowledge representation?", AI Magazine, 1993.
  2. R. Stevens, A. Rector, and D. Hull, "What is an ontology?", Ontogenesis, 2010. http://ontogenesis.knowledgeblog.org/66
  3. "Ontological commitment - Wikipedia", Wikipedia, 2016. http://en.wikipedia.org/wiki/Ontological_commitment
  4. F. Gibson, "Upper Level Ontologies", Ontogenesis, 2010. http://ontogenesis.knowledgeblog.org/343
  5. B. Smith, W. Ceusters, B. Klagges, J. Köhler, A. Kumar, J. Lomax, C. Mungall, F. Neuhaus, A.L. Rector, and C. Rosse, "Relations in biomedical ontologies", Genome Biology, vol. 6, pp. R46, 2005. http://dx.doi.org/10.1186/gb-2005-6-5-r46
  6. M. Ashburner, C.A. Ball, J.A. Blake, D. Botstein, H. Butler, J.M. Cherry, A.P. Davis, K. Dolinski, S.S. Dwight, J.T. Eppig, M.A. Harris, D.P. Hill, L. Issel-Tarver, A. Kasarskis, S. Lewis, J.C. Matese, J.E. Richardson, M. Ringwald, G.M. Rubin, and G. Sherlock, "Gene Ontology: tool for the unification of biology", Nature Genetics, vol. 25, pp. 25-29, 2000. http://dx.doi.org/10.1038/75556
  7. U. Sattler, "OWL, an ontology language", Ontogenesis, 2010. http://ontogenesis.knowledgeblog.org/55
  8. M. Courtot, F. Gibson, A.L. Lister, J. Malone, D. Schober, R.R. Brinkman, and A. Ruttenberg, "MIREOT: the Minimum Information to Reference an External Ontology Term", Nature Precedings, 2009. http://precedings.nature.com/documents/3574/version/1
  9. R. Stevens, "Why the Pizza Ontology Tutorial?", Robert Stevens' Blog, 2010. http://robertdavidstevens.wordpress.com/2010/01/22/why-the-pizza-ontology-tutorial/
Walking your fingers through the trees in OWL: why there are things you want to say but can’t
http://ontogenesis.knowledgeblog.org/1446

Summary

OWL has a tree model property. This property is one of the reasons why the reasoning problems underlying, say, the computation of the inferred class hierarchy, are decidable [1]. As a modeller in OWL, we notice this property because it restricts what can be said in an ontology. Very roughly speaking, in OWL we describe tree-shaped structures: we illustrate what this means using pictures and videos of two fingers walking along the trees. Some things one would like to say in OWL would require three fingers to do the tracing between the objects in our axioms – making triangles between the objects being related. However, some forms of triangles can be described in OWL, and we illustrate these as well.

Authors

Robert Stevens and Uli Sattler

Bio-Health and Information Management Groups
School of Computer Science
University of Manchester
Oxford Road
Manchester
M13 9PL

Robert.Stevens@Manchester.ac.uk and sattler@cs.man.ac.uk

 

The tree model property of OWL

The description logic upon which OWL is based has a form of the tree-model property: the expressive power of OWL is such that the axioms in an ontology can only enforce tree-shaped arrangements of the (anonymous) objects in a model [2]. We use the core of an ontology about family history to illustrate how OWL restricts ontologies to descriptions of trees. Take the ontology:

Class: Man SubClassOf Person
Class: Woman SubClassOf Person
Class: Person SubClassOf (hasMother some Woman) and (hasFather some Man)
ObjectProperty: hasMother Characteristics: Functional SubPropertyOf hasParent
ObjectProperty: hasFather Characteristics: Functional SubPropertyOf hasParent

Here we have a simple ontology that introduces two classes, Man and Woman, that are each a subclass of Person. A person hasFather some Man and hasMother some Woman; hasFather and hasMother are functional and are sub-properties of hasParent. We have illustrated, in a video, the restriction that OWL places on how instances of the classes introduced can be related. In the video you can see how the axioms enforce a world where persons and their parents are related via the properties hasMother and hasFather in a tree-shaped structure.

Now assume we add another axiom that describes happy grandchildren (HGC) as those who have a mother who has a mother who is Nice (a nice grandmother):

Class: HGC EquivalentTo: Person and (hasMother some (hasMother some Nice))

An explanation of how we can figure out the meaning of HGC by walking our (family) tree with 2 fingers is given in the next video. Again, you can see that it is possible to use just the two fingers to talk about how the objects in these axioms are related to each other.

Making triangles of properties and objects is (mostly) not allowed

So far, what we have seen is what we can say in OWL. If we, however, wished to define a class of happy children as those children whose mother loves their father, we are facing some difficulties, as explained in our third video. In a nutshell, this idea of happy children would involve the description of a “triangle” between the child, their mother, and their father – and thus it would involve “3 fingers” to keep hold of the relationships between those elements. This breaks the tree shaped structure of the axioms – the triangle forms a non-tree shaped graph. In fact, we cannot define such a concept in OWL, since OWL class descriptions are mostly of the kind that is traceable with two fingers – that is, the tree.

When triangles of things can be done in OWL

Next, we will discuss exceptions to this rule of thumb (or fingers!).

First, we can describe triangles – or any form of structure – on named individuals. For example, see the following assertions:

Individual: Peter
Facts: hasMother Sue,
hasFather Bob

Individual: Sue
Facts: loves Bob

Here the facts asserted about Bob, Sue and Peter form a triangle with the hasMother, hasFather and loves properties. This happens all the time with individuals and it is fine to do lots of this sort of non-tree like thing.

Second, we can describe triangular forms of relations via transitive properties or via sub-property chains, as in the following property axioms:

ObjectProperty: hasGrandParent
SubPropertyChain: hasParent o hasParent

ObjectProperty: hasUncle
SubPropertyChain: hasParent o hasBrother

With the hasGrandParent property, where we see objects linked along the hasParent property, we’re forming triangles: a person’s grandparent is also linked to them via their parent. A similar observation holds true for a person’s uncle. Also, transitive properties such as hasAncestor lead to the formation of a whole series of triangles between a person and all their ancestors via the direct hasParent relationships.
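
For completeness, a sketch of such a transitive ancestor property:

ObjectProperty: hasAncestor
    Characteristics: Transitive

ObjectProperty: hasParent
    SubPropertyOf: hasAncestor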

Last words

So, while OWL is great at describing trees, we face some difficulties when we want to describe classes via features that involve triangular shapes – but triangular structures are fine in other parts of OWL, such as in facts about named individuals and as general characteristics of properties. The tree model property keeps the description logic underlying OWL decidable. Sometimes the things we’d like to say as a modeller break this tree-shaped aspect of OWL’s axioms – you can check what’s going on in your axioms and explain the constraint to yourself and others by walking along a set of objects in a model of your ontology with your fingers; when you need more than two fingers to walk along the objects involved in your description, then you’re often going to run into problems, apart from the exceptions outlined above.

References

  1. E. Grädel, "Why Are Modal Logics So Robustly Decidable?". http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.28.5238
Nine Variations on two Pictures: What’s in an Arrow Between two Nodes
http://ontogenesis.knowledgeblog.org/1423

Overview

When designing an ontology, we often start by drawing some pictures, like the one below. This is a good starting point: it allows us to agree on the main categories and terms we use, and the relations we want to consider between them. How to translate this picture into an OWL ontology may, however, be trickier than you think. In this kblog we consider various interpretations of simple “blob and line” diagrams, and thus questions you may wish to ask yourself as you use such diagrams.


The Authors

Uli Sattler and Robert Stevens
Information Management and BioHealth Informatics Groups
School of Computer Science
University of Manchester
Oxford Road
Manchester
United Kingdom
M13 9PL
sattler@cs.man.ac.uk and robert.stevens@Manchester.ac.uk


Introduction

When designing or talking about an ontology, we often start by drawing some pictures, like this one:

[Figure: an ER-type picture]

This is, in many cases, a good thing because we sketch out relevant concepts, coin names for them, identify relevant relations between concepts, and name these as well. If we ignore, for this post, finer details such as the exact shape and form of the diagrams and if we also ignore individuals or objects – i.e., we concentrate on concepts and relationships between them – then we can find two kinds of arrows or edges between them:

  • is-a links, i.e., unlabelled edges from a concept to a more general one, e.g., from Mouse to Mammal, and
  • relationship links, i.e, labelled edges from a concept to another concept that describe a relationship between these, e.g., Mouse – hasPart → Tail

Now, when we try to agree on such a picture, and then attempt to cast it into OWL, we may find that we are wondering how to exactly “translate” these edges into axioms.

Next, we discuss three possibilities for the first kind of links and six for the second one – basically as a comprehensive catalogue of what the author of such a picture might have had in mind when drawing them. In our experience, the choice as to which of these “readings” is to be applied is often quite dependent on the concepts the edges link – which can cause a lot of misunderstanding if left unresolved.

These are not novel observations; in fact, they go back to the 1970s and 1980s (see, e.g., the works by Woods, Brachman, and Schmolze [1] [2] [3]), when people working in knowledge representation and reasoning tried to give precise, logic-based semantics to semantic networks, i.e., to pictures with nodes and edges and labels on them that were used to describe the meaning of and relations between terms.


Is-a links between concepts

Let us start with is-a links between concepts, that is, with parts of pictures that look like the following examples:

[Figure: a blob-and-line diagram showing some kind of subclass relationship]

Of course, we read these as “every Feline is a Mammal” or “every Z is an X“. Thus, when translating the two arrows on the right hand side into OWL, we would add the following two axioms to our ontology:

Feline SubClassOf: Mammal
Mouse SubClassOf: Mammal

In this way, we ensure that every feline is a mammal and that every mouse is a mammal (following the OWL semantics, this is without exception; a discussion of exceptions is outside the scope of this kblog). Looking at the more anonymous picture on the left hand side, we may also wonder whether it means that everything that is an X is also a Y or a Z – which corresponds to covering constraints in other conceptual modelling formalisms such as ER or UML diagrams; of course, we know that this isn’t true for the right hand picture because we know of dogs, etc. To capture such a covering constraint, we would add the following axiom to our ontology:

X SubClassOf: Y or Z

Finally, for both sides, we could further ensure that the two subclasses are disjoint, i.e., that nothing can be an instance of both Y and Z or of both Feline and Mouse:

DisjointClasses: Y Z
DisjointClasses: Feline Mouse

In summary, there are two additional readings of “is-a” edges between classes, and they may both be intended, or only one of them, or none of them. Faithfully capturing the intended meaning in their translation into OWL not only leads to a more precise model, but also to more entailments.


Relationship links between concepts

Now let us consider relationship links, like to the ones in the following picture:

[Figure: a blob-and-line diagram of other relationships]

Understanding and capturing their meaning is a rather tricky task, and we start with the most straightforward ones.

First, we can read them as saying that every Mouse has some part that is a Tail, or that every instance of X is related, via P, to an instance of Y.

Mouse SubClassOf: hasPart some Tail
X SubClassOf: p some Y

Second and symmetrically, we can read them as saying that every Tail is a part of some Mouse, or that every instance of Y is related, via the inverse of P, to an instance of X.

Tail SubClassOf: Inverse(hasPart) some Mouse
Y SubClassOf: Inverse(p) some X

We may find the first reading more “natural” than the second, but this may be entirely due to the direction in which we have drawn these edges and the fact that we know that other animals have tails, too.

Thirdly, we can read these edges as restricting the range of the relationship p or hasPart; that is, they express that, whenever anything has an incoming p edge, it is a Y. Of course, we know that it makes no sense to say that whenever anything is a part of something else, then it is a Tail, hence we only translate the former:

Thing SubClassOf: p only Y

There are alternative forms of expressing the latter axiom in OWL, e.g.:

p Range: Y
(Inverse(p) some Thing) SubClassOf: Y

We may find this translation odd, yet if we imagine we had drawn a similar picture with “Teacher teaches courses”, then we may find it less odd.

Fourthly and symmetrically, we can read this as restricting the domain of the property labelling the edges; that is, they say that anything that has an outgoing p edge is an X, or that anything that can possibly have a part is a Mouse. Again, the latter reading makes little sense (but makes some for our teacher/courses example), so we only translate the former:

Thing SubClassOf: Inverse(p) only X

Again, there are alternative forms of expressing the latter axiom in OWL, e.g.:

p Domain: X
(p some Thing) SubClassOf: X

Finally, for readings 5 and 6, we can understand these links as restricting the number of incoming/outgoing edges to one: we could clearly read the right hand picture as saying that every Mouse has at most one Tail and/or that every Tail is part of at most one Mouse:

Mouse SubClassOf: (hasPart atmost 1 Tail)
Tail SubClassOf: (Inverse(hasPart) atmost 1 Mouse)

Interestingly, we can combine all 6 readings: for example, for our Mouse and Tail example, we could choose to apply readings 1, 5, and 6, as spelled out below; if we rename the class Tail to MouseTail, we could, additionally, adopt reading 2.
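
Spelled out, those three readings give:

Mouse SubClassOf: (hasPart some Tail)
Mouse SubClassOf: (hasPart atmost 1 Tail)
Tail SubClassOf: (Inverse(hasPart) atmost 1 Mouse)

and the rename to MouseTail would additionally license MouseTail SubClassOf: (Inverse(hasPart) some Mouse).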

If X is Teacher, p is teaches, and Y is Course, we could reasonably choose to adopt readings 1 – 4 (sketched below), and even possibly 6, depending on the kind of courses we model.
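
In axioms, those four readings amount to something like:

Teacher SubClassOf: teaches some Course
Course SubClassOf: Inverse(teaches) some Teacher
teaches Range: Course
teaches Domain: Teacher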


Summary

There are so many ways of interpreting pictures with nodes and arrows between them – knowing these possible interpretations should help us to make the right decisions when translating them into OWL. Pictures are useful for sketching, but have their ambiguities; OWL axioms have a precise semantics [4] and asking the appropriate questions of your picture can help draw out what you really mean to say and what you need to say.

References

  1. R.J. Brachman, "What IS-A Is and Isn't: An Analysis of Taxonomic Links in Semantic Networks", Computer, vol. 16, pp. 30-36, 1983. http://dx.doi.org/10.1109/MC.1983.1654194
  2. R.J. Brachman, and J.G. Schmolze, "An Overview of the KL-ONE Knowledge Representation System*", Cognitive Science, vol. 9, pp. 171-216, 1985. http://dx.doi.org/10.1207/s15516709cog0902_1
  3. R.J. Brachman, and J.G. Schmolze, "An Overview of the KL-ONE Knowledge Representation System*", Cognitive Science, vol. 9, pp. 171-216, 1985. http://dx.doi.org/10.1207/s15516709cog0902_1
  4. M. Aranguren, S. Bechhofer, P. Lord, U. Sattler, and R. Stevens, "Understanding and using the meaning of statements in a bio-ontology: recasting the Gene Ontology in OWL", BMC Bioinformatics, vol. 8, pp. 57, 2007. http://dx.doi.org/10.1186/1471-2105-8-57
An object lesson in choosing between a class and an object
http://ontogenesis.knowledgeblog.org/1418

Overview

The Web Ontology Language (OWL) and other knowledge representation languages allow an ontologist to distinguish between classes of individuals and the individuals themselves. It is not always obvious when to choose to use a class and when to use an individual. This kblog seeks to help with this choice by offering a series of questions; no one single solution is offered (though one is, but it is rejected).


Authors

Robert Stevens and Uli Sattler
Bio-health Informatics and Information Management Groups
School of Computer Science
University of Manchester
Oxford Road
Manchester
United Kingdom
M13 9PL
Ulrike.Sattler@Manchester.ac.uk and robert.stevens@Manchester.ac.uk


Introduction

OWL is all about modelling objects and their properties. Often, an OWL ontology describes only classes, but it can also explicitly mention objects.

Let’s first fix our terminology: Classes are called classes, and stand for sets of “things”. What to call these “things” is a little more problematic – the official OWL spec calls them individuals, but this is really clunky. OWL also has object properties, which relate two things, so we can also call these things “objects”, which is less clunky and much easier to type – and also a name used in the OWL specification. The word “instance” is often bandied about, but we really need to say what a thing is an instance of – “an instance of X” – so we don’t use that one either (unless it is in the form just described). In this kblog we’re going to stay with using “objects” for things, which may or may not be instances of classes.

So the central question we are discussing here is when to introduce a class and when to introduce an explicit object for a thing you want to model (a concept, notion, idea,…). There are at least two approaches to decide this question: attempting to model things as they actually are, and attempting to model things according to the needs of the target application for the ontology.


Representing the field of interest “as it is”

We can choose to believe that there exists an objective reality, and that people who know a lot about a certain area of this reality, say molecular biology, have a similar conceptualisation of this reality in their minds – and then we can attempt to describe this conceptualisation in an OWL ontology.

This appears to be a simple approach to deciding when to use an object and when to use a class for a given thing:

  • the class Person and the object Robert Stevens
  • the class Car and the object car1 that Robert was driven to the station in
  • the class red blood cell and the object rbc7 that is the individual red blood cell in the capillary at the end of Robert’s finger
  • the class Haemoglobin and the object hgc24 in the red blood cell rbc7 in that capillary at the end of Robert’s finger

The English indicator of this is the use of articles, particularly the definite article: “the entity” suggests an object, whereas the indefinite article “a” or “an” suggests a collection of possible entities or a class of entities. There are other linguistic indicators of classes and instances – not wholly reliable, but they can act as a guide.

Given that this seems a rather straightforward approach, can it go wrong? Robert Stevens makes sense (ignoring the fact that Robert Stevens has existed at different times with different properties), but the individual haemoglobin in the individual red blood cell, and so on, is probably at far too fine a grain for using named objects. There are indeed “things” where our approach doesn’t seem to help: soup, love, pdf files, the bible, the prime minister, weather forecasts, green, etc.


Modelling according to your application’s needs, in a robust way

Given that we may use our ontology in an application, this usage can nicely inform our design choices. So, once modelling reality “as it is” ceases to work, we can consider the following questions:

  • are there different manifestations of the thing in question around, and does this matter? Is the love you want to describe always the same as, e.g., the love that Romeo feels – or do you want to distinguish between motherly love, romantic love, and love of ice cream? Does everybody who reads the bible read the same book? And if so, what happens if it is lost? In the latter case, of course, we should distinguish between the content of a book and its physical manifestation – and possibly also its different editions. A similar observation holds for pdf files.
  • are you ever going to refine the thing in question? If you want to talk about green now, but possibly also about lightgreen later, then make both classes, so that you can make lightgreen a subclass of green…and limegreen a subclass of lightgreen, etc.
  • is the thing in question in a relation to other objects or values? E.g., if the green you consider has the rgb value of, say, 34-139-34, and is related via isColourOf to car1, then you may want to make this green an object – and also an instance of the class Green.
  • does uniqueness matter? If Robert Stevens has met the queen and Uli Sattler has met the queen, have we met the same person? And has that person met at least two people? If the answer to the last two questions is “yes”, then the queen should clearly be modelled as an object (see the sketch below).
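As an illustration of that last question, here is a minimal sketch in Manchester syntax; the individual and property names (TheQueen, RobertStevens, UliSattler, hasMet) are hypothetical, chosen only for this example:

Individual: TheQueen
    Types: Person

Individual: RobertStevens
    Facts: hasMet TheQueen

Individual: UliSattler
    Facts: hasMet TheQueen

Because both facts point at the same object, the queen Robert met and the queen Uli met are one and the same element. Note that to conclude she has met at least two people we would additionally need RobertStevens DifferentFrom: UliSattler, since OWL makes no unique name assumption.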

Tips

When in doubt, make it a class.

Use punning when needed: punning refers to the practice of using the same name for both an object and a class, using it as a class or as an object (or even both) where appropriate; e.g., we could use Queen both as a class and as an object, and then consider both its superclasses and the classes it is an instance of.
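A minimal sketch of such punning in Manchester syntax (the classes Person and RoyalRole are hypothetical, used only to show the two roles of the name):

Class: Queen
    SubClassOf: Person

Individual: Queen
    Types: RoyalRole

OWL 2 allows the same name to be used for a class and for an individual in this way; a reasoner treats the two uses as distinct entities that happen to share a name.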


Summary

OWL is all about objects; it’s just that we usually talk about classes of objects and the things that are true of all objects in that class. However, we can explicitly talk about the objects themselves, and a frequent question is “when do I use a class and when should I use an individual instead?”. We offered two routes: an exact representation of the field of interest; otherwise, simply doing what is best for your application’s needs. In the former, one can model ad nauseam, and so end up staying at the class level (and that’s fine). Modelling according to an application’s needs also works, but can ultimately lead to high variation from ontology to ontology. Deciding where the boundary is can be hard, but the default decision is to keep modelling with classes.

Modelling in multiple dimensions is great in so many ways http://ontogenesis.knowledgeblog.org/1401 http://ontogenesis.knowledgeblog.org/1401#respond Thu, 15 Aug 2013 14:16:36 +0000 http://ontogenesis.knowledgeblog.org/?p=1401

Overview

We describe what multi-dimensional modelling is, why it’s good for you, and how it works in OWL.


The Authors

Uli Sattler and Robert Stevens
Information Management and BioHealth Informatics Groups
School of Computer Science
University of Manchester
Oxford Road
Manchester
United Kingdom
M13 9PL
sattler@cs.man.ac.uk and robert.stevens@Manchester.ac.uk


Multi-dimensional modelling

When modelling a domain, we often find that there are many aspects to be considered, and many things to be said when describing the concepts relevant in this domain. As a result, we’d expect to see a concept at many places in the ontology’s hierarchy, or have many routes through the hierarchy to reach a concept. For example when talking about animals and cats, we can talk about their location in the taxonomy, what they eat, their anatomic structure, where they live, etc. Trying to squeeze these aspects into a single hierarchy is at least difficult – and may lead to a large and unwieldy model. Pulling these aspects or dimensions apart, modelling these dimensions separately, along with the relations between them, then putting them back together again is what OWL and reasoners have been designed to support: OWL supports us in modelling, say, animals, their habitat, prey, food, anatomy, etc. and the relations between them like lives in or eats.

Assume you want to model documents about animals. Then of course we distinguish the document dimension and the animal dimension, and structure the concepts in each of them hierarchically. For the animal dimension, we can consider standard classes such as mammals, feline, cat, etc., and for the document dimension, we can consider books, collections, articles, etc.

One thing we can observe is that we have both is-a or subsumption relationships, e.g., between collection and book, and between feline and mammal, but also other relationships, e.g., a collection contains some articles (and it is not the case that an article is-a collection), and that a cat eats some mouse.

Also, there are loads of other dimensions that would be useful to model which are neither document nor animal, e.g., contributors, authors, editors, (publication) time, target audience, etc., for documents, as well as habitat, bodyparts, diet, etc., for animals. And again, the relationship between, say, a document and its audience isn’t is-a, and neither is that between a leg and an animal.

Next, we will see how we can build models that faithfully represent these different dimensions, and what the benefit of doing so is.


Multi-dimensional modelling in OWL

In OWL, first of all, we can model is-a, namely via subclass. Hence we would start with introducing a class name for the top level of each of our dimensions, and then add other relevant class names as subclasses.

Class: Document
Class: Animal
Class: Bodypart
Class: Habitat
Class: Readership
Class: Food


Mammal SubClassOf: Animal
Feline SubClassOf: Mammal
Rodent SubClassOf: Mammal
Mouse SubClassOf: Rodent

Book SubClassOf: Document
Article SubClassOf: Document

Arm SubClassOf: Bodypart
Trunk SubClassOf: Bodypart

Children SubClassOf: Readership

Desert SubClassOf: Habitat

Second, we can link dimensions via properties. We can start by introducing domains and ranges of these properties. This isn’t required, but it will usefully cause errors – namely unsatisfiabilities – in case we use properties on classes that they weren’t designed for. For the examples below, please note that the range of about here is Animal since we consider documents about animals only – in general, this range should rather be something like Topic.

Property: about
    Domain: Document
    Range: Animal

Property: writtenFor
    Domain: Document
    Range: Readership

Property: eats
    Domain: Animal
    Range: Food

Property: livesIn
    Domain: Animal
    Range: Habitat

Property: has
    Domain: Animal
    Range: Bodypart
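
As a side note on the “usefully cause errors” point above, here is a sketch of such a misuse; the class WeirdDocument and the disjointness axiom are hypothetical additions, not part of the ontology above:

DisjointClasses: Document, Animal

WeirdDocument EquivalentTo: Document and (eats some Food)

Since the domain of eats is Animal, every instance of WeirdDocument would have to be both a Document and an Animal; with the disjointness axiom this is impossible, so the reasoner reports WeirdDocument as unsatisfiable and the misuse of eats is flagged.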

Thirdly, we can describe classes and individuals in terms of where they sit in this multi-dimensional space.

ChildrensBook EquivalentTo: Document and (writtenFor some Children)

DesertCat  EquivalentTo: Cat and (livesIn some Desert)

Cat SubClassOf: Feline and (eats some Mouse)

Elephant SubClassOf: Mammal and (has some Trunk)

Carnivore  EquivalentTo: Animal and (eats some (Animal or (inverse(has) some Animal)))

myFavBook Types: Book, (about some (Cat and livesIn some House)), about value Kitty

Kitty Types: Cat and (eats some Grass) and (eats some  (inverse(has) some Animal))

In these descriptions, we can distinguish between necessary conditions – described via SubClassOf – and necessary and sufficient conditions – described via EquivalentTo. And we can use complex class expressions both to describe classes and to describe the types of individuals.
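
To see why this distinction matters for reasoning, consider a small sketch; PictureBook is a hypothetical class, not part of the ontology above:

PictureBook SubClassOf: Document and (writtenFor some Children)

Because ChildrensBook is defined via EquivalentTo (a necessary and sufficient condition), a reasoner infers PictureBook SubClassOf: ChildrensBook. Had ChildrensBook only been given a SubClassOf axiom, this inference would not follow.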

Figure 1. A picture showing a possible hierarchical structure of our documents, and one of our ontology with the dimensions pulled apart and put together again.

Then, of course, we can ask a reasoner to classify our ontology and to infer the classes that our individuals are instances of. For example, we can infer from our example ontology that Cat is a subclass of Carnivore and that Kitty is a Carnivore. This pulling apart of different dimensions into their own hierarchies, relating a class of objects to its dimensions via properties, and then building a polyhierarchy by using defined classes is the technique advocated in ontology normalisation.
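In Manchester syntax, these inferences would show up as (inferred, not asserted):

Cat SubClassOf: Carnivore

Kitty Types: Carnivore

The first follows because a Cat is a Feline (hence a Mammal, hence an Animal) that eats some Mouse, and a Mouse is an Animal; the second follows because Kitty is asserted to be a Cat.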


Advantages of Multi-dimensional modelling in OWL

First, by separating the different dimensions we want to talk about in our ontology and relating them via suitable properties, we can build an informed model of our application that reflects the different kinds of things that we want to talk about and the different relations between them. For example, we can distinguish between the things that animals eat, the things that they hunt, and the things that they play with. And we can distinguish the audience a document was written for from the one it is then read by, or the one that it is advertised to. OWL also allows us to relate properties to each other: we can use hunts SubPropertyOf eats to state that things hunted by an animal are also eaten by it, but not necessarily vice versa (clearly, we wouldn’t want, say, carrots to be hunted – but we may also want to describe scavenging animals).
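Written out in the style used above (the fact about Kitty and the hypothetical individual someMouse are only for illustration):

Property: hunts
    SubPropertyOf: eats

Individual: Kitty
    Facts: hunts someMouse

From these, a reasoner infers that Kitty eats someMouse – but not the other way round: an asserted eats fact tells us nothing about hunting.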

Secondly, these nicely separated dimensions can be readily re-used; we can take our hierarchy of habitats and use it somewhere else too. More importantly, we are free to reuse existing (sub)ontologies to describe some of these dimensions; e.g., if we were really interested in building an ontology for documents about animals, we would be foolish not to reuse existing animal ontologies for the animal dimension (and possibly others as well).

Thirdly, in our experience, this style of modelling leads not only to clearer, but also to smaller ontologies: rather than having one big hierarchy where subhierarchies get multiplied, we keep those subhierarchies separate and simply relate to them via properties. For example, we will have one hierarchy of animals – rather than a hierarchy of animals by habitat and another one by feeding habit, and then a hierarchy of books by animals by habitat and another one by animals by feeding habit.

Fourthly, multi-dimensional modelling can be nicely exploited by post-coordination: as in the examples above, we can describe (the types of) individuals via complex class expressions and we can also relate them to each other via properties. All of this is then taken into account when classifying them (i.e., determining the named classes an individual is an instance of), and we can also exploit this in tools where we filter individuals, either on named classes or on complex class expressions. This allows us to generate different views of our ontology: we can simply view/expose/export the (inferred) class hierarchy or relevant subtrees thereof (e.g., subclasses of Document), or we can generate a multi-dimensional view that allows us to browse simultaneously through the different dimensions of our ontology; see our N8 ontology browser for a small example.

Finally, separating a domain of interest out into its different dimensions is what ontology modelling is all about.

Friends and Family: Exploring Transitivity and Subproperties http://ontogenesis.knowledgeblog.org/1376 http://ontogenesis.knowledgeblog.org/1376#respond Thu, 08 Aug 2013 14:03:20 +0000 http://ontogenesis.knowledgeblog.org/?p=1376

Summary

An exploration of the relationship between subproperties and property characteristics, in particular transitivity.

Author

Sean Bechhofer
Information Management Groups
School of Computer Science
University of Manchester
Oxford Road
Manchester
United Kingdom
M13 9PL
sean.bechhofer@manchester.ac.uk

Property Characteristics and Subproperties

Transitive properties can be very useful in ontologies. Recall that a property P is transitive if and only if the following is true:

* For all x, y, and z: P(x,y) and P(y,z) => P(x,z)

An example of a transitive property is “ancestor”. Any ancestor of an ancestor of mine is also an ancestor of mine. OWL provides us with an axiom for stating that a particular property is transitive.

ObjectProperty: ancestor
  Characteristics: Transitive

The notion of a subproperty is also useful. For properties Q and R, R is a subproperty of Q if and only if

* For all x, y: R(x,y) => Q(x,y)

An example of a subproperty relationship is “hasParent” and “hasFather”. Any two individuals that are related via the father relationship must be related via the parent relationship.

ObjectProperty: hasParent

ObjectProperty: hasFather
  SubPropertyOf: hasParent

Sometimes there is confusion over the way in which characteristics like transitivity interact with the sub/super property hierarchy. As far as transitivity is concerned, the characteristic is not “inherited” by subproperties – we cannot infer that a subproperty of a transitive property is transitive. The same holds for superproperties.

To illustrate this, consider the following example. We have three (object) properties: knows, hasFriend, marriedTo. One of these (hasFriend) is transitive (now you might question this as a piece of modelling, but please just go with my rosy world-view that all the friends of my friends are also friends), and the properties are arranged in a hierarchy. In Manchester syntax we would have:

ObjectProperty: knows

ObjectProperty: hasFriend
  Characteristics: Transitive
  SubPropertyOf: knows

ObjectProperty: isMarriedTo
  SubPropertyOf: hasFriend

And yes I know that expecting marriage to imply friendship is again hopelessly optimistic, but I’m a hopeless optimist.

Now, consider a domain with four elements, Arthur, Betty, Charlie and Daphne. They are related as follows:

* Arthur knows Betty.

* Betty knows Charlie and Daphne.
* Betty hasFriend Charlie and Daphne.
* Betty isMarriedTo Charlie.

* Charlie knows Daphne.
* Charlie hasFriend Daphne.
* Charlie isMarriedTo Daphne.

The situation is as pictured below.

[Figure: the four individuals Arthur, Betty, Charlie and Daphne, and the knows, hasFriend and isMarriedTo relationships between them.]
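
Purely as an illustration, the same situation written down as assertions in Manchester syntax (these correspond to the picture; they are not part of the ontology above):

Individual: Arthur
  Facts: knows Betty

Individual: Betty
  Facts: knows Charlie, knows Daphne, hasFriend Charlie, hasFriend Daphne, isMarriedTo Charlie

Individual: Charlie
  Facts: knows Daphne, hasFriend Daphne, isMarriedTo Daphne

Individual: Daphne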

If we look at the ontology presented above, we can see that all the axioms hold — the subproperty axioms are being respected, as is the transitivity of hasFriend. Thus this situation is a model of the ontology.

Now if we consider isMarriedTo, we can see that our conditions for transitivity do not hold. There are three elements with isMarriedTo(Betty,Charlie) and isMarriedTo(Charlie,Daphne), but we do not have isMarriedTo(Betty,Daphne). So we cannot infer that isMarriedTo is transitive from the axioms. Similarly, there are three elements where knows(Arthur,Betty) and knows(Betty,Charlie) but we don’t have knows(Arthur,Charlie).

Recall that the inferences we can make from an ontology or collection of axioms are those things that necessarily hold in all models of the ontology. This little sample model provides us a “witness” for the fact that we cannot infer that knows is transitive from the axioms. Similarly, we cannot infer that isMarriedTo is transitive.

Of course, this is just saying that we can’t in general make such an inference. We are not saying that superproperties cannot (sometimes) be transitive. If we add to our interpretation the fact that Arthur knows Charlie and Daphne, then in this interpretation, knows is indeed transitive. And if we allow Betty to marry Daphne — hey, it’s 2013! — then we have a transitive subproperty (in this interpretation).

On the topic of transitivity and sub properties, the thesaurus representation SKOS uses a common modelling pattern, where a non-transitive property (skos:broader) has a transitive superproperty (skos:broaderTransitive) defined. The superproperty is not intended to be used for asserting relationships, but can be used to query for transitive chains of skos:broader relationships (assuming our query engine is performing inference). As we now know, this doesn’t mean that skos:broader is necessarily transitive.
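Sketched in the Manchester-syntax style used here (SKOS itself is published as RDF, so this is just an illustration of the pattern):

ObjectProperty: skos:broaderTransitive
  Characteristics: Transitive

ObjectProperty: skos:broader
  SubPropertyOf: skos:broaderTransitive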

This pattern is also often used for representing partonomy. Here we would use a (non-transitive) hasDirectPart for asserting parts of a whole, with a transitive superproperty hasPart allowing us to query the transitive closure. We can use counting with hasDirectPart — for example min or max cardinality restrictions — which we would not be able to do in OWL DL if hasDirectPart was transitive, due to restrictions relating to simple properties (see the OWL2 Structural Specification).
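The partonomy version of the pattern looks like this (Car and Wheel are hypothetical class names, used only for illustration):

ObjectProperty: hasPart
  Characteristics: Transitive

ObjectProperty: hasDirectPart
  SubPropertyOf: hasPart

Car SubClassOf: hasDirectPart exactly 4 Wheel

The cardinality restriction is allowed because hasDirectPart is a simple property; an analogous restriction on the transitive hasPart would fall outside OWL DL.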

For other property characteristics the situation is different. For example, a subproperty of a functional property must be functional. Why? We’ll leave that as an exercise for the interested reader……

Common reasons for ontology inconsistency http://ontogenesis.knowledgeblog.org/1343 http://ontogenesis.knowledgeblog.org/1343#respond Wed, 12 Jun 2013 20:18:05 +0000 http://ontogenesis.knowledgeblog.org/?p=1343

Summary

Following on from the previous Ontogenesis article “(I can’t get no) satisfiability” [1], this post explores common reasons for the inconsistency of an ontology. Inconsistency is a severe error which implies that the ontology has no models at all – there is no way to interpret it such that its classes have instances (OWL individuals) – and (under standard semantics) no useful knowledge can be inferred from the ontology.

Introduction

In the previous Ontogenesis article “(I can’t get no) satisfiability” [1], the authors discussed the notions of “unsatisfiability”, “incoherence”, and “inconsistency”. We recall that a class is “unsatisfiable” if there is a contradiction in the ontology that implies that the class cannot have any instances (OWL individuals); an ontology is “incoherent” if it contains at least one unsatisfiable class. An ontology is “inconsistent” if it is impossible to interpret its axioms without contradiction at all, i.e., the ontology has no models; in particular, there is no interpretation in which any of its classes has an instance.

While incoherent OWL ontologies can be (and are) published and used in applications, inconsistency is generally regarded as a severe error: most OWL reasoners cannot infer any useful information from an inconsistent ontology. When faced with an inconsistent ontology, they simply report that the ontology is inconsistent and then abort the classification process, as shown in the Protégé screenshot below. Thus, when building an OWL ontology, inconsistency (and some of the typical patterns that often lead to inconsistency) needs to be avoided.

[Protégé screenshot: the reasoner reporting that the ontology is inconsistent.]

In what follows, we will outline and explain common reasons for the inconsistency of an OWL ontology which we separate into errors caused by axioms on the class level (TBox), on the instance level (ABox), and by a combination of class- and instance-related axioms. Note that the examples are simplified versions which represent, in as few axioms as possible, the effects multiple axioms in combination can have on an ontology.

Instantiating an unsatisfiable class (TBox + ABox)

Instantiating an unsatisfiable class is commonly regarded as the most typical cause of inconsistency. The pattern is fairly simple – we give an individual an unsatisfiable class as its type:

Individual: Dora
  Types: MadCow

where MadCow is an unsatisfiable class. The actual reason for the unsatisfiability does not matter; the contradiction here is caused by the fact that we require a class that cannot have any instances (MadCow) to have an instance named Dora. Clearly, there is no interpretation in which the individual Dora can fulfil this requirement; we say that the ontology has no model. Therefore, the ontology is inconsistent. This example shows that, while incoherence is not a severe error as such, it can quickly lead to inconsistency, and should therefore be avoided.

Instantiating disjoint classes (TBox + ABox)

Another fairly straightforward cause of inconsistency is the instantiation of two classes which were asserted to be disjoint:

Individual: Dora
  Types: Vegetarian, Carnivore
  DisjointClasses: Vegetarian, Carnivore

What we state here is that the individual Dora is an instance of both the class Vegetarian and the class Carnivore. However, we also say that Vegetarian and Carnivore are disjoint classes, which means that no individual can be both a Vegetarian and a Carnivore. Again, there is no interpretation of the ontology in which the individual Dora can fulfil both requirements; therefore, the ontology has no models and we call it inconsistent.

Conflicting assertions (ABox)

This error pattern is very similar to the previous one, but all assertions now happen in the ABox, that is, on the instance level of the ontology:

Individual: Dora
  Types: Vegetarian, not Vegetarian

Here, the contradiction is quite obvious: we require the individual Dora to be a member of the class Vegetarian and at the same time to not be a member of Vegetarian.

Conflicting axioms with nominals (all TBox)

Nominals (oneOf in OWL lingo) allow the use of individuals in TBox statements about classes; this merging of individuals and classes can lead to inconsistency. The following example, based on an example in [2], is slightly more complex than the previous ones:

Class: MyFavouriteCow
  EquivalentTo: {Dora}
Class: AllMyCows
  EquivalentTo: {Dora, Daisy, Patty}
  DisjointClasses: MyFavouriteCow, AllMyCows

The first axiom in this example requires that every instance of the class MyFavouriteCow must be the individual Dora. In a similar way, the second axiom states that any instance of AllMyCows must be one of the individuals Dora, Daisy, or Patty. However, we then go on to say that MyFavouriteCow and AllMyCows are disjoint; that is, no member of the class MyFavouriteCow can be a member of AllMyCows. Since we have already stated that Dora is a member of both MyFavouriteCow and AllMyCows, the final disjointness axiom causes a contradiction, which means there cannot be any interpretation of the axioms that fulfils all three requirements. Therefore, the ontology is inconsistent.

No instantiation possible (all TBox)

The following example demonstrates an error which may not occur in a single axiom as it is shown here (simply because it is unlikely that a user would write down a statement which is so obviously contradictory), but could be the result of several axioms which, when taken together, have the same effect as the axiom below. It is also non-trivial to express the axiom in Manchester syntax (the OWL syntax chosen for these examples) since it contains a General Concept Inclusion (GCI) [3], so we will bend the syntax slightly to illustrate the point.

Vegetarian or not Vegetarian
  SubClassOf: Cow and not Cow

Let’s unravel this axiom. First, in order for any individual to satisfy the left-hand side of the axiom, it has to be either a member of Vegetarian or not a member of Vegetarian. Clearly, since something either is a member of a class or it is not (there are no values “in between”), the left-hand side holds for all individuals in the ontology. The right-hand side (or, second line) of the axiom then requires all individuals to be a member of the class Cow and not Cow at the same time; again, this falls into the same category as the examples above, which means that no individual can meet this requirement. Since every model must contain at least one element, there is no way to interpret the axiom so as to satisfy it, which renders the ontology inconsistent.
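
In this particular case, since the left-hand side covers every individual and is therefore equivalent to owl:Thing, the same constraint can also be written without bending the syntax:

Class: owl:Thing
  SubClassOf: Cow and not Cow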

Conclusion

In this post, we have discussed some of the most common reasons for inconsistency of an OWL ontology by showing – simplified – examples of the error patterns. While some of these – such as instantiation of an unsatisfiable class – can be identified fairly easily, others – such as conflicting axioms involving nominals – can be more subtle.

References

  1. U. Sattler, R. Stevens, and P. Lord, "(I can’t get no) satisfiability", Ontogenesis, 2013. http://ontogenesis.knowledgeblog.org/1329
  2. B. Parsia, E. Sirin, and A. Kalyanpur, "Debugging OWL ontologies", Proceedings of the 14th international conference on World Wide Web - WWW '05, 2005. http://dx.doi.org/10.1145/1060745.1060837
  3. U. Sattler, and R. Stevens, "Being complex on the left-hand-side: General Concept Inclusions", Ontogenesis, 2012. http://ontogenesis.knowledgeblog.org/1288