knowledge representation – Ontogenesis: An Ontology Tutorial
http://ontogenesis.knowledgeblog.org

Common reasons for ontology inconsistency
http://ontogenesis.knowledgeblog.org/1343 | Wed, 12 Jun 2013

Summary

Following on from the previous Ontogenesis article “(I can’t get no) satisfiability” [1], this post explores common reasons for the inconsistency of an ontology. Inconsistency is a severe error which implies that none of the classes in the ontology can have instances (OWL individuals), and (under standard semantics) no useful knowledge can be inferred from the ontology.

Introduction

In the previous Ontogenesis article “(I can’t get no) satisfiability” [1], the authors discussed the notions of “unsatisfiability”, “incoherence”, and “inconsistency”. We recall that a class is “unsatisfiable” if there is a contradiction in the ontology that implies that the class cannot have any instances (OWL individuals); an ontology is “incoherent” if it contains at least one unsatisfiable class. An ontology is “inconsistent” if it is impossible to interpret its axioms such that at least one class has an instance; in other words, the ontology has no model at all, and we say that “every class is interpreted as the empty set”.

While incoherent OWL ontologies can be (and are) published and used in applications, inconsistency is generally regarded as a severe error: most OWL reasoners cannot infer any useful information from an inconsistent ontology. When faced with an inconsistent ontology, they simply report that the ontology is inconsistent and then abort the classification process, as shown in the Protégé screenshot below. Thus, when building an OWL ontology, inconsistency (and some of the typical patterns that often lead to inconsistency) needs to be avoided.

[Protégé screenshot: the reasoner reports that the ontology is inconsistent and aborts classification]

In what follows, we will outline and explain common reasons for the inconsistency of an OWL ontology, which we separate into errors caused by axioms on the class level (TBox), on the instance level (ABox), and by a combination of class- and instance-related axioms. Note that the examples are simplified: each shows, in as few axioms as possible, an effect that in practice usually arises from several axioms in combination.

Instantiating an unsatisfiable class (TBox + ABox)

Instantiating an unsatisfiable class is commonly regarded as the most typical cause of inconsistency. The pattern is fairly simple – we assign the type of an unsatisfiable class to an individual:

Individual: Dora
  Types: MadCow

where MadCow is an unsatisfiable class. The actual reason for the unsatisfiability does not matter; the contradiction here is caused by the fact that we require a class that cannot have any instances (MadCow) to have an instance named Dora. Clearly, there is no interpretation of the ontology in which the individual Dora can fulfil this requirement; we say that the ontology has no model. Therefore, the ontology is inconsistent. This example shows that, while incoherence is not a severe error as such, it can quickly lead to inconsistency, and should therefore be avoided.
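
For readers who want a concrete picture of how a class such as MadCow can become unsatisfiable in the first place, here is one hypothetical set of axioms (in Manchester syntax) that would have this effect; the property eats and the classes Vegetarian, Plant, Animal, and Sheep are invented for this sketch and are not taken from [1]:

ObjectProperty: eats

Class: Cow
  SubClassOf: Vegetarian

Class: Vegetarian
  SubClassOf: eats only Plant

Class: Sheep
  SubClassOf: Animal

DisjointClasses: Plant, Animal

Class: MadCow
  EquivalentTo: Cow and eats some Sheep

Every MadCow would have to eat some Sheep (and hence some Animal) while, as a Cow and therefore a Vegetarian, eating only Plants; since Plant and Animal are disjoint, MadCow cannot have any instances. Note that this ontology on its own is merely incoherent; it only becomes inconsistent once Dora is asserted to be a MadCow.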

Instantiating disjoint classes (TBox + ABox)

Another fairly straightforward cause of inconsistency is the instantiation of two classes which were asserted to be disjoint:

Individual: Dora
  Types: Vegetarian, Carnivore
DisjointClasses: Vegetarian, Carnivore

What we state here is that the individual Dora is an instance of both the class Vegetarian and the class Carnivore. However, we also say that Vegetarian and Carnivore are disjoint classes, which means that no individual can be both a Vegetarian and a Carnivore. Again, there is no interpretation of the ontology in which the individual Dora can fulfil both requirements; therefore, the ontology has no models and we call it inconsistent.

Conflicting assertions (ABox)

This error pattern is very similar to the previous one, but all assertions now happen in the ABox, that is, on the instance level of the ontology:

Individual: Dora
  Types: Vegetarian, not Vegetarian

Here, the contradiction is quite obvious: we require the individual Dora to be a member of the class Vegetarian and at the same time to not be a member of Vegetarian.

Conflicting axioms with nominals (all TBox)

Nominals (oneOf in OWL lingo) allow the use of individuals in TBox statements about classes; this merging of individuals and classes can lead to inconsistency. The following example, based on an example in [2], is slightly more complex than the previous ones:

Class: MyFavouriteCow
  EquivalentTo: {Dora}
Class: AllMyCows
  EquivalentTo: {Dora, Daisy, Patty}
DisjointClasses: MyFavouriteCow, AllMyCows

The first axiom in this example states that the class MyFavouriteCow contains exactly one instance, the individual Dora. In a similar way, the second axiom states that the instances of AllMyCows are exactly the individuals Dora, Daisy, and Patty. However, we then go on to say that MyFavouriteCow and AllMyCows are disjoint; that is, no member of the class MyFavouriteCow can be a member of AllMyCows. Since the first two axioms make Dora a member of both MyFavouriteCow and AllMyCows, the disjointness axiom causes a contradiction: there is no interpretation of the axioms that fulfils all three requirements. Therefore, the ontology is inconsistent.

No instantiation possible (all TBox)

The following example demonstrates an error which may not occur in a single axiom as shown here (simply because a user is unlikely to write down a statement which is so obviously conflicted), but could be the result of several axioms which, taken together, have the same effect as the axiom below. It is also non-trivial to express the axiom in Manchester syntax (the OWL syntax chosen for these examples) since it contains a General Concept Inclusion (GCI) [3], so we will bend the syntax slightly to illustrate the point.

Vegetarian or not Vegetarian
  SubClassOf: Cow and not Cow

Let’s unravel this axiom. First, in order for any individual to satisfy the left-hand side of the axiom, it has to be either a member of Vegetarian or not a member of Vegetarian. Clearly, since something either is a member of a class or is not (there are no values “in between”), the left-hand side applies to every individual in the ontology. The right-hand side (the second line) of the axiom then requires all individuals to be members of the class Cow and not Cow at the same time; again, this falls into the same category as the examples above, which means that no individual can meet this requirement. Due to this contradiction, there is no way to interpret the axiom so that it is satisfied, which renders the ontology inconsistent.
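
For reference, the same GCI can be written without bending any syntax in OWL’s functional-style syntax, which allows an arbitrary class expression on the left-hand side of SubClassOf (prefixes omitted for readability):

SubClassOf(
  ObjectUnionOf( Vegetarian ObjectComplementOf( Vegetarian ) )
  ObjectIntersectionOf( Cow ObjectComplementOf( Cow ) )
)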

Conclusion

In this post, we have discussed some of the most common reasons for inconsistency of an OWL ontology by showing – simplified – examples of the error patterns. While some of these – such as instantiation of an unsatisfiable class – can be identified fairly easily, others – such as conflicting axioms involving nominals – can be more subtle.

References

  1. U. Sattler, R. Stevens, and P. Lord, "(I can’t get no) satisfiability", Ontogenesis, 2013. http://ontogenesis.knowledgeblog.org/1329
  2. B. Parsia, E. Sirin, and A. Kalyanpur, "Debugging OWL ontologies", Proceedings of the 14th international conference on World Wide Web - WWW '05, 2005. http://dx.doi.org/10.1145/1060745.1060837
  3. U. Sattler, and R. Stevens, "Being complex on the left-hand-side: General Concept Inclusions", Ontogenesis, 2012. http://ontogenesis.knowledgeblog.org/1288

Review of What is an ontology?
http://ontogenesis.knowledgeblog.org/511 | Fri, 22 Jan 2010

This is a review of What is an ontology?

This well-written article spans both logical and philosophical considerations in ontology, providing insight into the kinds of entities that are believed to exist, how we might formally represent them, and the basic relations that may exist between them.

The discussion of relations relating to identity (transformedInto, derivedFrom) necessitates further explanation. Are the criteria for identity embedded in physical continuity or in the conscious self? Indeed, when we observe that a frog develops from a tadpole, the idea is that the material *largely* persists spatiotemporally, and that the gain and loss of parts (and the corresponding qualities) is gradual and acceptably identity-preserving. Yet we wonder whether the addition of even a single atom to a molecule through some chemical reaction maintains identity. At what point does the gain or loss of parts become important enough to require the distinction of forming a new entity? Perhaps more challenging: if we were to replace a person’s brain with another, we might perceive them to be the same individual throughout the operation, but would this criterion for identity change if consciousness followed the brain? Then what might we say of identity? Important questions indeed for formal ontology and the representation of biological knowledge.


OWL, an ontology language
http://ontogenesis.knowledgeblog.org/55 | Thu, 21 Jan 2010

This article takes the reader on an introductory tour of OWL, with particular attention on the meaning of OWL statements, their entailments, and what reasoners do. Related Knowledge Blog posts include one on ontology components, one on OWL syntaxes, and one on the extent of classes.

There are numerous ontology languages around, most prominently the Web Ontology Language OWL. OWL has been developed based on experiences with its predecessors DAML+OIL and OIL, and its design has been carried out by W3C working groups. OWL 2 is an extension and revision of OWL (which was published in 2004) and is itself a W3C recommendation.

OWL and OWL 2 are called Web Ontology Languages because they are based on web standards such as XML, IRIs, and RDF, and because they are designed so that they can be used over the web (for example, one OWL file can import others by their URI). Many usages of OWL and OWL 2, however, are rather local, for example within a single piece of software or an information system.

These languages come with a lot of options and choices, which we will only briefly mention here and only come back to when they are important. OWL comes in three flavours (OWL Full, OWL Lite, and OWL DL), and OWL 2 comes with two semantics (i.e., two ways of determining the meaning of an ontology, direct and RDF-based) and three profiles (i.e., fragments or syntactic restrictions, called OWL 2 EL, QL, and RL), and you can choose between a number of syntaxes in which to save your ontology. Since the tools, and especially the reasoners, around mostly support OWL 2’s direct semantics and OWL DL, we will concentrate here on those. Also, OWL 2 is backwards compatible with OWL, so we can discuss the advantages and new features of OWL 2 elsewhere, forget the difference for now, and just talk about OWL (and mean both OWL and OWL 2).

Next, we would like to utter a warning: OWL has been designed to be consumed by computers, so in its natural form (especially in certain syntaxes), it is really hard to read or write for humans: e.g., the following snippet of an OWL ontology in the RDF syntax says that

a:Boy owl:equivalentClass _:x .
_:x rdf:type owl:Class .
_:x owl:intersectionOf ( a:Child a:Male ) .

boys are exactly those children who are male. The same example in OWL’s functional-style syntax looks more readable,

EquivalentClasses( Boy ObjectIntersectionOf( Child Male ) )
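
For comparison, the same statement in the Manchester syntax, which was designed with human readers in mind, would read roughly as follows:

Class: Boy
  EquivalentTo: Child and Male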

but we can easily imagine a much nicer presentation of this statement, and tool developers have designed useful, goal- or application-oriented tools or visualisations. This is clearly a good thing: it helps the user to interact with an (OWL) ontology, without requiring them to be fluent in the ontology language and while supporting the task at hand.

Now what is in an OWL ontology? There is some stuff like headers and declarations around an ontology but, in essence, an OWL ontology is a set of axioms, and each of these makes a statement that we think is true about our view of the world. An axiom can say something about classes, individuals, and properties. For example, the following axioms (written in a compact, functional-style notation) talk about two classes, Man and Person, one property, hasChild, and two individuals, Alice and Bob.

SubClassOf( Man Person )

SubClassOf(Person (hasChild only Person))

ClassAssertion(Bob Man)

PropertyAssertion(hasChild Bob Alice)

Roughly speaking, these axioms say something about these classes, properties, and individuals, and this meaning is fixed through their semantics, which allows us to distinguish interpretations/structures/worlds/… that satisfy these axioms from those that don’t. For example, a structure where every Man is a Person would satisfy the first axiom, whereas one where we have a Man who is not a Person would not satisfy the first axiom. Rather confusingly for modelers in general, we call those interpretations/structures/worlds/… that satisfy all axioms of an ontology a model of this ontology. It is worth pointing out that one ontology can have many, many models, of varying size and even infinite ones. And here we can even have a sneak preview of reasoning or inferencing: assume the axioms in our ontology are such that in all its models, it happens that every GrandParent is a Parent. Then we call this an entailment or a consequence of our ontology, and we expect a reasoner to find this out and let us know (if you are familiar with Protégé, then you might have seen an inferred class hierarchy, which is basically this).

In more detail, this semantics works as follows: first, fix a set — any set of things will do, finite or infinite, as long as it is not empty. Then, take each class name (such as Man) and interpret it as a set — any set is fine, it can even be empty. Then, take each property name (such as hasChild) and interpret it as a relation on your set (basically by drawing edges between your elements) — again, you are free to choose whatever relation you like. Then, take each individual name (such as Bob) and interpret it as one of your elements. Altogether, you now have an interpretation (but remember that one ontology can have many, many interpretations). Now, to check whether your interpretation satisfies your ontology, you can go through your ontology axiom by axiom and check whether your interpretation satisfies each axiom. For example, in order for your interpretation to satisfy

  • the first axiom, SubClassOf( Man Person ), the set that interprets Man has to be a subset of the set that interprets  Person. Since this kind of sentence will soon become horribly contrived, we rather say ‘every instance of Man is also an instance of Person’.
  • the second axiom, SubClassOf(Person (hasChild only Person)), every instance of Person must be related, via the property hasChild, to instances of Person only. I.e., for an instance of Person, if it has an out-going hasChild edge, then this edge must link it to an instance of Person.
  • the third axiom, ClassAssertion(Bob Man), the element that interprets Bob must be an instance of Man (see, now it becomes quite easy?).
  • the fourth axiom, PropertyAssertion(hasChild Bob Alice), the element that interprets Bob must be related, via the hasChild property, to the element that interprets Alice.
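
To make this concrete, here is one (of the many possible) interpretations that satisfies all four axioms above; the element names b and a are arbitrary and invented for this sketch:

  Domain:     { b, a }
  Man:        { b }
  Person:     { b, a }
  hasChild:   { (b, a) }
  Bob:        b
  Alice:      a

Note that we had to put a into Person: since b is in Person and has an out-going hasChild edge to a, the second axiom forces a to be in Person as well; an interpretation that leaves a outside Person would not be a model.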

So, in this case, we could, in principle, construct or invent interpretations and test whether they satisfy our ontology, i.e., whether they are models of it or not. This would, however, hardly enable us to say something about what holds in all models of our ontology because, as mentioned earlier, there can be loads of those, even infinitely many…so we rather leave this to tools called reasoners (and they do this in a more clever way). This whole exercise should, however, help us understand the above mentioned entailment. Consider the following two axioms:

EquivalentClasses(Parent (Person and isParentOf some Person))

EquivalentClasses(GrandParent (Person and (isParentOf some (Person and (isParentOf some Person)))))

The first axiom says that the instances of Parent are exactly those elements that are a Person and are related, via isParentOf, to some instance of Person. The second axiom says that the instances of GrandParent are exactly those elements that are a Person and are related, via isParentOf, to some instance of Person who is, in turn, related, via isParentOf, to an instance of Person. Please note that the GrandParent axiom does not mention Parent. Now you can try to construct an interpretation that satisfies both axioms and in which you have an instance of GrandParent that is not a Parent…and it will be impossible…then you can think some more and come to the conclusion that these two axioms entail that every GrandParent is a Parent, i.e., that GrandParent is a subclass of Parent!
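
In other words, a reasoner presented with just these two axioms will infer the following subclass axiom, even though it is stated nowhere in the ontology:

SubClassOf( GrandParent Parent )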

Coming back to Protégé: if you look at the inferred class hierarchy in Protégé, then you see both the ‘told’ and the entailed subclass relationships. In OWL, we also have two special classes, Thing and Nothing, and they are interesting for the following reasons:

  • if Thing is a subclass of a user-defined class, say X, then every element in every interpretation is always an instance of X. This is often regarded as problematic, e.g., for reuse reasons.
  • if your class, say Y, is a subclass of Nothing, then Y can never have any instance at all, because Nothing is, according to the OWL specification, always interpreted as the empty set. In many cases, this indicates a modelling error and requires some repair (see the example below).
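
For example, the following (purely hypothetical) axiom, reusing the class name Y from the bullet point above and a made-up class Cow, would force a reasoner to classify Y as a subclass of Nothing, since no element can be both a Cow and not a Cow:

SubClassOf( Y (Cow and not Cow) )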

Finally, we can also ask our reasoner to answer a query, e.g. to give us all instances of Person. If you look again at the four axioms above, then we only have that Bob is an instance of Man, so we might be tempted not to return Bob for this query. On the other hand, we also have the axiom that says that every instance of Man is also an instance of Person, so we should return Bob because our ontology entails that Bob is a Person. Reasoners can be used to answer such queries, and they are not restricted to class names: for example, we could also query for all instances of (Person and (hasChild some Person)). Now, from the four axioms we have, we can’t infer that Bob should be returned for this query because, although we know that Bob is a Person and is hasChild related to Alice, we don’t know anything about her, and thus we don’t know whether she is a Person or not. Hence Bob can’t be returned for this query. Similarly, if we query for all instances of (Person and (hasChild max 1)), we cannot expect Bob to be in the answer: although we know that Bob is a Person and is hasChild related to Alice, we don’t know whether he has other children, unbeknownst to us. This kind of behaviour is referred to as OWL’s open world assumption.
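
Because this behaviour often surprises newcomers, here is a hypothetical sketch (in the same compact notation, reusing the names from the four axioms above) of the extra assertions that would let a reasoner return Bob for those two queries: the first makes Alice a Person, so Bob then satisfies (Person and (hasChild some Person)); the second states that Bob has at most one child, so he then also satisfies (Person and (hasChild max 1)):

ClassAssertion(Alice Person)

ClassAssertion(Bob (hasChild max 1))

Without such additional assertions, the reasoner must allow for models in which Alice is not a Person, or in which Bob has further, unknown children, and therefore cannot return Bob for either query.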

It is quite common to distinguish class-level ontologies (which only have axioms about classes, but don’t mention individuals) from instance-level ontologies (i.e., assertions about the types of, and relations between, individuals). We find ontologies that are purely class-level, such as SNOMED CT and the NCIt, where reasoning is used purely to make sure that the things said about classes and the resulting entailed class hierarchy are correct, and that nothing contradictory has been said that would lead to subclasses of Nothing or to the whole ontology being inconsistent. One interesting option is then, e.g., to export the resulting class hierarchy as a SKOS vocabulary to be used for navigation. We also find ontologies with both class- and instance-level axioms, which are used with the above query answering mechanism as a flexible, powerful means of accessing data.

Finally, if you want to use OWL for your application, you will first have to clarify whether this involves a purely class-level ontology, or whether you want to use OWL for accessing data. In the latter case, you have two options: you can leave the data in the databases, files, or formats it currently resides in, and use existing approaches (e.g., Quonto, OWLGres, or Requiem) to map this data to your class-level ontology and thus query it through the OWL ontology. Or you can extract the data and load it into an instance-level ontology and go from there. Both clearly have advantages and disadvantages, whose discussion goes beyond the scope of this article (as does that of many other aspects).

So, where to go next if you want to learn more about OWL? First, you could download an OWL editor such as Protégé 4, and follow a tutorial on how to build an OWL ontology (see below for more links). You could also read the substantial OWL Primer (it has a cool feature which lets you decide which syntaxes to show and which to hide!) and take it from there. Or you could read some of the papers on experiences with OWL in modelling biology. Regardless of what you do, building your own OWL ontology and asking reasoners to make entailments salient seems always to be a good plan.

Helpful links:

PS: I need to point out that (i) OWL is heavily influenced by classical first-order predicate logic and by research in description logics (these are fragments of first-order logic that have been developed in knowledge representation and reasoning since the late 1980s), and that (ii) OWL is much more than what is mentioned here: e.g., we can annotate axioms and classes, import other ontologies, etc., and in addition to the OWL constructors such as ‘and’, ‘some’, and ‘only’ used here, there are numerous others, far too many to be mentioned here.
