When talking about OWL ontologies and automated reasoning, the words (un)satisfiability and (in)consistency are bandied about like nobody’s business. We also discuss the less commonly used word incoherence. They are often used inaccurately, as well as inconsistently (ha ha) — in particular, incoherence and inconsistency are often confused. In this KBlog we define and illustrate these words.
Uli Sattler and Robert Stevens
Information Management and Bio-health Informatics Groups
School of Computer Science
University of Manchester
Ulrike.Sattler@Manchester.ac.uk and robert.stevens@Manchester.ac.uk
Consider the following ontology, which introduces names for eukaryotic cells, red blood cells, and blood:

EukaryoticCell SubClassOf: Cell, hasPart some Nucleus
RedBloodCell SubClassOf: EukaryoticCell, (hasPart only (not Nucleus))
Blood SubClassOf: hasPart some RedBloodCell
As you may have spotted already, this ontology contains “an impossibility”: a red blood cell has a nucleus, because it is necessarily a eukaryotic cell, and at the same time it has no nucleus, because it is explicitly described that way. Therefore, any model of this ontology, i.e., any structure that obeys all its axioms, cannot contain an instance of RedBloodCell, as such an instance would have to have both a nucleus and no nucleus. However, we can have models of our ontology with as many instances of EukaryoticCell as we want, because that class is “realisable”, i.e., its description contains no contradiction. In contrast, no model of our ontology has any instances of RedBloodCell, and therefore none of Blood either: an instance of Blood would require there to be an instance of RedBloodCell which, as we have seen, is impossible. Hence the above ontology is said to be consistent (it has models) but it has unsatisfiable classes, namely RedBloodCell and Blood.
An ontology in which some named classes are unsatisfiable, as in our example, is called incoherent. In general, such unsatisfiabilities need repairing: it is unlikely that we really meant to describe these classes in a contradictory way; worse, as we will see next, things can go seriously wrong if we try to instantiate an unsatisfiable class.
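The semantic argument above can be checked mechanically, at least over a tiny domain. The sketch below (our own illustration, not how an OWL reasoner actually works) enumerates every interpretation of the class names over a two-element domain; Blood is omitted to keep the search small, and the same argument applies to it. Finding a model proves consistency; the exhaustive failure to find one containing a red blood cell matches the argument in the text, though in general a reasoner has to rule out models of every size, not just small ones.

```python
from itertools import product

DOMAIN = [0, 1]
CLASSES = ["Cell", "Nucleus", "EukaryoticCell", "RedBloodCell"]

def interpretations():
    """Yield every (class extension, hasPart relation) pair over DOMAIN."""
    n = len(CLASSES) * len(DOMAIN)
    for cb in product([False, True], repeat=n):
        ext = {c: {d for d in DOMAIN if cb[i * len(DOMAIN) + d]}
               for i, c in enumerate(CLASSES)}
        for rb in product([False, True], repeat=len(DOMAIN) ** 2):
            has_part = {pair for bit, pair in
                        zip(rb, product(DOMAIN, DOMAIN)) if bit}
            yield ext, has_part

def is_model(ext, has_part):
    """Check both TBox axioms for every individual in the domain."""
    for d in DOMAIN:
        parts = {y for (x, y) in has_part if x == d}
        # EukaryoticCell SubClassOf: Cell, hasPart some Nucleus
        if d in ext["EukaryoticCell"]:
            if d not in ext["Cell"] or not (parts & ext["Nucleus"]):
                return False
        # RedBloodCell SubClassOf: EukaryoticCell, (hasPart only (not Nucleus))
        if d in ext["RedBloodCell"]:
            if d not in ext["EukaryoticCell"] or (parts & ext["Nucleus"]):
                return False
    return True

models = [(ext, hp) for ext, hp in interpretations() if is_model(ext, hp)]
print("consistent:", bool(models))                # prints consistent: True
print("RedBloodCell satisfiable:",
      any(ext["RedBloodCell"] for ext, _ in models))
# prints RedBloodCell satisfiable: False
```

The empty interpretation already satisfies all axioms, which is why the ontology is consistent, while no interpretation at all manages to put an individual into RedBloodCell.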
So, if we were to instantiate RedBloodCell by adding the assertion

Individual: MyRBC Types: RedBloodCell

then we would make our ontology inconsistent: in order to satisfy this new axiom as well, there would need to be an instance of RedBloodCell, and this instance would need to have at least one nucleus and no nucleus at the same time, a contradiction. To re-use an earlier word, we are trying to realise that which cannot be realised; there can be no instance of this class. Hence there can be no model of this extended ontology, and therefore we call it inconsistent.
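In the same brute-force spirit as before, we can sketch what the ABox assertion does: interpreting MyRBC as a fixed domain element forces that element into RedBloodCell, and then no interpretation survives. (Exhausting a two-element domain is not a general proof of inconsistency, but here the contradiction is domain-independent, as the text argues.)

```python
from itertools import product

DOMAIN = [0, 1]
MY_RBC = 0  # the individual MyRBC, interpreted as domain element 0
CLASSES = ["Cell", "Nucleus", "EukaryoticCell", "RedBloodCell"]

def is_model(ext, has_part):
    # ABox: Individual: MyRBC Types: RedBloodCell
    if MY_RBC not in ext["RedBloodCell"]:
        return False
    for d in DOMAIN:
        parts = {y for (x, y) in has_part if x == d}
        # EukaryoticCell SubClassOf: Cell, hasPart some Nucleus
        if d in ext["EukaryoticCell"]:
            if d not in ext["Cell"] or not (parts & ext["Nucleus"]):
                return False
        # RedBloodCell SubClassOf: EukaryoticCell, (hasPart only (not Nucleus))
        if d in ext["RedBloodCell"]:
            if d not in ext["EukaryoticCell"] or (parts & ext["Nucleus"]):
                return False
    return True

def has_model():
    """Search every interpretation over DOMAIN for one satisfying all axioms."""
    n = len(CLASSES) * len(DOMAIN)
    for cb in product([False, True], repeat=n):
        ext = {c: {d for d in DOMAIN if cb[i * len(DOMAIN) + d]}
               for i, c in enumerate(CLASSES)}
        for rb in product([False, True], repeat=len(DOMAIN) ** 2):
            has_part = {pair for bit, pair in
                        zip(rb, product(DOMAIN, DOMAIN)) if bit}
            if is_model(ext, has_part):
                return True
    return False

print("consistent:", has_model())  # prints consistent: False
```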
To repeat: there is no way to interpret the terms of an inconsistent ontology such that all axioms hold, which is what we mean by “this ontology has no model”. Now, the entailments (or inferences) of an ontology are those axioms that are true in all of its models. Since an inconsistent ontology has no models, this notion becomes vacuous: every axiom is trivially true in all (zero) models, so it makes little sense to talk about the inferences we can draw from an inconsistent ontology. Hence an inconsistent ontology really does require fixing.
In Protégé, whenever we ask the reasoner to classify our ontology, unsatisfiable classes are turned red in the inferred class hierarchy and made subclasses of Nothing. The latter is because, like Nothing, an unsatisfiable class can never have any instances. If our ontology is inconsistent, a new window warns us about the inconsistency and offers some help in identifying its sources.
So, while we can have unsatisfiable classes in a consistent ontology, all classes in an inconsistent ontology are unsatisfiable: an inconsistent ontology simply has no model at all, so in particular it has no model with an instance of any given class.
In summary, we can consider the following cases:

- consistent and coherent: the ontology has models, and every named class is satisfiable;
- consistent but incoherent: the ontology has models, but some named class is unsatisfiable;
- inconsistent: the ontology has no model at all, and hence every class is unsatisfiable.
Finally, things can be trickier than in the example above, and therefore harder to fix. For example, take the following ontology fragment (based on the Pets ontology):

Cow SubClassOf: Vegetarian
Vegetarian SubClassOf: Animal and eats only Plant
DisjointClasses: Plant, Animal
MadCow SubClassOf: Cow and eats some Sheep
Sheep SubClassOf: Animal
Individual: Dora Types: MadCow
This ontology is inconsistent because we have made Dora an instance of an unsatisfiable class, namely MadCow. It becomes consistent but incoherent if we remove Dora or only make her an instance of Cow. Repairing the unsatisfiability of MadCow, however, is tricky: we could do it, e.g., by redefining Vegetarian as those animals that satisfy (mainlyEats only Plant) (because they will involuntarily swallow flies and other little insects) and making mainlyEats a subproperty of eats. This “repaired” ontology is consistent and coherent, and thus we can even have an instance of MadCow without “breaking” it:
Cow SubClassOf: Vegetarian
Vegetarian SubClassOf: Animal and mainlyEats only Plant
DisjointClasses: Plant, Animal
mainlyEats SubPropertyOf: eats
MadCow SubClassOf: Cow and eats some Sheep
Sheep SubClassOf: Animal
Individual: Dora Types: MadCow
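To convince ourselves that the repaired ontology really is consistent, it is enough to exhibit a single model and check every axiom against it. The sketch below does exactly that for a hand-built interpretation (the sheep individual, here hypothetically named flossie, is our own addition; only Dora appears in the text):

```python
# A hand-built interpretation of the repaired Pets fragment.
DOMAIN = {"dora", "flossie"}  # Dora the mad cow and a (hypothetical) sheep
ext = {
    "Cow": {"dora"},
    "Vegetarian": {"dora"},
    "Animal": {"dora", "flossie"},
    "MadCow": {"dora"},
    "Sheep": {"flossie"},
    "Plant": set(),
}
eats = {("dora", "flossie")}
mainly_eats = set()  # nothing forces Dora to mainlyEats anything

def holds():
    """Return True iff this interpretation satisfies every axiom."""
    checks = [
        # Cow SubClassOf: Vegetarian
        ext["Cow"] <= ext["Vegetarian"],
        # Vegetarian SubClassOf: Animal and mainlyEats only Plant
        all(d in ext["Animal"] and
            all(y in ext["Plant"] for (x, y) in mainly_eats if x == d)
            for d in ext["Vegetarian"]),
        # DisjointClasses: Plant, Animal
        not (ext["Plant"] & ext["Animal"]),
        # mainlyEats SubPropertyOf: eats
        mainly_eats <= eats,
        # MadCow SubClassOf: Cow and eats some Sheep
        all(d in ext["Cow"] and
            any(y in ext["Sheep"] for (x, y) in eats if x == d)
            for d in ext["MadCow"]),
        # Sheep SubClassOf: Animal
        ext["Sheep"] <= ext["Animal"],
        # Individual: Dora Types: MadCow
        "dora" in ext["MadCow"],
    ]
    return all(checks)

print("is a model:", holds())  # prints is a model: True
```

Note how the repair does its work: the “mainlyEats only Plant” restriction is vacuously satisfied because Dora’s mainlyEats relation is empty, so eating the odd sheep no longer clashes with being a vegetarian.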
Unfortunately, many descriptions of ontological technology just assume that you understand these distinctions. For example, the OWL API provides an isConsistent method that returns “true if the […] ontology is consistent”, but it provides no method for checking coherence directly: instead, you ask the reasoner for the unsatisfiable classes, and if this set contains anything other than Nothing, the ontology is incoherent.
We use simple English mnemonics as an easy way to remember the distinction between consistency and coherence.
None of the following statements makes sense according to the definitions we have given here, although you will see statements of this sort in the literature.