on April 1, 2015 by Robert Stevens in Under Review, Comments (0)

Pub Quiz for people who want to learn about OWL.


The semantics of OWL can be a little tricksy at times. Having a firm hold of the semantics of axioms, the implications of those axioms, and how the description logics underlying OWL work should help in understanding the behaviour of one's ontology when it's given to a reasoner. This quiz presents some axioms and then asks some questions that allow one's knowledge to be tested and possibly enhanced once the answers are shown. You are cordially invited to make up your own examples, and your own modifications/enforcements! And to drink beer while doing this quiz.

The Authors

Uli Sattler and Robert Stevens
Information Management and BioHealth Informatics Groups
School of Computer Science
University of Manchester
Oxford Road
Manchester M13 9PL
United Kingdom
sattler@cs.man.ac.uk and robert.stevens@Manchester.ac.uk


There are quite a lot of facts about OWL ontologies to know that may affect our modelling, and these are rarely spelled out completely. Rather than making our “Top 16 things you need to know about OWL ontologies, but were too scared to ask”, we have made a quiz for you to test your understanding of them, together with explanations and examples. While we try to figure out a way to make this quiz interactive, you can try it out as it is, and we hope you enjoy the experience.


Our example ontology is:

A EquivalentTo P some B
B SubClassOf C
C SubClassOf Q only D
C DisjointFrom D
Dom(Q) = C


All questions are “True or False” questions.

Questions about models

1) For an axiom to be entailed by an ontology, it has to be true in one of the ontology's models (http://ontogenesis.knowledgeblog.org/55)?

False: An entailed axiom has to be true in all models of an ontology – and there may be many of these.

2) A model has some minimum size?

True: Sort of – it has to have at least one element in it. In fact, our example ontology has a model of size 1, where b is P-related to itself, and an instance of A, B, and C.

If we want larger models, we need to say so; see below.
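The size-1 model above can be checked mechanically. A minimal sketch in plain Python (no OWL library; the names `domain`, `ext`, and `rel` are our own illustrative choices) that verifies each axiom of the example ontology against that one-element interpretation:

```python
domain = {"b"}                       # one element only
ext = {"A": {"b"}, "B": {"b"}, "C": {"b"}, "D": set()}   # class extensions
rel = {"P": {("b", "b")}, "Q": set()}                     # property extensions

def some(prop, cls):
    # existential restriction: elements with at least one prop-successor in cls
    return {x for x in domain
            if any((x, y) in rel[prop] and y in ext[cls] for y in domain)}

def only(prop, cls):
    # universal restriction: elements all of whose prop-successors are in cls
    return {x for x in domain
            if all(y in ext[cls] for y in domain if (x, y) in rel[prop])}

checks = {
    "A EquivalentTo P some B": ext["A"] == some("P", "B"),
    "B SubClassOf C":          ext["B"] <= ext["C"],
    "C SubClassOf Q only D":   ext["C"] <= only("Q", "D"),
    "C DisjointFrom D":        not (ext["C"] & ext["D"]),
    "Dom(Q) = C":              all(x in ext["C"] for (x, y) in rel["Q"]),
}
print(checks)   # every axiom holds in this tiny interpretation
```

Note that `Q only D` and `Dom(Q) = C` hold vacuously here because `Q` relates no pairs at all.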

3) Different individual names mean different things?

False: If we add c:B to our example ontology, we can still have a model with only 1 element that is both b and c, and P-related to itself, and an instance of A, B, and C.

If we want a form of unique name assumption, we need to say so: b DifferentIndividualFrom c.

4) Different class names denote different sets?

False: Unless we force this to be the case, we can always have a model where two classes have exactly the same instances. In our example ontology, B and C can have exactly the same instances.

An extreme way of forcing classes to be different is a disjointness axiom.

5) A class always has an instance?

False: In our example, we can have models where, e.g., D has no instances at all, like the one sketched under (1).

If we want a class to always have an instance, we need to introduce it: we need to coin a name for an individual and say its type is that class.

6) (We assume that we now all know the answer to “Does a property always have a pair of elements in it?”) If Dom(Q) = C is in our ontology, then each instance of C has a Q-successor?

False: We can still have a model where not even a single pair of elements is related by Q! The domain axiom only says that if two elements are related by Q, then the first element is an instance of C.
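A tiny sketch (our own toy sets, not from the quiz) makes the asymmetry concrete: the domain axiom constrains the pairs that are in Q, but it never forces an instance of C to have a Q-successor:

```python
# Illustrative extensions for C and Q.
C = {"c1", "c2"}
Q = {("c1", "d1")}   # c2 is an instance of C with no Q-successor at all

# Dom(Q) = C: every first element of a Q-pair must be in C.
domain_axiom_holds = all(x in C for (x, y) in Q)

# The converse — that every C has a Q-successor — does NOT follow.
every_C_has_a_Q_successor = all(any(x == c for (x, y) in Q) for c in C)

print(domain_axiom_holds, every_C_has_a_Q_successor)  # True False
```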

7) If we write C SubClassOf Q only D, this means that all instances of C have a Q-successor, and all their Q-successors are in D?

False: We can have models where instances of C don’t have a single Q-successor. All we know is that, if they have Q-successors, then they are instances of D.

If we want to force that Cs do indeed have one or more Q-successors, then we need to add something like C SubClassOf Q some Thing.
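The vacuity of `only` can be seen directly. A sketch in plain Python (the names are made up for illustration): an element with no Q-successors satisfies `Q only D` trivially, whatever D is:

```python
Q = {("c1", "d1")}   # c2 has no Q-successors at all
D = {"d1"}

def satisfies_only(x):
    # x satisfies "Q only D" iff every Q-successor of x is in D
    return all(y in D for (z, y) in Q if z == x)

print(satisfies_only("c1"), satisfies_only("c2"))  # True True — c2 vacuously
```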

8) A property P can relate an element to itself, say P(b,b)?

True: It can – unless we prevent it explicitly, e.g., by making P‘s domain and range disjoint, or by saying that P is irreflexive.

9) The following ontology has a model: A SubClassOf (P some (B and not B)), b:A?

False: It doesn’t have one: A now has an instance, b, which needs a P-successor, which would need to be both a B and not a B. Since the latter is impossible, there can be no such model.
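The impossibility can be sampled mechanically: for any extension of B over any domain, `B and not B` is empty, so `P some (B and not B)` is empty, and A can have no instance. A quick plain-Python sanity check (with an illustrative domain of our own):

```python
from itertools import chain, combinations

domain = {"b", "x", "y"}

# Try every possible extension of B over this domain.
subsets = chain.from_iterable(
    combinations(domain, r) for r in range(len(domain) + 1))
for B in map(set, subsets):
    # "B and not B" is the intersection of B with its complement: always empty
    assert B & (domain - B) == set()

print("B and not B is empty for every extension of B")
```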

10) The following ontology has a model: A SubClassOf (B and not B), b:B?

True: It has one – just not one with an instance of A.

11) There is an upper bound on the size of a model?

False: Models can be of any size we want – as long as they have at least 1 element.

12) Can a model be infinite in size?

True: It can be. And in it, we can have some classes with no instances, some with finitely many instances, and others with infinitely many.

13) Can an ontology have infinitely many models?

True: It can – our example ontology from the beginning does!

14) Given class names A, B, and C, I can build only finitely many class expressions?

False: Even with a single name A, I can build A, A and A, A and A and A, …. so infinitely many class expressions. Of course these are all equivalent, but they are still syntactically different. Similarly, I could consider A, A and B, A and (B or A), A and (B or (not A)), A and (B or (not (A and B))), A and (B or (not (A and (B or A)))), …. a slightly more diverse infinite sequence of class expressions.

15) Given a class name A and a property P, I can build only finitely many class expressions that are not equivalent?

False: Consider A, P some A, P some (P some A), P some (P some (P some A)),…and we have an infinite sequence of (increasingly long) class expressions, no two of which are equivalent. If you read P as hasChild and A as happy, then this sequence describes happy (people), parents of happy (people), grandparents of happy (people), great grandparents of happy (people), etc., and of course these are all different/non-equivalent concepts.
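This sequence is easy to generate. A throwaway sketch (the function name `nested` is our own) that builds the n-th expression as a Manchester-syntax string:

```python
def nested(n):
    """Build A, P some A, P some (P some A), ... as strings."""
    expr = "A"
    for _ in range(n):
        # parenthesise only once the filler is itself a restriction
        expr = f"P some ({expr})" if " " in expr else f"P some {expr}"
    return expr

for n in range(4):
    print(nested(n))
# A
# P some A
# P some (P some A)
# P some (P some (P some A))
```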

16) Given an ontology O that has class names A, B, and C, it can entail infinitely many axioms?

True: In fact, every ontology entails infinitely many axioms – but most of them are boring. For example, every ontology entails A SubClassOf A, A SubClassOf (A and A), A SubClassOf (A and A and A), …, so we already have infinitely many entailments (these boring entailments are called tautologies). Now assume our ontology entails A SubClassOf B, then it also entails A SubClassOf (B and B), … and it also entails (A and B) SubClassOf B, so there are many boring variants of originally interesting axioms.

Finally, there are ontologies that have infinitely many interesting entailments: we can easily build an ontology that entails A SubClassOf (P some B), A SubClassOf (P some (P some B)), A SubClassOf (P some (P some (P some B))), and none of these axioms is redundant, for example the following 2-axiom ontology:

A SubClassOf P some B
B SubClassOf P some B
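Unfolding these two axioms generates the infinite family of entailments. A sketch (the helper name `entailed` is ours; this just spells the axioms out as strings, it is not a reasoner):

```python
def entailed(n):
    """The n-th entailed axiom: A SubClassOf P some (P some (... B))."""
    rhs = "B"
    for _ in range(n):
        rhs = f"P some ({rhs})" if " " in rhs else f"P some {rhs}"
    return f"A SubClassOf {rhs}"

for n in range(1, 4):
    print(entailed(n))
# A SubClassOf P some B
# A SubClassOf P some (P some B)
# A SubClassOf P some (P some (P some B))
```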

So, when an ontology editor like Protégé shows us the inferred class hierarchy or selected entailments, it only shows us a tiny fraction of them: there are always many more, and it depends on your ontology and your application how many of these are interesting or boring.

