In my previous post I explained existential property restrictions. In this post I want to deal with universal property restrictions. In ontology design, existential property restrictions tend to be used more often than universal property restrictions. However, beginners in ontology design tend to prefer universal property restrictions, which can lead to inconsistencies that can be very difficult to debug, even for experienced ontology designers [1, 2].
We again start with a very simple ontology:
ObjectProperty: owns

Class: Cat
    SubClassOf: Pet

Class: Dog
    SubClassOf: Pet

Class: Person
    DisjointWith: Pet

Class: Pet
    DisjointUnionOf: Cat, Dog
    DisjointWith: Person
We will use this ontology as the basis for explaining universal property restrictions. We assume we want to extend this ontology with a DogLover class to represent persons owning only dogs. In this post:
- I will introduce the DogLover class, which I will define as Class: DogLover EquivalentTo: owns only Dog,
- I will show you how this definition can lead to inconsistencies that can be difficult to debug, and
- I will explain how we can fix this error.
Class: DogLover EquivalentTo: owns only Dog
Let us start out by defining the DogLover class as

Class: DogLover
    EquivalentTo: owns only Dog
We can run the reasoner to ensure that our ontology is currently consistent. We can test our ontology by adding an aDogOnlyOwner individual owning a Dog.

Individual: aDog
    Types: Dog

Individual: aDogOnlyOwner
    Facts: owns aDog
If we run the reasoner it will not infer that aDogOnlyOwner is a DogLover, the reason being that, due to the open world assumption, the reasoner has no information from which it can derive that aDogOnlyOwner owns only dogs. We can try to fix this by stating that

Individual: aDogOnlyOwner
    Types: owns max 0 Cat
Again the reasoner will not infer that aDogOnlyOwner owns only dogs. This is because it still allows for the possibility that aDogOnlyOwner can own, say, a house. To exclude all other options we have to define aDogOnlyOwner as follows:

Individual: aDogOnlyOwner
    Types: owns max 0 (not Dog)

which enforces that aDogOnlyOwner owns nothing besides dogs. The reasoner will now infer that aDogOnlyOwner is a DogLover. We can also test that individuals of type DogLover cannot own cats, for example:

Individual: aCat
    Types: Cat

Individual: aDogLover
    Types: DogLover
    Facts: owns aCat
If we run the reasoner, it will give an inconsistency. As I said in my previous post, it is always a good idea to review the explanations for an inconsistency, even when you expect an inconsistency, as seen in Figure 1. It states that the inconsistency is due to
- aDogLover owning a cat (aDogLover owns aCat),
- a cat is not a dog (Pet DisjointUnionOf Cat, Dog),
- an individual that loves dogs owns only dogs (DogLover EquivalentTo owns only Dog),
- the aDogLover individual is of type DogLover (aDogLover Type DogLover), and
- the aCat individual is of type Cat (aCat Type Cat).

Figure 1
At this point you may think our definition of DogLover is exactly what we need, but it contains a rather serious flaw.
A Serious Flaw
To keep our ontology as simple as possible for this section, please remove any individuals you may have added to your ontology, but leave the DogLover class as we have defined it in the previous section. You can run the reasoner to confirm that the ontology is still consistent. What we now want to do is to infer that when an individual owns something, that individual is a person. The way we can achieve this is by defining the domain of the owns property:

ObjectProperty: owns
    Domain: Person
Now, this looks like a rather innocent change, but when you run the reasoner again you will find that Pet, Cat and Dog are all equivalent to owl:Nothing while Person is equivalent to owl:Thing. An explanation for why Pet is equivalent to owl:Nothing is given in Figure 2.

Figure 2
The explanation given in Figure 2 can be difficult to understand. Indeed, research has shown that there are explanations that are difficult to understand even for experienced ontology designers [1, 2]. In cases where it is hard to understand explanations, using laconic explanations can be helpful. Laconic justifications aim to remove subexpressions from axioms that do not contribute to explaining an entailment or inconsistency [1, 2]. Ticking the “Display laconic explanation” option displays the laconic explanation shown in Figure 3.

Figure 3
The main difference between the explanations in Figures 2 and 3 is

DogLover EquivalentTo owns only Dog

versus

owns only owl:Nothing SubClassOf DogLover

Where does owns only owl:Nothing SubClassOf DogLover come from? Recall that A EquivalentTo B is just syntactic sugar for the two axioms A SubClassOf B and B SubClassOf A. What this explanation is saying is that there is a problem with the owns only Dog SubClassOf DogLover part of our axiom (there is no problem with the DogLover SubClassOf owns only Dog part of our axiom). Furthermore, it states that Dog is inferred to be equivalent to owl:Nothing. For this we need to understand the meaning of owns only Dog better.
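To make the syntactic sugar explicit, here is a minimal sketch of how our single EquivalentTo axiom unpacks into its two SubClassOf directions. The first axiom below is the harmless direction; the second is the direction the laconic explanation points at:

Class: DogLover
    SubClassOf: owns only Dog

owns only Dog SubClassOf: DogLover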

Figure 4
In Figure 4 I give an example domain where I make the assumption that for all individuals all ownership information is specified explicitly. Thus, individuals with no ownership links own nothing, and individuals with ownership links own only what is specified and nothing else. The owns only Dog class includes all individuals that are known to own nothing besides dogs. However, a confusing aspect of the semantics of universal restrictions is that it also includes those individuals that own nothing. To confirm this for yourself you can use the following ontology (from which I have removed the domain restriction for the moment). This ontology will infer that anIndividualOwningNothing is of type DogLover.

ObjectProperty: owns

Class: Cat
    SubClassOf: Pet

Class: Dog
    SubClassOf: Pet

Class: Person
    DisjointWith: Pet

Class: Pet
    DisjointUnionOf: Cat, Dog
    DisjointWith: Person

Class: DogLover
    SubClassOf: Person

owns only Dog SubClassOf: DogLover

Individual: anIndividualOwningNothing
    Types: owns only (not owl:Thing)
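(As an aside, if you prefer cardinality restrictions, an equivalent way to state that anIndividualOwningNothing has no owns successors at all should be the following sketch; it expresses the same closure and is not an additional axiom of the ontology above.)

Individual: anIndividualOwningNothing
    Types: owns max 0 owl:Thing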
We are now ready to explain why Pet is equivalent to owl:Nothing:
- owns Domain Person states that whenever an individual owns something, that individual is a Person.
- owns only Dog SubClassOf DogLover includes saying that when an individual owns nothing at all, that individual is a DogLover.
- Since DogLover is a subclass of Person, Person now includes individuals that own something and individuals that own nothing, which means Person is equivalent to owl:Thing.
- Person and Pet are disjoint, hence Pet must be equivalent to owl:Nothing.
Conclusion
So how do we fix our ontology? Simple: we enforce that for someone to be a DogLover, they must own at least one dog and nothing else but dogs.

Class: DogLover
    EquivalentTo: owns some Dog and owns only Dog
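As a quick sanity check of this fix (a sketch, reusing the aDog and aDogOnlyOwner individuals from earlier in the post), the following assertions should now be enough for the reasoner to classify aDogOnlyOwner as a DogLover: it owns at least one dog, and we have closed the world so that it owns nothing besides dogs.

Individual: aDog
    Types: Dog

Individual: aDogOnlyOwner
    Types: owns max 0 (not Dog)
    Facts: owns aDog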
The example ontologies of this post can be found on GitHub.
Bibliography
[1] M. Horridge, Justification Based Explanation in Ontologies, Ph.D. Thesis, University of Manchester, 2011.
[2] M. Horridge, S. Bail, B. Parsia, and U. Sattler, Toward Cognitive Support for OWL Justifications, Knowl.-Based Syst. 53 (2013), 66–79.
Great post! Helped a lot.
One question: I tried running your FixingIt example (from the repo), but it doesn’t behave as I expect.
I tried adding two individuals, “aPerson” and “aDog”, each of its corresponding class. If aPerson owns aDog, I would expect the reasoner to infer that aPerson is a DogLover. However, this is not the case. From the OWA I understand aPerson might own anything, and it will never be inferred to be a DogLover, the only exception being if you explicitly say that aPerson owns only aDog and nothing else. That said, universal property restrictions will only be satisfied when explicitly “closing the world”, correct?
Hi there Nicolas,
Thank you for your feedback. I appreciate it.
Yes, adding the aPerson and aDog individuals will NOT result in the reasoner inferring that aPerson is a DogLover, exactly because of the OWA: you have to explicitly close the world for aPerson by stating that aPerson owns nothing besides dogs.
However, your statement “universal property restrictions will only be satisfied when explicitly closing the world” is not mathematically correct. Here I am being pedantic because satisfaction in mathematical logic has specific and precise semantics. Based on the definition of satisfaction, aPerson is satisfiable, even if you do not close the world. In particular the ontology with aPerson will be consistent despite not closing the world for aPerson. The problem is not that of satisfaction but that aPerson will not be CLASSIFIED as a DogLover.
Rather, I think the question you are trying to ask is: “If you want to classify an individual based on universal property restrictions, do you need to close the world?” The answer is YES, you will have to close the world.
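(Just as a sketch, assuming your individual names aPerson and aDog and the FixingIt definition of DogLover: closing the world along the lines of what we did for aDogOnlyOwner in the post should let the reasoner classify aPerson as a DogLover.)

Individual: aDog
    Types: Dog

Individual: aPerson
    Types: Person, owns max 0 (not Dog)
    Facts: owns aDog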
Thank you for your question. If you have any further questions regarding DLs, feel free to ask!
Keep well,
Henriette
Thank you Henriette for your reply!
You nailed it, that was actually what I had in mind. Been reading your blog, very cool stuff. Keep it up!
Best,
Nicolas
PD: do you have any book recommendations on OWL and semantic data? I’m looking for something with a practical approach but with references to theory (just like your blog!)
Hi there Nicolas,
You can have a look at Demystifying OWL for the Enterprise by Michael Uschold (Amazon) and The Knowledge Graph Cookbook (under Downloads), but they are not very theoretical. There is also An Introduction to Ontology Engineering by Maria Keet (also under Downloads), which does provide the theory but is not that strong on application.
When I started out with OWL, what I found really useful is the link between Object Orientation and OWL. I have written about that here, and you can read more about it in my MSc dissertation (also under Downloads), where I use it to validate business requirements and find various cohesion errors in designs.
Keep well,
Henriette