Henriette's Notes

Classification with SHACL Rules

In my previous post, Rule Execution with SHACL, we looked at how SHACL rules can be used to make inferences. In this post we consider a more complex situation in which SHACL rules are used to classify baked goods as vegan friendly or gluten free based on their ingredients.

Why use SHACL and not RDF/RDFS/OWL?

In my discussion I will concentrate only on the definition of vegan friendly baked goods, since the treatment of gluten free baked goods is similar. Gluten free baked goods are included merely to give a more representative example.

Essentially what we need to do is look at a baked good and determine whether it includes any non-vegan friendly ingredients. If it includes no non-vegan friendly ingredients, we want to assume that it is a vegan friendly baked good. This kind of reasoning is called closed world reasoning: when a fact does not follow from the data, it is assumed to be false. SHACL uses closed world reasoning, which is why it is a good fit for this problem.

RDF, RDFS and OWL use open world reasoning, which means that when a fact does not follow from the data or schema, a reasoner cannot conclude that the fact is false. Rather, it is possible either (1) that the fact holds but is not captured in the data (or schema), or (2) that the fact does not hold. Consequently, an RDF/RDFS/OWL reasoner will only infer that a fact holds (or does not hold) if it is explicitly stated in the data or can be derived from a combination of data and schema information. For this reason RDF/RDFS/OWL are not a good fit for this problem.

Baked Goods Data

Below is some example RDF data for our baked goods:


Bakery RDF data

A couple of points are important with respect to the RDF data:

  1. Note that we define both VeganFriendly and NonVeganFriendly ingredient classes so that every ingredient can be classified explicitly. Importantly, we state that VeganFriendly and NonVeganFriendly are disjoint so that we cannot inadvertently state that an ingredient is both VeganFriendly and NonVeganFriendly.
  2. We state that AppleTartA through AppleTartD are of type BakedGood so that when we specify our rules, we can state that the rules are applicable only to instances of type BakedGood.
  3. We declare the domain and range of bakery:hasIngredient, which means that whenever we state bakery:a bakery:hasIngredient bakery:b, a reasoner can infer that bakery:a is of type bakery:BakedGood and bakery:b is of type bakery:Ingredient.
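
A minimal Turtle sketch of such data is shown below; the bakery: namespace and the specific ingredients are illustrative assumptions and may differ from the data in the figure above:

  # Illustrative namespace; the actual data uses the namespace defined in the post.
  @prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
  @prefix owl:    <http://www.w3.org/2002/07/owl#> .
  @prefix bakery: <http://example.org/bakery#> .

  # (1) VeganFriendly and NonVeganFriendly are disjoint kinds of Ingredient.
  bakery:Ingredient       a rdfs:Class .
  bakery:VeganFriendly    a rdfs:Class ;
                          rdfs:subClassOf bakery:Ingredient ;
                          owl:disjointWith bakery:NonVeganFriendly .
  bakery:NonVeganFriendly a rdfs:Class ;
                          rdfs:subClassOf bakery:Ingredient .

  # (3) hasIngredient links baked goods to ingredients via its domain and range.
  bakery:hasIngredient a rdf:Property ;
                       rdfs:domain bakery:BakedGood ;
                       rdfs:range  bakery:Ingredient .

  # Illustrative ingredients.
  bakery:Flour  a bakery:VeganFriendly .
  bakery:Apple  a bakery:VeganFriendly .
  bakery:Butter a bakery:NonVeganFriendly .

  # (2) Baked goods AppleTartA through AppleTartD (only two shown here).
  bakery:AppleTartA a bakery:BakedGood ;
                    bakery:hasIngredient bakery:Flour , bakery:Apple .
  bakery:AppleTartB a bakery:BakedGood ;
                    bakery:hasIngredient bakery:Flour , bakery:Apple , bakery:Butter .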

Baked Good Rules

Now we define the shape of a baked good:


BakedGood shape

We state that bakery:BakedGood is an rdfs:Class, which is important to be able to apply rules to instances of bakery:BakedGood. We also state that bakery:BakedGood is a sh:NodeShape, which allows us to add shape and rule information to bakery:BakedGood. Note that our bakery:BakedGood shape states that a baked good has at least one bakery:hasIngredient property whose values are of type bakery:Ingredient.
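
A sketch of this shape declaration, using the same illustrative bakery: namespace, could look as follows (the shape in the figure above may differ in detail):

  @prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
  @prefix sh:     <http://www.w3.org/ns/shacl#> .
  @prefix bakery: <http://example.org/bakery#> .

  # BakedGood is both a class (so rules can target its instances)
  # and a node shape (so shape and rule information can be attached to it).
  bakery:BakedGood
    a rdfs:Class , sh:NodeShape ;
    sh:property [
      sh:path     bakery:hasIngredient ;
      sh:class    bakery:Ingredient ;
      sh:minCount 1 ;
    ] .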

We now add a bakery:NonVeganFriendly shape


NonVeganFriendly shape

which we will use in the rule definition of bakery:BakedGood:


VeganBakedGood and NonVeganBakedGood rules

We add two rules, one for identifying a bakery:VeganBakedGood and one for a bakery:NonVeganBakedGood. Note that these rules are of type sh:TripleRule, which will infer the existence of a new triple if the rule is triggered. The first rule states that the subject of this triple is sh:this, which refers to instances of our bakery:BakedGood class. The predicate is rdf:type and the object is bakery:VeganBakedGood. So if this rule is triggered it will infer that an instance of bakery:BakedGood is also an instance of type bakery:VeganBakedGood.

Both rules have two conditions to which instances must adhere before the rules will trigger. According to the first condition, the rules apply only to instances of bakery:BakedGood. The second condition of the bakery:VeganBakedGood rule counts the bakery:hasIngredient values that conform to the bakery:NonVeganFriendly shape: if this count has a maximum of 0, i.e. the baked good has no non-vegan friendly ingredients, the rule infers that this instance of bakery:BakedGood is also of type bakery:VeganBakedGood. The rule for bakery:NonVeganBakedGood performs the same check on bakery:hasIngredient values conforming to the bakery:NonVeganFriendly shape, but requires a minimum count of 1, in which case it infers that the instance is of type bakery:NonVeganBakedGood.
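
Sketched in Turtle, the bakery:NonVeganFriendly shape and the two rules attached to the bakery:BakedGood node shape could look roughly as follows. Here the counting of non-vegan ingredients is expressed with qualified value shapes, which is one way to realize the conditions described above; the original figures may use a slightly different formulation:

  @prefix rdf:    <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
  @prefix sh:     <http://www.w3.org/ns/shacl#> .
  @prefix bakery: <http://example.org/bakery#> .

  # The NonVeganFriendly class doubles as a node shape matching non-vegan ingredients.
  bakery:NonVeganFriendly
    a rdfs:Class , sh:NodeShape ;
    sh:class bakery:NonVeganFriendly .

  bakery:BakedGood
    a rdfs:Class , sh:NodeShape ;
    # Rule 1: no non-vegan ingredients => VeganBakedGood.
    sh:rule [
      a sh:TripleRule ;
      sh:subject   sh:this ;
      sh:predicate rdf:type ;
      sh:object    bakery:VeganBakedGood ;
      sh:condition bakery:BakedGood ;
      sh:condition [
        sh:property [
          sh:path bakery:hasIngredient ;
          sh:qualifiedValueShape bakery:NonVeganFriendly ;
          sh:qualifiedMaxCount 0 ;
        ] ;
      ] ;
    ] ;
    # Rule 2: at least one non-vegan ingredient => NonVeganBakedGood.
    sh:rule [
      a sh:TripleRule ;
      sh:subject   sh:this ;
      sh:predicate rdf:type ;
      sh:object    bakery:NonVeganBakedGood ;
      sh:condition bakery:BakedGood ;
      sh:condition [
        sh:property [
          sh:path bakery:hasIngredient ;
          sh:qualifiedValueShape bakery:NonVeganFriendly ;
          sh:qualifiedMinCount 1 ;
        ] ;
      ] ;
    ] .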

Jena SHACL Rule Execution Code

The Jena SHACL implementation provides command line scripts (/bin/shaclinfer.sh or /bin/shaclinfer.bat) that take a data file and a shape file as arguments and can be used to do rule execution. However, for this specific example you have to write your own Java code, because the scripts create a default model that has no reasoning support. In this section I provide the SHACL Jena code needed to do the classification of baked goods.


SHACL rule execution
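
As a rough sketch, the code could look like the following, assuming the TopBraid SHACL API (org.topbraid.shacl) on top of Jena, an OWL reasoner to supply the RDFS/OWL inferences discussed earlier, and illustrative file and class names:

  import org.apache.jena.rdf.model.InfModel;
  import org.apache.jena.rdf.model.Model;
  import org.apache.jena.rdf.model.ModelFactory;
  import org.apache.jena.reasoner.Reasoner;
  import org.apache.jena.reasoner.ReasonerRegistry;
  import org.apache.jena.riot.RDFDataMgr;
  import org.topbraid.shacl.rules.RuleUtil;

  import java.io.FileOutputStream;
  import java.io.OutputStream;

  public class BakeryClassification {
      public static void main(String[] args) throws Exception {
          // Illustrative locations and file names; adjust to your project layout.
          String dir = "src/main/resources/";

          Model dataModel  = RDFDataMgr.loadModel(dir + "bakery.ttl");
          Model shapeModel = RDFDataMgr.loadModel(dir + "bakeryShapes.ttl");

          // Wrap the data in an inferencing model so that rdfs:subClassOf and
          // rdfs:domain/rdfs:range information is taken into account.
          Reasoner reasoner = ReasonerRegistry.getOWLMicroReasoner();
          InfModel infModel = ModelFactory.createInfModel(reasoner, dataModel);

          // Execute the SHACL rules; the returned model contains the inferred triples.
          Model inferences = RuleUtil.executeRules(infModel, shapeModel, null, null);

          try (OutputStream out = new FileOutputStream(dir + "inferences.ttl")) {
              inferences.write(out, "TTL");
          }
      }
  }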

Running the Code

Running the code will cause an inferences.ttl file to be written out to
$Project/src/main/resources/. It contains the following output:


Classification of baked goods


In this post I gave a brief overview of how SHACL can be used to do classification based on some property. This code example is available at shacl tutorial. This post was inspired by a question on Stack Overflow.

If you have any questions regarding SHACL or the semantic web, please leave a comment and I will try to help where I can.


Rule Execution with SHACL

In my previous post, Using Jena and SHACL to validate RDF Data, I looked at how RDF data can be validated using SHACL. A closely related concern to constraint checking is rule execution, for which SHACL can also be used.

A SHACL Rule Example

We will again use an example from the SHACL specification. Assume we have a file rectangles.ttl that contains the following data:
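
Along the lines of the example in the SHACL Advanced Features specification, the data looks roughly like this (exact values may differ):

  @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix ex:  <http://example.com/ns#> .

  ex:InvalidRectangle
    rdf:type ex:Rectangle .

  ex:NonSquareRectangle
    rdf:type ex:Rectangle ;
    ex:height 2 ;
    ex:width  3 .

  ex:SquareRectangle
    rdf:type ex:Rectangle ;
    ex:height 4 ;
    ex:width  4 .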



Assuming we want to infer that when the height and width of a rectangle are equal, the rectangle represents a square, the following SHACL rule specification can be used (which we will store in rectangleRules.ttl):
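
A sketch of such a rule file, again following the SHACL Advanced Features example (the details may differ slightly from the original):

  @prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
  @prefix sh:   <http://www.w3.org/ns/shacl#> .
  @prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
  @prefix ex:   <http://example.com/ns#> .

  # A rectangle is expected to have exactly one integer height and width.
  ex:Rectangle
    a rdfs:Class , sh:NodeShape ;
    sh:property [ sh:path ex:height ; sh:datatype xsd:integer ; sh:minCount 1 ; sh:maxCount 1 ] ;
    sh:property [ sh:path ex:width  ; sh:datatype xsd:integer ; sh:minCount 1 ; sh:maxCount 1 ] .

  # If a rectangle's width equals its height, infer that it is also a square.
  ex:RectangleRulesShape
    a sh:NodeShape ;
    sh:targetClass ex:Rectangle ;
    sh:rule [
      a sh:TripleRule ;
      sh:subject   sh:this ;
      sh:predicate rdf:type ;
      sh:object    ex:Square ;
      sh:condition ex:Rectangle ;
      sh:condition [
        sh:property [
          sh:path   ex:width ;
          sh:equals ex:height ;
        ] ;
      ] ;
    ] .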



A Code Example using Jena

Naturally you will need to add SHACL to your Maven pom dependencies. Then the following code will execute your SHACL rules:


SHACL rule execution using Jena
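
A minimal sketch of such code, assuming the TopBraid SHACL API and the file names used above (the class name is illustrative):

  import org.apache.jena.rdf.model.Model;
  import org.apache.jena.riot.RDFDataMgr;
  import org.topbraid.shacl.rules.RuleUtil;

  import java.io.FileOutputStream;
  import java.io.OutputStream;

  public class RectangleRuleExecution {
      public static void main(String[] args) throws Exception {
          String dir = "src/main/resources/";

          Model dataModel  = RDFDataMgr.loadModel(dir + "rectangles.ttl");
          Model shapeModel = RDFDataMgr.loadModel(dir + "rectangleRules.ttl");

          // Execute the SHACL rules; the result contains only the inferred triples.
          Model inferences = RuleUtil.executeRules(dataModel, shapeModel, null, null);

          try (OutputStream out = new FileOutputStream(dir + "inferences.ttl")) {
              inferences.write(out, "TTL");
          }
      }
  }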

Running the Code

Running the code will cause an inferences.ttl file to be written out to $Project/src/main/resources/. It contains the following output:



Note that ex:InvalidRectangle has been ignored because it does not adhere to sh:condition ex:Rectangle, since it does not have ex:height and ex:width properties. Also, ex:NonSquareRectangle remains a rectangle, not a square.


In this post I gave a brief overview of how SHACL can be used to implement rules on RDF data. This code example is available at shacl tutorial.

Using Jena and SHACL to validate RDF Data

RDF enables users to capture data in a way that is intuitive to them. This means that data is often captured without conforming to any schema. It is often useful to know that an RDF dataset conforms to some (potentially partial) schema. This is where SHACL (Shapes Constraint Language), a W3C standard, comes into play. It is a language for describing and validating RDF graphs. In this post I will give a brief overview of how to use SHACL to validate RDF data using the Jena implementation of SHACL.

A SHACL Example

We will use an example from the SHACL specification. Assume we have a file person.ttl that contains the following data:


Example RDF data
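
The data, essentially the introductory example from the SHACL specification, looks roughly like this:

  @prefix ex:  <http://example.com/ns#> .
  @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

  ex:Alice
    a ex:Person ;
    ex:ssn "987-65-432A" .

  ex:Bob
    a ex:Person ;
    ex:ssn "123-45-6789" ;
    ex:ssn "124-35-6789" .

  ex:Calvin
    a ex:Person ;
    ex:birthDate "1971-07-07"^^xsd:date ;
    ex:worksFor ex:UntypedCompany .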

To validate this data we create a shape definition in personShape.ttl containing:


Person shape definition
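
The shape, again along the lines of the SHACL specification example:

  @prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
  @prefix sh:  <http://www.w3.org/ns/shacl#> .
  @prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
  @prefix ex:  <http://example.com/ns#> .

  ex:PersonShape
    a sh:NodeShape ;
    sh:targetClass ex:Person ;
    # ssn: at most one value, matching the SSN pattern.
    sh:property [
      sh:path ex:ssn ;
      sh:maxCount 1 ;
      sh:datatype xsd:string ;
      sh:pattern "^\\d{3}-\\d{2}-\\d{4}$" ;
    ] ;
    # worksFor: values must be IRIs that are instances of ex:Company.
    sh:property [
      sh:path ex:worksFor ;
      sh:class ex:Company ;
      sh:nodeKind sh:IRI ;
    ] ;
    # The shape is closed: only ex:ssn, ex:worksFor and rdf:type are allowed.
    sh:closed true ;
    sh:ignoredProperties ( rdf:type ) .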

A Code Example using Jena

To validate our RDF data using our SHACL shape we will use the Jena implementation of SHACL. Start by adding the SHACL dependency to your Maven pom.xml. Note that you do not need to add Jena as well, since the SHACL pom already includes Jena.


SHACL Maven dependency

In the code we will assume that the person.ttl and personShape.ttl files are in $Project/src/main/resources/. The code for doing the validation is then the following:


Java code using Jena implementation of SHACL
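
A minimal sketch of the validation code, assuming the TopBraid SHACL API's ValidationUtil and illustrative class and path names:

  import org.apache.jena.rdf.model.Model;
  import org.apache.jena.rdf.model.Resource;
  import org.apache.jena.riot.RDFDataMgr;
  import org.topbraid.shacl.validation.ValidationUtil;

  import java.io.FileOutputStream;
  import java.io.OutputStream;

  public class PersonValidation {
      public static void main(String[] args) throws Exception {
          String dir = "src/main/resources/";

          Model dataModel  = RDFDataMgr.loadModel(dir + "person.ttl");
          Model shapeModel = RDFDataMgr.loadModel(dir + "personShape.ttl");

          // Validate the data against the shapes; the result is the resource
          // of the sh:ValidationReport in a new model.
          Resource report = ValidationUtil.validateModel(dataModel, shapeModel, true);

          // Write the validation report to report.ttl.
          try (OutputStream out = new FileOutputStream(dir + "report.ttl")) {
              report.getModel().write(out, "TTL");
          }
      }
  }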

Running the Code

Running the code will cause a report.ttl file to be written out to $Project/src/main/resources/. We can determine that our data does not conform by checking the sh:conforms property. We have 4 violations of our ex:PersonShape:

  1. For ex:Alice the ex:ssn property does not conform to the pattern defined in the shape.
  2. ex:Bob has 2 ex:ssn properties.
  3. ex:Calvin works for a company that is not of type ex:Company.
  4. ex:Calvin has a property ex:birthDate that is not allowed by ex:PersonShape, since the shape is closed via sh:closed true.

A corrected version of our person data may look as follows:


Person data that conforms to our person shape


In this post I gave a brief overview of how SHACL can be used to validate RDF data using the Jena implementation of SHACL. This code example is available at shacl tutorial.

Why does the OWL Reasoner ignore my Constraint?

A most frustrating problem often encountered by people with experience in relational databases, when they are introduced to OWL ontologies, is that OWL ontology reasoners seem to ignore constraints. In this post I give examples of this problem, explain why it happens, and provide ways to deal with each example.

An Example

A typical example encountered in relational databases is that of modeling orders with orderlines, which can be modeled via Orders and Orderlines tables where the Orderlines table has a foreign key constraint to the Orders table. A related OWL ontology is given in Figure 1. As expected, it creates Order and Orderline classes with a hasOrder object property. That individuals of Orderline are necessarily associated with exactly one order is enforced by Orderline being a subclass of hasOrder exactly 1 owl:Thing.


Figure 1: Order ontology

Two Problems

Two frustrating and most surprising behaviors of the reasoner, given the Order ontology, are: (1) if an Orderline individual is created that is not associated with any Order individual, the reasoner will not report an inconsistency, and (2) if an Orderline individual is created that is associated with two or more Order individuals, the reasoner will also not report an inconsistency.

Missing Association Problem

Say we create an individual orderline123 of type Orderline that is not associated with an individual of type Order. In this case the reasoner will not report an inconsistency. The reason for this is the open world assumption. Informally it means that the only inferences a reasoner can make from an ontology are those based on information explicitly stated in the ontology, or those that can be derived from explicitly stated information.

When you state orderline123 is an Orderline, there is no explicit information in the ontology that states that orderline123 is not associated with an individual of Order via the hasOrder property. To make explicit that orderline123 is not in such a relation, you have to define orderline123 as in Figure 2. hasOrder max 0 owl:Thing states that it is known that orderline123 is not associated with an individual via the hasOrder property.


Figure 2: orderline123 is not in hasOrder association

Too Many Associated Individuals Problem

Assume we now change our definition of our orderline123 individual to be associated via hasOrder to two individuals of Order as shown in Figure 3. Again, most frustratingly the reasoner does not find that the ontology is inconsistent. The reason for this is that OWL does not make the unique name assumption. This means that individuals with different names can be assumed by the reasoner to represent a single individual. To force the reasoner to see order1 and order2 as necessarily different, you can state order1 is different from order2 by adding DifferentFrom:order2 to order1 (or similarly for order2).


Figure 3: orderline123 has two orders

Constraint Checking versus Deriving Inferences

The source of the problems described here is the difference between the purposes of a relational database and an OWL reasoner. The main purpose of a relational database is to enable viewing and editing of data in such a way that the integrity of the data is maintained. A relational database will ensure that the data adheres to the constraints of its schema, but it cannot make any claims beyond what is stated by the data it contains. The main purpose of an OWL reasoner is to derive inferences from statements and facts. As an example, from the statement Class: Dog SubClassOf: Animal and the fact Individual: pluto Type: Dog it can be derived that pluto is an Animal, even though the ontology nowhere states explicitly that pluto is an Animal.


Many newcomers to OWL ontologies get tripped up by the difference in purpose of relational databases and OWL ontologies. In this post I explained these pitfalls and how to deal with them.

If you have an ontology modeling problem, you are welcome to leave a comment detailing the problem.

Risk Based Testing

A question that is asked regularly in testing circles is: “When should you stop testing?”. With an infinite budget it may be reasonably easy to answer this question. When the budget is severely constrained, this question becomes more difficult to answer. An even more difficult situation to address is when the project starts out with a given budget, but during the project lifetime the budget gets significantly reduced (e.g. due to an economic downturn). This forces us to provide the highest level of quality for the least amount of money.


How Testing Fails

A mistake that is often made is that the testing effort is distributed equally across the system – both critical and less critical portions of the system are tested equally. This results in critical parts of the system not being tested sufficiently and less critical parts being tested to the point of diminishing returns.

A further mistaken mindset among developers is that there is no such thing as a useless test, which may entice developers to add tests for the sake of adding tests. In actual fact every test (unit, integration or system test) has to earn its place in the codebase. If there is no good motivation for a test, the test must be deleted from the codebase. Why is that? It adds to the volume of code that developers have to master to be productive members of the team. It adds to the volume of code that has to be maintained. As such, tests add to the overall cost of maintaining a system. Remember: the most cost effective code to maintain is the code that has never been written.


Risk Based Testing

Risk based testing is a testing approach that helps address these concerns. The main advantage of risk based testing is that it enables prioritization of the testing effort. At the heart of risk based testing is a set of criteria for evaluating risk. Some criteria for determining risk can be found in [1]. The criteria may differ depending on the project; ideally the set of criteria needs to be negotiated with the project owners. Another important aspect is to decide on the level at which risk will be determined. For example, risk may be determined at the business process, component or use case level.


Some general criteria are provided below:

  • How frequently is this use case used?
  • What will be the cost of getting this use case wrong?
  • What is the complexity of this use case?
  • How often will the use case be changed?


A value from 1 to 5 is assigned for each of these criteria and the product of the values is calculated; for example, a use case scored 4, 5, 3 and 2 against the four criteria above has a risk value of 4 × 5 × 3 × 2 = 120. This product provides a way of ranking the risk of each use case: a higher value indicates a use case with a higher risk.


Ideally determining risk should not be left to developers. Developers need to be guided by the project owner, the architect and the business analyst on their project.


Advantages/Disadvantages of Risk Based Testing

Advantages of this approach are:

  • The highest risk items can be developed and tested first which reduces the overall risk on the project.
  • If testing has to be watered down, a guideline exists for deciding what to test and what not to test.
  • At any given time an indication can be given of the risk of the project by considering the highest risk use cases that have not been tested.


A possible disadvantage of risk based testing is that not everything will be tested, but as stated earlier, if a test case cannot be motivated, it may be more cost effective not to have the test as part of the codebase.



Risk based testing provides a means via which one can estimate how well, or how poorly, a system has been tested.



Why TDD can be Dangerous to your Project

Some developers who are passionate about testing are sometimes also fanatical advocates of Test Driven Development (TDD), to the point that they believe TDD is the only correct way to test applications. Strict adherence to TDD requires a test to be created before any code is written. As such, proponents of TDD tend to see TDD as a design mechanism rather than as a test mechanism. Irrespective of whether TDD is used as a design and/or test mechanism, it has a number of pitfalls.

In this post I will:

  1. give a brief overview of the TDD cycle,
  2. explain why TDD as a software design mechanism is inadequate,
  3. explain why TDD as a software test mechanism is inadequate,
  4. explain the conditions under which TDD can be applied successfully to projects.


In TDD the steps to add new code are as follows [1]:

  1. Add a (failing) test that serves as a specification for the new functionality to be added.
  2. Run all the tests and confirm that the newly added test fails.
  3. Implement the new functionality.
  4. Run all tests to confirm that they all succeed.
  5. Refactor the code to remove any duplication.

The aim of TDD is to provide a simple way to grow the design of complex software one decision at a time.

TDD as a Design Mechanism

From a design perspective TDD emphasizes micro level design rather than macro level design [2]. The weaknesses of this approach are:

  • TDD forces a developer to focus on a single interface at a time. This often neglects the interaction between interfaces and leads to poor abstractions. Bertrand Meyer gives a balanced review of the challenges regarding TDD and Agile in this regard [3].
  • Quality attributes (like performance, scalability, integrability, security, etc.) are easily overlooked with TDD. In the context of Agile, where an upfront architecture effort is typically frowned upon, TDD is particularly dangerous due to poor consideration of quality attributes.

TDD as a Testing Mechanism

From a testing perspective TDD has the following drawbacks:

  • Interfaces/classes may be polluted in order to make them testable. A typical example is a private method that is made public in order to test it. This obfuscates the intended use of the class, which makes it easier for developers to digress from that intended use. In the long term this creates an unmaintainable system.
  • Often when TDD is used on projects, unit testing is used to the exclusion of everything else, with limited or no regard for the broader spectrum of testing, which should at least include integration testing and systems testing.
  • TDD has no appreciation for the prioritization of the testing effort: Equal amounts of effort are expended to test all code irrespective of the associated risk profile. Typically TDD expects all further development to be blocked until all tests pass. This ignores the reality that some functionality has greater business value than others.

Guidelines for using TDD Effectively on Projects

TDD can be used with no side effects if the following process is adhered to:

  1. The architecture has to be complete, which should include details as to how testability of the system at all levels (unit-, integration- and systems testing) will be achieved.
  2. The risk profile of the sub system (module/use case) has been established, which informs the testing effort that must be expended on the sub system.
  3. The design for the sub system (module/use case) should be complete in adherence to the architecture and risk profile. Since testability is considered as part of the architecture, and the design is informed by the architecture, the design should be by definition testable.
  4. Only at this point is the developer now free to follow a TDD approach in implementing the code.



The very positive thing that TDD emphasizes is the need for testing. However, to embrace TDD naively is to do so at the peril of your project.



  1. Kent Beck, Test-Driven Development by Example, The Addison-Wesley Signature Series, Addison-Wesley, 2003.
  2. Cédric Beust and Hani Suleiman, Next Generation Java Testing: TestNG and Advanced Concepts, Addison-Wesley, Upper Saddle River, NJ, 2008.
  3. Bertrand Meyer, Agile!: The Good, the Hype and the Ugly, Springer, 2014.

DBPedia Extraction Framework and Eclipse Quick Start

I recently tried to compile the DBPedia Extraction Framework. What was not immediately clear to me is whether I needed to have Scala installed. It turns out that having Scala installed natively is not necessary, since the scala-maven-plugin is sufficient.

The steps to compile DBPedia Extraction Framework from the command line are:

  1. Ensure you have the JDK 1.8.x installed.
  2. Ensure Maven 3.x is installed.
  3. mvn package

Steps to compile DBPedia Extraction Framework from the Scala IDE (which can be downloaded from Scala-ide.org) are:

  1. Ensure you have the JDK 1.8.x installed.
  2. Ensure you have the Scala IDE installed.
  3. mvn eclipse:eclipse
  4. mvn package
  5. Import existing Maven project into Scala IDE.
  6. Run mvn clean install from within the IDE.