Henriette's Notes


Using SHACL validation with Ontotext GraphDB

Today I have one of those moments where I am absolutely sure that if I do not write this down, I will forget how to do this next time. For one of the projects I am working on, we need to do SHACL validation of RDF data that will be stored in Ontotext GraphDB. Here are the ten things I needed to learn while doing this. Some of these are rather obvious, but some were less than obvious to me.

Number 1: To be able to do SHACL validation, your repository needs to be configured for SHACL when you create your repository. This cannot be done after the fact.

Number 2: It seems to be better to import your ontology (or ontologies) and data into different graphs. This is useful when you want to re-import your ontology (or ontologies) or your data, because then you can replace a specific named graph completely. This was very useful for me while prototyping. Screenshot below:
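For example, re-importing the ontology then amounts to replacing a single named graph, which can also be done with a SPARQL update (the graph IRI and triple below are hypothetical placeholders):

```sparql
# Replace the contents of the (hypothetical) ontology graph
DROP SILENT GRAPH <http://example.org/graph/ontology> ;
INSERT DATA {
  GRAPH <http://example.org/graph/ontology> {
    <http://example.org/Bakery> a <http://www.w3.org/2002/07/owl#Class> .
  }
}
```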

Number 3: SHACL shapes are imported into this named graph

http://rdf4j.org/schema/rdf4j#SHACLShapeGraph

by default. At configuration time you can provide a different named graph or graphs for your SHACL shapes.

Number 4: To find the named graphs in your repository, you can do the following SPARQL query:

select distinct ?g 
where {
  graph ?g {?s ?p ?o }
}

You can then query a specific named graph as follows:

select * 
from <myNamedGraph>
where { 
	?s ?p ?o .
}

Number 5: However, getting the named graphs does not return the SHACL named graph. On Stack Overflow someone suggested that the SHACL shapes can be retrieved using:

http://address:7200/repositories/myRepo/rdf-graphs/service?graph=http://rdf4j.org/schema/rdf4j#SHACLShapeGraph

However, this did not work for me. Instead, the following code worked reliably:

import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.impl.LinkedHashModel;
import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;
import org.eclipse.rdf4j.rio.WriterConfig;
import org.eclipse.rdf4j.rio.helpers.BasicWriterSettings;

import java.util.stream.Collectors;

public class RetrieveShaclShapes {
    public static void main(String[] args) {
        String address = args[0]; /* e.g. http://localhost/ */
        String repositoryName = args[1]; /* e.g. myRepo */

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            // Collect all statements in the SHACL shape graph into a model
            Model statementsCollector = new LinkedHashModel(
                    connection.getStatements(null, null, null, RDF4J.SHACL_SHAPE_GRAPH)
                            .stream()
                            .collect(Collectors.toList()));
            // Write the shapes to standard out as Turtle
            Rio.write(statementsCollector, System.out, RDFFormat.TURTLE,
                    new WriterConfig().set(BasicWriterSettings.INLINE_BLANK_NODES, true));
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}

using the following dependency in the pom.xml, with ${rdf4j.version} set to 4.2.3:

    <dependency>
        <groupId>org.eclipse.rdf4j</groupId>
        <artifactId>rdf4j-client</artifactId>
        <version>${rdf4j.version}</version>
        <type>pom</type>
    </dependency>

Number 6: Getting the above code to run was not obvious, since I opted to use a fat jar. I encountered an “org.eclipse.rdf4j.rio.UnsupportedRDFormatException: Did not recognise RDF format object” error. RDF4J uses the Java Service Provider Interface (SPI), which relies on files in the META-INF/services directory of the jar to register parser implementations. The maven-assembly-plugin I used to generate the fat jar causes different jars to overwrite each other's META-INF/services files, thereby losing registration information. The solution is to use the maven-shade-plugin, which merges META-INF/services files rather than overwriting them. In your pom you need to add the following to your plugins configuration:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.4.1</version>
        <executions>
          <execution>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>

You can avoid this problem by using the separate jars rather than a single fat jar.

Number 7: Importing a new shape into the SHACL shape graph will cause new shape information to be appended. It will not replace the existing graph even when you have both the

  • “Enable replacement of existing data” and
  • “I understand that data in the replaced graphs will be cleared before importing new data.”

options enabled as seen in the next screenshot:

To replace the SHACL named graph you need to clear it explicitly by running the following SPARQL command:

clear graph <http://rdf4j.org/schema/rdf4j#SHACLShapeGraph>

I found it easier to update the SHACL shapes programmatically. Note that I made use of the default SHACL named graph:

import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;

import java.io.File;

public class UpdateShacl {
    public static void main(String[] args)  {
        String address = args[0]; /* e.g. http://localhost/ */
        String repositoryName = args[1]; /* e.g. myRepo */
        String shacl = args[2];
        File shaclFile = new File(shacl);

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            connection.clear(RDF4J.SHACL_SHAPE_GRAPH);
            connection.add(shaclFile, RDFFormat.TURTLE, RDF4J.SHACL_SHAPE_GRAPH);
            connection.commit();
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}

Number 8: Programmatically you can delete a named graph using the following code and the same Maven dependency as above:

import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.model.impl.SimpleValueFactory;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

public class ClearGraph {
    public static void main(String[] args)  {
        String address = args[0]; /* e.g. http://localhost/ */
        String repositoryName = args[1]; /* e.g. myRepo */
        String graph = args[2]; /* e.g. http://rdf4j.org/schema/rdf4j#SHACLShapeGraph */

        ValueFactory valueFactory = SimpleValueFactory.getInstance();
        IRI graphIRI = valueFactory.createIRI(graph);
        
        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            connection.clear(graphIRI);
            connection.commit();
        }
    }
}

Number 9: If you update the shape graph with constraints that are violated by your existing data, you will need to first fix your data before you can upload your new shape definition.
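As an illustration (the shape and class names below are hypothetical), uploading a shape like the following will be rejected if the repository already contains an ex:Person without an ex:name:

```turtle
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.org/> .

# Requires every ex:Person to have at least one ex:name
ex:PersonShape a sh:NodeShape ;
  sh:targetClass ex:Person ;
  sh:property [
    sh:path ex:name ;
    sh:minCount 1 ;
  ] .
```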

Number 10: When uploading SHACL shapes, unsupported features fail silently. I had the idea of adding human-readable information to the shape definition to make it easier for users to understand validation errors. Unfortunately, “sh:name” and “sh:description” are not supported by GraphDB versions 10.0.2 and 10.2.0, and the failure is silent: in the Workbench the shape will show as loaded successfully, as seen in the next screenshot:

However, in the logs I have noticed the following warnings:

Since these are logged as warnings, I expected my shape to have loaded fine, with only the triples pertaining to “sh:name” and “sh:description” skipped. However, my shape did not load at all.

You can find the list of supported SHACL features here.

Conclusion

This post may come across as critical of GraphDB, but that is not the intention. I think it is rather a case of the growing pains that are still experienced around SHACL (and ShEx, I suspect) adoption. Resources that have been helpful for me in resolving issues are:

Classification with SHACL Rules

In my previous post, Rule Execution with SHACL, we looked at how SHACL rules can be utilized to make inferences. In this post we consider a more complex situation where SHACL rules are used to classify baked goods as vegan friendly or gluten free, based on their ingredients.

Why use SHACL and not RDF/RDFS/OWL?

In my discussion I will concentrate only on the definition of vegan friendly baked goods, since the translation to gluten free baked goods is similar. Gluten free baked goods are included to give a more representative example.

Essentially what we need to do is look at a baked good and determine whether it includes non-vegan friendly ingredients. If it includes no non-vegan friendly ingredients, we want to conclude that it is a vegan friendly baked good. This kind of reasoning is called closed world reasoning: when a fact does not follow from the data, it is assumed to be false. SHACL uses closed world reasoning, which is why it is a good fit for this problem.

RDF/RDFS/OWL use open world reasoning, which means that when a fact does not follow from the data or schema, it cannot be derived that the fact is necessarily false. Rather, it is possible either (1) that the fact holds but is not captured in the data (or schema), or (2) that the fact does not hold. For this reason RDF/RDFS/OWL will only infer that a fact holds (or does not hold) if it is explicitly stated in the data or can be derived from a combination of data and schema information. Hence RDF/RDFS/OWL are not a good fit for this problem.

Baked Goods Data

Below are example baked goods RDF data:

Bakery RDF data

A couple of points are important w.r.t. the RDF data:

  1. Note that we define both VeganFriendly and NonVeganFriendly ingredients to be able to identify ingredients completely. Importantly we state that VeganFriendly and NonVeganFriendly are disjoint so that we cannot inadvertently state that an ingredient is both VeganFriendly and NonVeganFriendly.
  2. We state that AppleTartA to AppleTartD are of type BakedGood so that, when we specify our rules, we can state that the rules apply only to instances of type BakedGood.
  3. We enforce the domain and range for bakery:hasIngredient, which means that whenever we say bakery:a bakery:hasIngredient bakery:b, the reasoner can infer that bakery:a is of type bakery:BakedGood and bakery:b is of type bakery:Ingredient.
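Since the figure with the RDF data is not reproduced here, the following Turtle is a sketch consistent with the three points above; the exact names and ingredients in the original data may differ:

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix bakery: <http://example.org/bakery/> .

# Point 1: two disjoint kinds of ingredient
bakery:VeganFriendly rdfs:subClassOf bakery:Ingredient .
bakery:NonVeganFriendly rdfs:subClassOf bakery:Ingredient ;
  owl:disjointWith bakery:VeganFriendly .

# Point 3: domain and range of hasIngredient
bakery:hasIngredient rdfs:domain bakery:BakedGood ;
  rdfs:range bakery:Ingredient .

bakery:flour a bakery:VeganFriendly .
bakery:butter a bakery:NonVeganFriendly .

# Point 2: baked goods typed explicitly
bakery:AppleTartA a bakery:BakedGood ;
  bakery:hasIngredient bakery:flour .
bakery:AppleTartB a bakery:BakedGood ;
  bakery:hasIngredient bakery:flour , bakery:butter .
```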

Baked Good Rules

Now we define the shape of a baked good:

BakedGood shape

We state that bakery:BakedGood a rdfs:Class, which is important to be able to apply rules to instances of bakery:BakedGood. We also state that bakery:BakedGood a sh:NodeShape, which allows us to add shape and rule information to bakery:BakedGood. Note that our bakery:BakedGood shape states that a baked good has at least one bakery:hasIngredient property with range bakery:Ingredient.
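As the figure is not reproduced here, the shape just described can be sketched as follows (a reconstruction, not the original figure):

```turtle
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix bakery: <http://example.org/bakery/> .

# A class that is also a node shape, so rules can be attached to it
bakery:BakedGood a rdfs:Class , sh:NodeShape ;
  sh:property [
    sh:path bakery:hasIngredient ;
    sh:minCount 1 ;
    sh:class bakery:Ingredient ;
  ] .
```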

We now add a bakery:NonVeganFriendly shape

NonVeganFriendly shape

which we will use in the rule definition of bakery:BakedGood:

VeganBakedGood and NonVeganBakedGood rules

We add two rules, one for identifying a bakery:VeganBakedGood and one for a bakery:NonVeganBakedGood. Note that these rules are of type sh:TripleRule, which will infer the existence of a new triple if the rule is triggered. The first rule states that the subject of this triple is sh:this, which refers to instances of our bakery:BakedGood class. The predicate is rdf:type and the object is bakery:VeganBakedGood. So if this rule is triggered it will infer that an instance of bakery:BakedGood is also an instance of type bakery:VeganBakedGood.

Both rules have two conditions to which instances must adhere before the rules will trigger. According to the first condition, these rules apply only to instances of bakery:BakedGood. The second condition of the rule for bakery:VeganBakedGood checks the bakery:hasIngredient property against the bakery:NonVeganFriendly shape, which tests whether a value of bakery:hasIngredient is of type bakery:NonVeganFriendly. If such ingredients have a maximum count of 0, the rule infers that this instance of bakery:BakedGood is of type bakery:VeganBakedGood. The rule for bakery:NonVeganBakedGood also checks the bakery:hasIngredient property against the bakery:NonVeganFriendly shape, but with a minimum count of 1, in which case it infers that this instance is of type bakery:NonVeganBakedGood.
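Since the figures are not reproduced here, the NonVeganFriendly shape and the two rules can be sketched as follows; the exact names and the use of qualified value shapes are assumptions based on the description above:

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix bakery: <http://example.org/bakery/> .

# Shape matching ingredients of type NonVeganFriendly
bakery:NonVeganFriendlyShape a sh:NodeShape ;
  sh:class bakery:NonVeganFriendly .

bakery:BakedGood
  # Rule 1: no non-vegan ingredients => VeganBakedGood
  sh:rule [
    a sh:TripleRule ;
    sh:subject sh:this ;
    sh:predicate rdf:type ;
    sh:object bakery:VeganBakedGood ;
    sh:condition bakery:BakedGood ;
    sh:condition [
      sh:property [
        sh:path bakery:hasIngredient ;
        sh:qualifiedValueShape bakery:NonVeganFriendlyShape ;
        sh:qualifiedMaxCount 0 ;
      ] ;
    ] ;
  ] ;
  # Rule 2: at least one non-vegan ingredient => NonVeganBakedGood
  sh:rule [
    a sh:TripleRule ;
    sh:subject sh:this ;
    sh:predicate rdf:type ;
    sh:object bakery:NonVeganBakedGood ;
    sh:condition bakery:BakedGood ;
    sh:condition [
      sh:property [
        sh:path bakery:hasIngredient ;
        sh:qualifiedValueShape bakery:NonVeganFriendlyShape ;
        sh:qualifiedMinCount 1 ;
      ] ;
    ] ;
  ] .
```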

Jena SHACL Rule Execution Code

The Jena SHACL implementation provides command line scripts (/bin/shaclinfer.sh or /bin/shaclinfer.bat) which take a data file and a shape file as arguments and can be used to do rule execution. However, for this specific example you have to write your own Java code, because the scripts create a default model that has no reasoning support. In this section I provide the SHACL Jena code needed to do the classification of baked goods.

Shacl rule execution
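As the code figure is not reproduced here, the following is a sketch of such code using the TopBraid SHACL API for Jena (file paths are hypothetical); the important part is wrapping the data in an RDFS inferencing model, which the command line scripts do not do:

```java
import java.io.FileOutputStream;
import java.io.OutputStream;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFFormat;
import org.topbraid.shacl.rules.RuleUtil;

public class ShaclClassification {
    public static void main(String[] args) throws Exception {
        // Load data and shapes (hypothetical file paths)
        Model baseData = RDFDataMgr.loadModel("src/main/resources/bakery.ttl");
        Model shapes = RDFDataMgr.loadModel("src/main/resources/bakeryShapes.ttl");

        // Wrap the data in an RDFS inferencing model so that domain/range
        // inferences for bakery:hasIngredient are visible to the rules
        Model data = ModelFactory.createRDFSModel(baseData);

        // Execute the SHACL rules; the returned model holds the inferred triples
        Model inferences = RuleUtil.executeRules(data, shapes, null, null);

        // Write the inferred triples to inferences.ttl
        try (OutputStream out = new FileOutputStream("src/main/resources/inferences.ttl")) {
            RDFDataMgr.write(out, inferences, RDFFormat.TURTLE);
        }
    }
}
```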

Running the Code

Running the code will cause an inferences.ttl file to be written out to
$Project/src/main/resources/. It contains the following output:

Classification of baked goods

Conclusion

In this post I gave a brief overview of how SHACL can be used to do classification based on some property. This code example is available at shacl tutorial. This post was inspired by a question on Stack Overflow.

If you have any questions regarding SHACL or the semantic web, please leave a comment and I will try to help where I can.

Rule Execution with SHACL

In my previous post, Using Jena and SHACL to validate RDF Data, I looked at how RDF data can be validated using SHACL. A concern closely related to constraint checking is rule execution, for which SHACL can also be used.

A SHACL Rule Example

We will again use an example from the SHACL specification. Assume we have a file rectangles.ttl that contains the following data:

rectangles.ttl
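Since the image is not reproduced here, the following Turtle is a reconstruction based on the example in the SHACL Advanced Features specification and the discussion below; exact values may differ from the original:

```turtle
@prefix ex: <http://example.com/ns#> .

# Height equals width, so this one should be inferred to be a square
ex:ExampleRectangle
  a ex:Rectangle ;
  ex:height 4 ;
  ex:width 4 .

# A rectangle that is not a square
ex:NonSquareRectangle
  a ex:Rectangle ;
  ex:height 2 ;
  ex:width 3 .

# Missing ex:height and ex:width, so the rule condition does not apply
ex:InvalidRectangle
  a ex:Rectangle .
```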

Assuming we want to infer that when the height and width of a rectangle are equal, the rectangle represents a square, the following SHACL rule specification can be used (which we will store in rectangleRules.ttl):

rectangleRules.ttl
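As the image is not reproduced here, the rule specification can be reconstructed in the spirit of the SHACL Advanced Features specification example (a sketch, not the original figure):

```turtle
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sh: <http://www.w3.org/ns/shacl#> .
@prefix ex: <http://example.com/ns#> .

# The condition shape: a rectangle must have a height and a width
ex:Rectangle
  a rdfs:Class , sh:NodeShape ;
  sh:property [
    sh:path ex:height ;
    sh:minCount 1 ;
  ] ;
  sh:property [
    sh:path ex:width ;
    sh:minCount 1 ;
  ] ;
  # If height equals width, infer that the rectangle is a square
  sh:rule [
    a sh:TripleRule ;
    sh:subject sh:this ;
    sh:predicate rdf:type ;
    sh:object ex:Square ;
    sh:condition ex:Rectangle ;
    sh:condition [
      sh:property [
        sh:path ex:width ;
        sh:equals ex:height ;
      ] ;
    ] ;
  ] .
```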

A Code Example using Jena

Naturally you will need to add SHACL to your Maven pom dependencies. Then the following code will execute your SHACL rules:

SHACL rule execution using Jena
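Since the code figure is not reproduced here, a sketch of the rule execution using the TopBraid SHACL API for Jena could look like this (file paths are hypothetical):

```java
import java.io.FileOutputStream;
import java.io.OutputStream;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;
import org.apache.jena.riot.RDFFormat;
import org.topbraid.shacl.rules.RuleUtil;

public class ShaclRuleExecution {
    public static void main(String[] args) throws Exception {
        // Load the data and rule files (hypothetical file paths)
        Model data = RDFDataMgr.loadModel("src/main/resources/rectangles.ttl");
        Model shapes = RDFDataMgr.loadModel("src/main/resources/rectangleRules.ttl");

        // Execute the SHACL rules against the data
        Model inferences = RuleUtil.executeRules(data, shapes, null, null);

        // Write the inferred triples to inferences.ttl
        try (OutputStream out = new FileOutputStream("src/main/resources/inferences.ttl")) {
            RDFDataMgr.write(out, inferences, RDFFormat.TURTLE);
        }
    }
}
```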

Running the Code

Running the code will cause an inferences.ttl file to be written out to $Project/src/main/resources/. It contains the following output:

inference.ttl

Note that ex:InvalidRectangle has been ignored because it does not adhere to sh:condition ex:Rectangle, since it does not have ex:height and ex:width properties. Also, ex:NonSquareRectangle is a rectangle, not a square.

Conclusion

In this post I gave a brief overview of how SHACL can be used to implement rules on RDF data. This code example is available at shacl tutorial.