Using SHACL validation with Ontotext GraphDB

Today I have one of those moments where I am absolutely sure that if I do not write this down, I will forget how to do it next time. For one of the projects I am working on, we need to do SHACL validation of RDF data that will be stored in Ontotext GraphDB. Here are the ten things I needed to learn while doing this. Some of these are rather obvious, but some were less than obvious to me.

Number 1: To be able to do SHACL validation, your repository needs to be configured for SHACL when you create it. This cannot be done after the fact.

Number 2: It seems to be better to import your ontology (or ontologies) and your data into different named graphs. This is useful when you want to re-import your ontology (or ontologies) or your data, because you can then replace a specific named graph completely. This was very useful for me while prototyping. Screenshot below:
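
As an alternative to importing through the Workbench (as in the screenshot), here is a minimal RDF4J sketch of the same split. The file names, graph IRIs, and endpoint are illustrative assumptions:

import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.model.impl.SimpleValueFactory;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;

import java.io.File;

public class ImportIntoNamedGraphs {
    public static void main(String[] args) throws Exception {
        ValueFactory valueFactory = SimpleValueFactory.getInstance();
        // Illustrative graph IRIs: keep the ontology and the data separately replaceable.
        IRI ontologyGraph = valueFactory.createIRI("http://example.org/graph/ontology");
        IRI dataGraph = valueFactory.createIRI("http://example.org/graph/data");

        HTTPRepository repository = new HTTPRepository("http://localhost:7200/", "myRepo");
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            // On re-import, clear only the graph being replaced.
            connection.clear(ontologyGraph);
            connection.add(new File("ontology.ttl"), RDFFormat.TURTLE, ontologyGraph);
            connection.clear(dataGraph);
            connection.add(new File("data.ttl"), RDFFormat.TURTLE, dataGraph);
            connection.commit();
        }
    }
}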

Number 3: By default, SHACL shapes are imported into this named graph:

http://rdf4j.org/schema/rdf4j#SHACLShapeGraph

At configuration time you can provide a different named graph or graphs for your SHACL shapes.

Number 4: To find the named graphs in your repository, you can run the following SPARQL query:

select distinct ?g 
where {
  graph ?g {?s ?p ?o }
}

You can then query a specific named graph as follows:

select *
from <myNamedGraph>
where {
  ?s ?p ?o .
}

Number 5: However, the query above does not return the SHACL named graph. On StackOverflow someone suggested that the SHACL shapes can be retrieved using:

http://address:7200/repositories/myRepo/rdf-graphs/service?graph=http://rdf4j.org/schema/rdf4j#SHACLShapeGraph

However, this did not work for me. Instead, the following code worked reliably:

import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.impl.LinkedHashModel;
import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;
import org.eclipse.rdf4j.rio.WriterConfig;
import org.eclipse.rdf4j.rio.helpers.BasicWriterSettings;

import java.util.stream.Collectors;

public class RetrieveShaclShapes {
    public static void main(String[] args) {
        String address = args[0]; /* e.g. http://localhost/ */
        String repositoryName = args[1]; /* e.g. myRepo */

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            // Collect all statements from the (hidden) SHACL shapes graph into a model.
            Model statementsCollector = new LinkedHashModel(
                    connection.getStatements(null, null, null, RDF4J.SHACL_SHAPE_GRAPH)
                            .stream()
                            .collect(Collectors.toList()));
            // Write the shapes to stdout as Turtle, inlining blank nodes for readability.
            Rio.write(statementsCollector, System.out, RDFFormat.TURTLE,
                    new WriterConfig().set(BasicWriterSettings.INLINE_BLANK_NODES, true));
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}

using the following dependency in the pom.xml, with ${rdf4j.version} set to 4.2.3:

    <dependency>
        <groupId>org.eclipse.rdf4j</groupId>
        <artifactId>rdf4j-client</artifactId>
        <version>${rdf4j.version}</version>
        <type>pom</type>
    </dependency>

Number 6: Getting the above code to run was not obvious, since I opted to use a fat jar. I encountered an “org.eclipse.rdf4j.rio.UnsupportedRDFormatException: Did not recognise RDF format object” error. RDF4J uses the Java Service Provider Interface (SPI), which uses files in the META-INF/services directory of the jar to register parser implementations. The maven-assembly-plugin I used to generate the fat jar causes different jars to overwrite each other’s META-INF/services files, thereby losing registration information. The solution is to use the maven-shade-plugin, which merges META-INF/services files rather than overwriting them. In your pom you need to add the following to your plugins configuration:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.4.1</version>
        <executions>
          <execution>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>

You can avoid this problem altogether by using separate jars rather than a single fat jar.

Number 7: Importing a new shape into the SHACL shapes graph causes the new shape information to be appended. It will not replace the existing graph, even when you have both the

  • “Enable replacement of existing data” and
  • “I understand that data in the replaced graphs will be cleared before importing new data.”

options enabled, as seen in the next screenshot:

To replace the SHACL named graph you need to clear it explicitly by running the following SPARQL command:

clear graph <http://rdf4j.org/schema/rdf4j#SHACLShapeGraph>

I found it easier to update the SHACL shapes programmatically. Note that I made use of the default SHACL named graph:

import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;

import java.io.File;

public class UpdateShacl {
    public static void main(String[] args)  {
        String address = args[0]; /* e.g. http://localhost/ */
        String repositoryName = args[1]; /* e.g. myRepo */
        String shacl = args[2];
        File shaclFile = new File(shacl);

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            connection.clear(RDF4J.SHACL_SHAPE_GRAPH);
            connection.add(shaclFile, RDFFormat.TURTLE, RDF4J.SHACL_SHAPE_GRAPH);
            connection.commit();
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}
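
Assuming the class above is packaged into a runnable jar, an invocation could look like java -jar update-shacl.jar http://localhost:7200/ myRepo ./shapes.ttl, where the jar name, port, and file path are merely illustrative.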

Number 8: Programmatically you can delete a named graph using the following code, with the same Maven dependency we used above:

import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.model.impl.SimpleValueFactory;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

public class ClearGraph {
    public static void main(String[] args)  {
        String address = args[0]; /* e.g. http://localhost/ */
        String repositoryName = args[1]; /* e.g. myRepo */
        String graph = args[2]; /* e.g. http://rdf4j.org/schema/rdf4j#SHACLShapeGraph */

        ValueFactory valueFactory = SimpleValueFactory.getInstance();
        IRI graphIRI = valueFactory.createIRI(graph);
        
        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            connection.clear(graphIRI);
            connection.commit();
        }
    }
}

Number 9: If you update the shape graph with constraints that are violated by your existing data, you will need to fix your data first before you can upload your new shape definition.
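
In practice this surfaces as a failed commit when uploading the new shapes. Below is a minimal sketch of handling this, reusing the flow of UpdateShacl from Number 7; the endpoint, repository name, and file name are illustrative, and the exact exception details may differ between GraphDB versions:

import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.RepositoryException;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;

import java.io.File;

public class UpdateShaclWithRollback {
    public static void main(String[] args) throws Exception {
        HTTPRepository repository = new HTTPRepository("http://localhost:7200/", "myRepo");
        File shaclFile = new File("stricter-shapes.ttl");

        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            connection.clear(RDF4J.SHACL_SHAPE_GRAPH);
            connection.add(shaclFile, RDFFormat.TURTLE, RDF4J.SHACL_SHAPE_GRAPH);
            try {
                // Existing data is validated against the new shapes when the
                // transaction commits; violations cause the commit to fail.
                connection.commit();
            } catch (RepositoryException e) {
                connection.rollback();
                System.err.println("Shape upload rejected: " + e.getMessage());
            }
        }
    }
}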

Number 10: When uploading SHACL shapes, unsupported features fail silently. I had the idea of adding human-readable information to the shape definition to make it easier for users to understand validation errors. Unfortunately, “sh:name” and “sh:description” are not supported by GraphDB versions 10.0.2 and 10.2.0, and the upload fails silently. In the Workbench it will show that the shapes loaded successfully, as seen in the next screenshot:

However, in the logs I have noticed the following warnings:

As these are logged as warnings, I expected my shape to have loaded fine, with only the triples pertaining to “sh:name” and “sh:description” skipped. However, my shape did not load at all.

You can find the list of supported SHACL features here.

Conclusion

This post may come across as being critical of GraphDB, but that is not the intention. I think it is rather a case of growing pains that are still experienced around SHACL (and ShEx, I suspect) adoption. Resources that have been helpful for me in resolving issues are:

Introduction to ontology semantics and reasoning

I recently had the pleasure of presenting at the OntoSpot meeting at EBI to help my colleagues gain an intuitive understanding of ontology semantics and reasoning. In this talk I assume that you have a very basic understanding of what an ontology is, but I assume no previous knowledge of logic. I provide a number of examples and graphics to explain logic and description logic (DL) concepts.

Here I provide both the slides of this presentation and the link to the recording. If you have any questions or suggestions, please let me know in the comments. I have already received the very helpful suggestion of adding a reference for DL symbols, which I will do shortly.

Errata:

  1. In the section on propositional logic, I accidentally said “predicate logic” instead of “propositional logic”.
  2. At the end, while answering questions, I said “RFD” rather than “RDF”.

This video will also be made available at the OBO Academy.

The difference between Schema.org and OWL

In this blog post I describe some of the main differences between Schema.org vocabularies and OWL ontologies, the implications of these differences and the kind of steps you will need to take to translate Schema.org vocabularies to OWL ontologies.

Overall I keep this discussion at a high level. For in-depth reviews of the differences between Schema.org and OWL, I provide relevant links at the end of this post.

Key differences

There are two main differences between OWL and Schema.org.

  1. Intended purpose: The primary purpose of Schema.org is to enable sharing of structured data on the internet. The primary purpose of OWL is to enable sophisticated reasoning across the structure of your data.
  2. Difference in language: Due to the difference in purpose, there are substantial differences in language. The main difference is that OWL can be translated into precise mathematical logic axioms, which allows much richer inferences to be drawn. This is why OWL prefers rdfs:domain/rdfs:range over schema:domainIncludes/schema:rangeIncludes: rdfs:domain and rdfs:range have a precisely defined mathematical meaning, whereas schema:domainIncludes and schema:rangeIncludes do not.

What does this mean?

Using Schema.org you can draw some limited inferences. For example, a reasoner can determine that the SNOMED concept http://purl.bioontology.org/ontology/SNOMEDCT/116154003 is a schema:Patient, which is a schema:Person. But the language used by Schema.org is by itself not rich enough to detect inconsistencies. That is, there is no way to say that schema:Person is disjoint from schema:Product. This allows stating both myexample:john a schema:Person and myexample:john a schema:Product without a reasoner being able to detect the inconsistency. Using OWL it is possible to state that schema:Person and schema:Product are disjoint.
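
For illustration, here is a minimal RDF4J sketch (consistent with the code earlier in this post) that builds exactly this situation. The IRI for john and the https://schema.org/ namespace are assumptions, and actually detecting the inconsistency would require an OWL reasoner, which RDF4J does not provide:

import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.model.impl.LinkedHashModel;
import org.eclipse.rdf4j.model.impl.SimpleValueFactory;
import org.eclipse.rdf4j.model.vocabulary.OWL;
import org.eclipse.rdf4j.model.vocabulary.RDF;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;

public class DisjointnessExample {
    public static void main(String[] args) {
        ValueFactory valueFactory = SimpleValueFactory.getInstance();
        IRI person = valueFactory.createIRI("https://schema.org/Person");
        IRI product = valueFactory.createIRI("https://schema.org/Product");
        IRI john = valueFactory.createIRI("http://www.example.com/john");

        Model model = new LinkedHashModel();
        // The axiom Schema.org itself does not state:
        model.add(person, OWL.DISJOINTWITH, product);
        // The contradictory data: john is typed as both classes.
        model.add(john, RDF.TYPE, person);
        model.add(john, RDF.TYPE, product);

        // Feeding this model to an OWL reasoner would report an inconsistency.
        Rio.write(model, System.out, RDFFormat.TURTLE);
    }
}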

Does this mean you should prefer OWL to Schema.org? No, not if the intended purpose of your ontology is to share data. In that case it is best to use concepts from Schema.org and add the axioms that will provide the inferences you need. If reasoning is not your reason for wanting to use Schema.org/OWL, then just use Schema.org.

Can you translate Schema.org to OWL?

Strictly speaking, since RDF and RDFS are subsets of OWL, Schema.org is already an OWL definition, albeit one with limited reasoning capability. Any “translation” to OWL will therefore mean adding axioms to Schema.org to increase the inferences that can be drawn from Schema.org documents. It is a pity that Schema.org does not provide an OWL file with the additional axioms that would enable richer reasoning (the current link to the OWL file is dead). Such a translation would involve steps like the following:

  • Add rdfs:domain and rdfs:range restrictions rather than replacing schema:domainIncludes and schema:rangeIncludes. Replacing schema:domainIncludes and schema:rangeIncludes could result in search engines not finding information. (A sketch of this step follows after this list.)
  • Add owl:disjointWith and owl:disjointObjectProperties respectively for all classes and properties that do not share individuals.
  • The documentation of Schema.org gives the impression that classes have attributes, i.e., that schema:Person has an attribute schema:givenName. However, there is nothing in the definition of schema:Person that enforces that the schema:Person class must have a schema:givenName attribute. I describe here, here and here how to define “attributes” for classes in a way that can be used by OWL reasoners.
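
As a minimal sketch of the first bullet, again using RDF4J; the single property shown and the choice of rdfs:Literal as its range are illustrative assumptions, and a real translation would cover many more classes and properties:

import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.model.impl.LinkedHashModel;
import org.eclipse.rdf4j.model.impl.SimpleValueFactory;
import org.eclipse.rdf4j.model.vocabulary.RDFS;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;

public class SchemaOrgDomainRangeSketch {
    public static void main(String[] args) {
        ValueFactory valueFactory = SimpleValueFactory.getInstance();
        IRI person = valueFactory.createIRI("https://schema.org/Person");
        IRI givenName = valueFactory.createIRI("https://schema.org/givenName");

        Model axioms = new LinkedHashModel();
        // Added alongside schema:domainIncludes/schema:rangeIncludes,
        // which remain untouched so search engines keep finding the data.
        axioms.add(givenName, RDFS.DOMAIN, person);
        axioms.add(givenName, RDFS.RANGE, RDFS.LITERAL);

        Rio.write(axioms, System.out, RDFFormat.TURTLE);
    }
}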

Conclusion

Schema.org is mainly for sharing structured data on the Internet. OWL is used mainly to reason over structured data to determine inconsistencies in the schema.

For in-depth discussions on the differences between Schema.org and OWL I highly recommend reading the papers by Patel-Schneider and Hernich et al.