Using SHACL validation with Ontotext GraphDB
Today I had one of those moments where I am absolutely sure that if I do not write this down, I will forget how to do it next time. For one of the projects I am working on, we need to do SHACL validation of RDF data that will be stored in Ontotext GraphDB. Here are the ten things I needed to learn in doing this. Some of these are rather obvious, but some were less than obvious to me.
Number 1: To be able to do SHACL validation, your repository needs to be configured for SHACL when you create it. This cannot be done after the fact.
Number 2: It seems to be better to import your ontology (or ontologies) and your data into different named graphs. This is useful when you want to re-import your ontology (or ontologies) or your data, because you can then replace a specific named graph completely. This was very useful for me while prototyping. Screenshot below:

Number 3: SHACL shapes are imported into the named graph http://rdf4j.org/schema/rdf4j#SHACLShapeGraph by default. At configuration time you can provide a different named graph (or graphs) for your SHACL shapes.
Number 4: To find the named graphs in your repository, you can do the following SPARQL query:
SELECT DISTINCT ?g
WHERE {
  GRAPH ?g { ?s ?p ?o }
}
You can then query a specific named graph as follows:
SELECT *
FROM <myNamedGraph>
WHERE {
  ?s ?p ?o .
}
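For illustration, the named-graph query can also be run through the RDF4J API. The sketch below uses an in-memory RDF4J repository rather than GraphDB (the HTTPRepository used elsewhere in this post exposes the same RepositoryConnection interface); the graph names and triples are invented for the example.

```java
import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.query.BindingSet;
import org.eclipse.rdf4j.query.QueryLanguage;
import org.eclipse.rdf4j.query.TupleQueryResult;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.sail.SailRepository;
import org.eclipse.rdf4j.sail.memory.MemoryStore;

import java.util.ArrayList;
import java.util.List;

public class ListNamedGraphs {

    /* Returns the distinct named graphs found in a small in-memory repository. */
    static List<String> listNamedGraphs() {
        SailRepository repository = new SailRepository(new MemoryStore());
        List<String> graphs = new ArrayList<>();
        try (RepositoryConnection connection = repository.getConnection()) {
            ValueFactory vf = connection.getValueFactory();
            // Hypothetical graphs, mimicking the ontology/data split of Number 2.
            IRI ontologyGraph = vf.createIRI("http://example.org/ontology");
            IRI dataGraph = vf.createIRI("http://example.org/data");
            IRI s = vf.createIRI("http://example.org/s");
            IRI p = vf.createIRI("http://example.org/p");
            connection.add(s, p, vf.createLiteral("in ontology graph"), ontologyGraph);
            connection.add(s, p, vf.createLiteral("in data graph"), dataGraph);

            // The same query as in the text.
            String query = "SELECT DISTINCT ?g WHERE { GRAPH ?g { ?s ?p ?o } }";
            try (TupleQueryResult result =
                     connection.prepareTupleQuery(QueryLanguage.SPARQL, query).evaluate()) {
                for (BindingSet bindings : result) {
                    graphs.add(bindings.getValue("g").stringValue());
                }
            }
        } finally {
            repository.shutDown();
        }
        return graphs;
    }

    public static void main(String[] args) {
        listNamedGraphs().forEach(System.out::println);
    }
}
```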
Number 5: However, this query does not return the SHACL named graph. On StackOverflow someone suggested that the SHACL shapes can be retrieved using:
http://address:7200/repositories/myRepo/rdf-graphs/service?graph=http://rdf4j.org/schema/rdf4j#SHACLShapeGraph
However, this did not work for me. Instead, the following code worked reliably:
import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.impl.LinkedHashModel;
import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;
import org.eclipse.rdf4j.rio.WriterConfig;
import org.eclipse.rdf4j.rio.helpers.BasicWriterSettings;

import java.util.stream.Collectors;

public class RetrieveShaclShapes {
    public static void main(String[] args) {
        String address = args[0]; /* i.e. http://localhost/ */
        String repositoryName = args[1]; /* i.e. myRepo */

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            // Collect all statements in the SHACL shapes named graph.
            Model statementsCollector = new LinkedHashModel(
                    connection.getStatements(null, null, null, RDF4J.SHACL_SHAPE_GRAPH)
                            .stream()
                            .collect(Collectors.toList()));
            // Write the shapes to standard output as Turtle.
            Rio.write(statementsCollector, System.out, RDFFormat.TURTLE,
                    new WriterConfig().set(BasicWriterSettings.INLINE_BLANK_NODES, true));
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}
using the following dependency in the pom.xml, with ${rdf4j.version} set to 4.2.3:
<dependency>
    <groupId>org.eclipse.rdf4j</groupId>
    <artifactId>rdf4j-client</artifactId>
    <version>${rdf4j.version}</version>
    <type>pom</type>
</dependency>
Number 6: Getting the above code to run was not obvious, since I opted to use a fat jar. I encountered an “org.eclipse.rdf4j.rio.UnsupportedRDFormatException: Did not recognise RDF format object” error. RDF4J uses the Java Service Provider Interface (SPI), which relies on files in the META-INF/services directory of the jar to register parser implementations. The maven-assembly-plugin I used to generate the fat jar causes different jars to overwrite each other's META-INF/services files, thereby losing registration information. The solution is to use the maven-shade-plugin, which merges META-INF/services files rather than overwriting them. In your pom you need to add the following to your plugins configuration:
<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>3.4.1</version>
    <executions>
        <execution>
            <goals>
                <goal>shade</goal>
            </goals>
            <configuration>
                <transformers>
                    <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
                </transformers>
            </configuration>
        </execution>
    </executions>
</plugin>
You can avoid this problem by using the separate jars rather than a single fat jar.
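The SPI mechanism behind this error can be demonstrated with the JDK's own ServiceLoader. The sketch below is an illustration only: FileSystemProvider is simply a convenient service interface from the standard library, unrelated to RDF4J, but the discovery mechanism (reading registration entries such as those in META-INF/services) is the same one RDF4J uses for its parsers.

```java
import java.nio.file.spi.FileSystemProvider;
import java.util.ServiceLoader;

public class SpiDemo {

    /* Counts the FileSystemProvider implementations discoverable via SPI. */
    static long countProviders() {
        long count = 0;
        // ServiceLoader discovers implementations through service registration
        // entries (META-INF/services files, or module declarations). When a fat
        // jar overwrites these entries, implementations silently disappear from
        // this iteration - which is exactly what happened with the RDF4J parsers.
        for (FileSystemProvider provider : ServiceLoader.load(FileSystemProvider.class)) {
            System.out.println("Found provider: " + provider.getClass().getName());
            count++;
        }
        return count;
    }

    public static void main(String[] args) {
        System.out.println(countProviders() + " provider(s) found");
    }
}
```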
Number 7: Importing a new shape into the SHACL shape graph will append the new shape information. It will not replace the existing graph, even when you have both the
- “Enable replacement of existing data” and
- “I understand that data in the replaced graphs will be cleared before importing new data.”
options enabled, as seen in the next screenshot:

To replace the SHACL named graph you need to clear it explicitly by running the following SPARQL command:
CLEAR GRAPH <http://rdf4j.org/schema/rdf4j#SHACLShapeGraph>
For myself I found it easier to update the SHACL shapes programmatically. Note that I made use of the default SHACL named graph:
import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;

import java.io.File;

public class UpdateShacl {
    public static void main(String[] args) {
        String address = args[0]; /* i.e. http://localhost/ */
        String repositoryName = args[1]; /* i.e. myRepo */
        String shacl = args[2];
        File shaclFile = new File(shacl);

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            // Clear the SHACL named graph and load the new shapes in one transaction.
            connection.begin();
            connection.clear(RDF4J.SHACL_SHAPE_GRAPH);
            connection.add(shaclFile, RDFFormat.TURTLE, RDF4J.SHACL_SHAPE_GRAPH);
            connection.commit();
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}
Number 8: Programmatically you can clear a named graph using this code and the same maven dependency as above:
import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.model.impl.SimpleValueFactory;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

public class ClearGraph {
    public static void main(String[] args) {
        String address = args[0]; /* i.e. http://localhost/ */
        String repositoryName = args[1]; /* i.e. myRepo */
        String graph = args[2]; /* i.e. http://rdf4j.org/schema/rdf4j#SHACLShapeGraph */

        ValueFactory valueFactory = SimpleValueFactory.getInstance();
        IRI graphIRI = valueFactory.createIRI(graph);
        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            connection.clear(graphIRI);
            connection.commit();
        }
    }
}
Number 9: If you update the shape graph with constraints that are violated by your existing data, you will need to first fix your data before you can upload your new shape definition.
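To see what this looks like in practice, here is a sketch using RDF4J's own ShaclSail with an in-memory store. This is an assumption on my part: it demonstrates plain RDF4J validation rather than GraphDB itself (GraphDB's remote behaviour may differ in detail), and it additionally needs the org.eclipse.rdf4j:rdf4j-shacl artifact. The shape and data are invented for the example: committing data that violates the shapes graph is rejected, which is why conflicting data must be fixed before stricter shapes can take effect.

```java
import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.RepositoryException;
import org.eclipse.rdf4j.repository.sail.SailRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.sail.memory.MemoryStore;
import org.eclipse.rdf4j.sail.shacl.ShaclSail;

import java.io.IOException;
import java.io.StringReader;
import java.io.UncheckedIOException;

public class ValidationFailureDemo {

    /* Returns true when committing non-conforming data is rejected. */
    static boolean violatesShapes() {
        SailRepository repository = new SailRepository(new ShaclSail(new MemoryStore()));
        try (RepositoryConnection connection = repository.getConnection()) {
            // A hypothetical shape requiring ex:age to be an xsd:integer.
            String shapes =
                    "@prefix sh: <http://www.w3.org/ns/shacl#> .\n"
                  + "@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .\n"
                  + "@prefix ex: <http://example.org/> .\n"
                  + "ex:PersonShape a sh:NodeShape ;\n"
                  + "  sh:targetClass ex:Person ;\n"
                  + "  sh:property [ sh:path ex:age ; sh:datatype xsd:integer ] .\n";
            connection.begin();
            connection.add(new StringReader(shapes), "", RDFFormat.TURTLE,
                    RDF4J.SHACL_SHAPE_GRAPH);
            connection.commit();

            // Data that violates the shape: ex:age is a plain string.
            String data =
                    "@prefix ex: <http://example.org/> .\n"
                  + "ex:alice a ex:Person ; ex:age \"not a number\" .\n";
            connection.begin();
            connection.add(new StringReader(data), "", RDFFormat.TURTLE);
            try {
                connection.commit(); // validation runs on commit
                return false;
            } catch (RepositoryException e) {
                connection.rollback();
                return true;
            }
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        } finally {
            repository.shutDown();
        }
    }

    public static void main(String[] args) {
        System.out.println("Commit rejected by SHACL validation: " + violatesShapes());
    }
}
```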
Number 10: When uploading SHACL shapes, unsupported features fail silently. I had the idea to add human-readable information to the shape definition to make it easier for users to understand validation errors. Unfortunately, “sh:name” and “sh:description” are not supported by GraphDB versions 10.0.2 and 10.2.0. Moreover, the failure is silent: in the Workbench it will show that the shape loaded successfully, as seen in the next screenshot:

However, in the logs I noticed the following warnings:

As these are logged as warnings, I was expecting my shape to have loaded fine, with only the triples pertaining to “sh:name” and “sh:description” skipped. However, my shape did not load at all.
You can find the list of supported SHACL features here.
Conclusion
This post may come across as being critical of GraphDB. That is not the intention. I think it is rather a case of the growing pains that are still experienced around SHACL (and ShEx, I suspect) adoption. Resources that have been helpful for me in resolving issues are:
- GraphDB documentation, and
- RDF4J on which GraphDB is built.
scala-logging with log4j2
In this brief post I provide a minimal complete example that uses scala-logging with log4j2. scala-logging is a Scala logging library wrapping SLF4J.
Scala code
Here is a minimal complete Scala code example using scala-logging. Note that we use a companion object to define an instance of Logger that will be shared across all instances of SimpleLoggingTest.
package org.henrietteharmse.tutorial

import com.typesafe.scalalogging.Logger

class SimpleLoggingTest {
  SimpleLoggingTest.logger.trace(
    "Hello while instance of SimpleLoggingTest is created.")
}

object SimpleLoggingTest {
  private val logger = Logger[SimpleLoggingTest]

  def main(args: Array[String]): Unit = {
    logger.trace("Hello from SimpleLoggingTest companion object")
    val simpleLoggingTest = new SimpleLoggingTest
    try {
      throw new RuntimeException("Some error")
    } catch {
      case t: Throwable => logger.error(s"Error: ${t.getMessage}")
    }
  }
}
Log4j configuration
In this section we provide a minimal log4j2.xml configuration file. It defines a trace-level logger for our application code and an error-level logger for everything else. Logging for our code will be written to a file in ./logs/App.log, while errors originating from code other than our own are logged to the console.
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
  <Appenders>
    <Console name="Console" target="SYSTEM_OUT">
      <PatternLayout pattern="%markerSimpleName %-5p %C.%M():%L - %msg %ex{full}%n"/>
    </Console>
    <File name="Log" fileName="./logs/App.log">
      <PatternLayout>
        <Pattern>%markerSimpleName %-5p %C.%M():%L - %msg %ex{full}%n</Pattern>
      </PatternLayout>
    </File>
  </Appenders>
  <Loggers>
    <Logger name="org.henrietteharmse.tutorial" level="trace" additivity="false">
      <AppenderRef ref="Log"/>
    </Logger>
    <Root level="error">
      <AppenderRef ref="Console"/>
    </Root>
  </Loggers>
</Configuration>
SBT configuration
The build.sbt file for building this application is given below. Most importantly, it states the correct dependencies for using scala-logging in the libraryDependencies setting. To get debug information from log4j2, we set the system property log4j2.debug to true.
ThisBuild / scalaVersion := "2.13.2"
ThisBuild / organization := "org.henrietteharmse.tutorial"

val setLog4jDebug = sys.props("log4j2.debug") = "true"

lazy val root = (project in file("."))
  .settings(
    name := "scala-logging-with-log4j2",
    libraryDependencies ++= Seq(
      "com.typesafe.scala-logging" %% "scala-logging" % "3.9.2",
      "org.slf4j" % "slf4j-api" % "1.7.30",
      "org.apache.logging.log4j" % "log4j-slf4j-impl" % "2.13.3"
    ),
    scalacOptions ++= Seq("-deprecation")
  )
Conclusion
This post gives a minimal complete example for using scala-logging with log4j2 using SBT. The complete code example can be found here on GitHub.
DBPedia Extraction Framework and Eclipse Quick Start
I recently tried to compile the DBPedia Extraction Framework. What was not immediately clear to me is whether I had to have Scala installed. It turns out that having Scala installed natively is not necessary, since the scala-maven-plugin is sufficient.
The steps to compile the DBPedia Extraction Framework from the command line are:
- Ensure you have JDK 1.8.x installed.
- Ensure Maven 3.x is installed.
- Run mvn package.
Steps to compile the DBPedia Extraction Framework from the Scala IDE (which can be downloaded from Scala-ide.org) are:
- Ensure you have JDK 1.8.x installed.
- Ensure you have the Scala IDE installed.
- Run mvn eclipse:eclipse.
- Run mvn package.
- Import the existing Maven project into the Scala IDE.
- Run mvn clean install from within the IDE.