Using SHACL validation with Ontotext GraphDB

Today I had one of those moments where I am absolutely sure that if I do not write this down, I will forget how to do it next time. For one of the projects I am working on, we need to do SHACL validation of RDF data stored in Ontotext GraphDB. Here are the ten things I needed to learn while doing this. Some of these are rather obvious, but some were less than obvious to me.

Number 1: To be able to do SHACL validation, your repository needs to be configured for SHACL at creation time. This cannot be enabled after the fact.
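
Repositories are usually created through the Workbench, but creation can also be done programmatically with RDF4J's RemoteRepositoryManager. The following is a minimal sketch, not the definitive way: the repo-config.ttl file name is my own, and the sketch assumes the supplied config file (for example, one based on GraphDB's repository config template) already contains the SHACL validation setting, since that is exactly what cannot be changed later:

import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.Resource;
import org.eclipse.rdf4j.model.util.Models;
import org.eclipse.rdf4j.model.vocabulary.RDF;
import org.eclipse.rdf4j.repository.config.RepositoryConfig;
import org.eclipse.rdf4j.repository.config.RepositoryConfigSchema;
import org.eclipse.rdf4j.repository.manager.RemoteRepositoryManager;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;

import java.io.FileInputStream;
import java.io.InputStream;

public class CreateShaclRepository {
    public static void main(String[] args) throws Exception {
        String serverUrl = args[0];  /* i.e. http://localhost:7200 */
        String configFile = args[1]; /* i.e. repo-config.ttl, with SHACL already enabled */

        // Parse the repository configuration; the SHACL setting must already be
        // present in this file, as it cannot be enabled after creation.
        Model config;
        try (InputStream in = new FileInputStream(configFile)) {
            config = Rio.parse(in, "", RDFFormat.TURTLE);
        }
        Resource repositoryNode = Models.subject(
                        config.filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY))
                .orElseThrow(() -> new IllegalArgumentException("No repository definition found"));

        RemoteRepositoryManager manager = new RemoteRepositoryManager(serverUrl);
        manager.init();
        try {
            manager.addRepositoryConfig(RepositoryConfig.create(config, repositoryNode));
        } finally {
            manager.shutDown();
        }
    }
}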

Number 2: It seems to be better to import your ontology (or ontologies) and your data into separate named graphs. This is useful when you want to re-import your ontology (or ontologies) or your data, because you can then completely replace a specific named graph. This was very useful for me while prototyping; a sketch of such an import is shown below.
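
This is a minimal sketch of what the import might look like programmatically; the graph IRIs and file names are hypothetical placeholders for your own:

import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.model.impl.SimpleValueFactory;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;

import java.io.File;

public class ImportIntoNamedGraphs {
    public static void main(String[] args) throws Exception {
        String address = args[0];        /* i.e. http://localhost/ */
        String repositoryName = args[1]; /* i.e. myRepo */

        ValueFactory vf = SimpleValueFactory.getInstance();
        // Hypothetical graph IRIs; use whatever naming scheme suits your project.
        IRI ontologyGraph = vf.createIRI("http://example.org/graph/ontology");
        IRI dataGraph = vf.createIRI("http://example.org/graph/data");

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            // Clearing each graph first makes the import a clean replacement.
            connection.clear(ontologyGraph);
            connection.add(new File("ontology.ttl"), RDFFormat.TURTLE, ontologyGraph);
            connection.clear(dataGraph);
            connection.add(new File("data.ttl"), RDFFormat.TURTLE, dataGraph);
            connection.commit();
        }
    }
}

The Workbench import dialog offers the same choice of target named graph, with an option to replace the graph's existing data.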

Number 3: SHACL shapes are imported into the following named graph

http://rdf4j.org/schema/rdf4j#SHACLShapeGraph

by default. At repository configuration time you can specify a different named graph (or graphs) for your SHACL shapes.

Number 4: To find the named graphs in your repository, you can run the following SPARQL query:

select distinct ?g 
where {
  graph ?g {?s ?p ?o }
}

You can then query a specific named graph as follows:

select * 
from <myNamedGraph>
where { 
	?s ?p ?o .
}
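
For the record, the same discovery query can also be run from Java with the RDF4J client. This is a minimal sketch (the class name is mine; note that connection.getContextIDs() would give the same information directly):

import org.eclipse.rdf4j.query.TupleQueryResult;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

public class ListNamedGraphs {
    public static void main(String[] args) {
        String address = args[0];        /* i.e. http://localhost/ */
        String repositoryName = args[1]; /* i.e. myRepo */

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            String query = "select distinct ?g where { graph ?g { ?s ?p ?o } }";
            // Evaluate the query and print each named graph IRI.
            try (TupleQueryResult result = connection.prepareTupleQuery(query).evaluate()) {
                while (result.hasNext()) {
                    System.out.println(result.next().getValue("g"));
                }
            }
        }
    }
}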

Number 5: However, querying for the named graphs does not return the SHACL named graph. On StackOverflow someone suggested that the SHACL shapes can be retrieved using:

http://address:7200/repositories/myRepo/rdf-graphs/service?graph=http://rdf4j.org/schema/rdf4j#SHACLShapeGraph

However, this did not work for me. Instead, the following code worked reliably:

import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.model.impl.LinkedHashModel;
import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;
import org.eclipse.rdf4j.rio.WriterConfig;
import org.eclipse.rdf4j.rio.helpers.BasicWriterSettings;

import java.util.stream.Collectors;

public class RetrieveShaclShapes {
    public static void main(String[] args) {
        String address = args[0];        /* i.e. http://localhost/ */
        String repositoryName = args[1]; /* i.e. myRepo */

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            // Collect all statements from the SHACL shape graph into a Model.
            Model statementsCollector = new LinkedHashModel(
                    connection.getStatements(null, null, null, RDF4J.SHACL_SHAPE_GRAPH)
                            .stream()
                            .collect(Collectors.toList()));
            // Write the shapes as Turtle, inlining blank nodes for readability.
            Rio.write(statementsCollector, System.out, RDFFormat.TURTLE, new WriterConfig().set(
                    BasicWriterSettings.INLINE_BLANK_NODES, true));
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}

using the following dependency in the pom.xml, with ${rdf4j.version} set to 4.2.3:
    <dependency>
        <groupId>org.eclipse.rdf4j</groupId>
        <artifactId>rdf4j-client</artifactId>
        <version>${rdf4j.version}</version>
        <type>pom</type>
    </dependency>

Number 6: Getting the above code to run was not obvious, since I opted to use a fat jar. I encountered an “org.eclipse.rdf4j.rio.UnsupportedRDFormatException: Did not recognise RDF format object” error. RDF4J uses the Java Service Provider Interface (SPI), which registers parser implementations through files in the META-INF/services directory of the jar. The maven-assembly-plugin I used to generate the fat jar causes different jars to overwrite each other's META-INF/services files, thereby losing registration information. The solution is to use the maven-shade-plugin, which merges META-INF/services files rather than overwriting them. In your pom you need to add the following to your plugins configuration:

      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-shade-plugin</artifactId>
        <version>3.4.1</version>
        <executions>
          <execution>
            <goals>
              <goal>shade</goal>
            </goals>
            <configuration>
              <transformers>
                <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
              </transformers>
            </configuration>
          </execution>
        </executions>
      </plugin>

You can avoid this problem by using the separate jars rather than a single fat jar.

Number 7: Importing a new shape into the SHACL shape graph causes the new shape information to be appended. It will not replace the existing graph, even when you have both the

  • “Enable replacement of existing data” and
  • “I understand that data in the replaced graphs will be cleared before importing new data.”

options enabled in the Workbench import dialog.

To replace the SHACL named graph you need to clear it explicitly by running the following SPARQL command:

clear graph <http://rdf4j.org/schema/rdf4j#SHACLShapeGraph>

I found it easier to update the SHACL shapes programmatically. Note that I made use of the default SHACL named graph:

import org.eclipse.rdf4j.model.vocabulary.RDF4J;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;

import java.io.File;

public class UpdateShacl {
    public static void main(String[] args)  {
        String address = args[0]; /* i.e. http://localhost/ */
        String repositoryName = args[1]; /* i.e. myRepo */
        String shacl = args[2];
        File shaclFile = new File(shacl);

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            // Replace the shapes atomically: clear the SHACL shape graph and
            // load the new shapes within a single transaction.
            connection.clear(RDF4J.SHACL_SHAPE_GRAPH);
            connection.add(shaclFile, RDFFormat.TURTLE, RDF4J.SHACL_SHAPE_GRAPH);
            connection.commit();
        } catch (Throwable t) {
            t.printStackTrace();
        }
    }
}

Number 8: Programmatically you can delete a named graph using the following code and the same Maven dependency as above:

import org.eclipse.rdf4j.model.IRI;
import org.eclipse.rdf4j.model.ValueFactory;
import org.eclipse.rdf4j.model.impl.SimpleValueFactory;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.http.HTTPRepository;

public class ClearGraph {
    public static void main(String[] args)  {
        String address = args[0]; /* i.e. http://localhost/ */
        String repositoryName = args[1]; /* i.e. myRepo */
        String graph = args[2]; /* i.e. http://rdf4j.org/schema/rdf4j#SHACLShapeGraph */

        ValueFactory valueFactory = SimpleValueFactory.getInstance();
        IRI graphIRI = valueFactory.createIRI(graph);
        
        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            connection.clear(graphIRI);
            connection.commit();
        }
    }
}

Number 9: If you update the shape graph with constraints that are violated by your existing data, you will need to first fix your data before you can upload your new shape definition.
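
To see why an update is rejected, you can catch the validation error when committing. This is a minimal sketch assuming RDF4J 4.x, where the exception cause may implement the ValidationException interface; when validating over HTTP, the report may only be available as text in the exception message:

import org.eclipse.rdf4j.common.exception.ValidationException;
import org.eclipse.rdf4j.model.Model;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.RepositoryException;
import org.eclipse.rdf4j.repository.http.HTTPRepository;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.Rio;

import java.io.File;

public class AddDataWithValidation {
    public static void main(String[] args) throws Exception {
        String address = args[0];        /* i.e. http://localhost/ */
        String repositoryName = args[1]; /* i.e. myRepo */
        String data = args[2];           /* Turtle file with instance data */

        HTTPRepository repository = new HTTPRepository(address, repositoryName);
        try (RepositoryConnection connection = repository.getConnection()) {
            connection.begin();
            try {
                connection.add(new File(data), RDFFormat.TURTLE);
                connection.commit();
            } catch (RepositoryException e) {
                connection.rollback();
                // With an embedded ShaclSail the cause implements ValidationException,
                // which exposes the validation report as a Model; over HTTP the
                // report may only be present as text in the exception message.
                if (e.getCause() instanceof ValidationException) {
                    Model report = ((ValidationException) e.getCause()).validationReportAsModel();
                    Rio.write(report, System.out, RDFFormat.TURTLE);
                } else {
                    System.err.println(e.getMessage());
                }
            }
        }
    }
}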

Number 10: When uploading SHACL shapes, unsupported features fail silently. I had the idea of adding human-readable information to the shape definitions to make it easier for users to understand validation errors. Unfortunately, “sh:name” and “sh:description” are not supported by GraphDB versions 10.0.2 and 10.2.0, and the failure is silent: the Workbench reports that the shapes loaded successfully.

However, the logs contained warnings about the unsupported predicates. As these were logged as mere warnings, I expected my shape to have loaded fine, except for the triples pertaining to “sh:name” and “sh:description” being skipped. Instead, my shape did not load at all.

You can find the list of supported SHACL features in the GraphDB documentation.

Conclusion

This post may come across as being critical of GraphDB; that is not the intention. I think it is rather a case of growing pains that are still being experienced around SHACL (and ShEx, I suspect) adoption. The resources that were most helpful to me in resolving these issues were the GraphDB and RDF4J documentation and StackOverflow.

Why I have no confidence in Oracle ADF

A recent Gartner report mentions that Oracle ADF has been plagued by frequent crashes, but that these have been addressed in Oracle Fusion Middleware 12c [1]. I find it alarming that these stability issues have only been addressed recently, considering that Oracle ADF has been in development for close to 10 years (the core of Oracle ADF is based on JSR 227, on which the Java Community Process first voted in 2003) [2]. In this post I will share my reasoning as to why I am still not overly optimistic about Oracle ADF.
JSR 227
Oracle ADF is an implementation of JSR 227, an attempt at providing standard binding between any frontend and any type of service. Data controls can be created, for instance, for JDBC, EJB and Web Services [3]. JSR 227 ignores the difference between transactional and non-transactional resources, and between data sources and services. These differences were highlighted in comments on JSR 227, with SAP warning that JSR 227 could violate the integrity of JEE [3].
The Java Community is betting against Oracle ADF
According to Shay Shmeltzer (group manager for Oracle JDeveloper [4]), Oracle has withdrawn JSR 227 “since the other members of the Java community process didn’t show interested in pursuing this approach further” [5]. In the initial vote on the JSR, IBM and BEA raised concerns about the complexity of JSR 227 and stated that they considered its scope too broad [3]. In recent writing, Paul Dorsey (of Dulcian, Inc.) describes Oracle ADF as highly complex and as a high-risk endeavor [6] [3].
Lack of Skills
Edwin Biemond, an Oracle ACE [15] and Oracle Magazine's Java Developer of the Year 2009 [7], states that there is a general lack of Oracle ADF skills. His sentiment is supported by other developers on the ADF Enterprise Methodology Group forum [8].
Learning Curve
The learning curve for Oracle ADF is very steep. Becoming productive (not expert!) in Oracle ADF takes 3-6 months, assuming the person is a skilled Java web developer [8] [9]. The implications of this are:

  • Any Oracle ADF project shorter than 6 months will be disproportionately expensive due to the ramp-up time required by the team.
  • Any Oracle ADF project shorter than 6 months carries an even higher risk due to the team not being allowed the time to gain the needed knowledge on Oracle ADF.
  • An Oracle ADF project has to be staffed with senior web developers.

Lack of in-depth documentation
The declarative nature of Oracle ADF causes the Oracle ADF developer guides to describe “the how” and not “the why”. According to Frank Nimphius (Principal Product Manager for Application Development Tools at Oracle Corporation [10]), Oracle does not plan on providing in-depth documentation; rather, it plans to compile a list of topics that will then be sourced out to the community [11]. Since the knowledge of the intricacies of Oracle ADF resides within Oracle, this is an approach that is doomed to failure. Indeed, the success of open source tools like Spring, Hibernate and JBoss is intimately linked to the availability of in-depth documentation supplied by their designers. In the absence of in-depth documentation, Oracle ADF developers are forced to trawl through the Oracle ADF source code [12]. Source code is a rather poor substitute for proper documentation, since without a design context the intent of the source code is obscured.
Substantial increase in availability of Oracle ADF skills and resources is highly unlikely
Progress on JSR 227 has been rather slow. The initial JCP vote on JSR 227 took place on 7 July 2003, with a first draft only becoming available on 11 December 2008. No further progress has been made on this JSR, and it has since been withdrawn [3]. Compare this with, for instance, the JSF JSR: its initial vote took place on 29 May 2001, with a final release available by 11 March 2004 [13].

This means that Oracle ADF has been in the making for at least 9 years. If it has been impossible to provide in-depth Oracle ADF documentation and to grow the Oracle ADF skills base in those 9 years, it is rather unlikely that this will miraculously change in the near future (the next 2 years).

Summary
The risk of an Oracle ADF project is disproportionately high due to its complexity, the lack of in-depth documentation and the absence of skills. This is unlikely to change in the near future.

Works Cited
[1] M. Driver, “Oracle Application Development Framework: Past, Present and Future,” Gartner, 2012.
[2] “JSR 227 Status,” 8 May 2011. [Online]. Available: http://groups.google.com/group/adf-methodology/browse_thread/thread/ca3e8c6776e3e2ef. [Accessed 4 August 2011].
[3] Java Community Process, “JSR-000227 A Standard Data Binding & Data Access Facility for J2EE™ Platform,” Java Community Process, 2008. [Online]. Available: http://jcp.org/aboutJava/communityprocess/edr/jsr227/index.html. [Accessed 2 Sept 2012].
[4] S. Shmeltzer, “Shay Shmeltzer,” [Online]. Available: http://shayshmeltzer.sys-con.com/. [Accessed 1 Sept 2012].
[5] “JSR 227 Status,” Google Groups: ADF Enterprise Methodology Group, [Online]. Available: https://groups.google.com/forum/?fromgroups=#!topic/adf-methodology/yj6MZ3bj4u8. [Accessed 1 Sept 2012].
[6] P. Dorsey, “How Will You Build Your Next System?,” DULCIAN, Inc, 2012. [Online]. Available: http://www.dulcian.com/papers/IOUG/2012/2012_IOUG_Dorsey_BuildNextSystem.pdf. [Accessed 2 Sept 2012].
[7] E. Biemond, “About Me,” [Online]. Available: http://biemond.blogspot.com/p/about-me.html. [Accessed 2 Sept 2012].
[8] E. Biemond, “Less xml in ADF and move forward to Java EE 6 in 12c,” Google Groups: ADF Enterprise Methodology Group, 2 Jan 2012. [Online]. Available: https://groups.google.com/forum/?fromgroups=#!topic/adf-methodology/_fDyNUsXUmo. [Accessed 2 Sept 2012].
[9] S. Davelaar, “How to become an Oracle ADF expert in one week (or in 1 day if you don’t have so much time),” JHeadstart Blog, 13 Sept 2011. [Online]. Available: https://blogs.oracle.com/jheadstart/entry/how_to_become_an_oracle. [Accessed 2 Sept 2012].
[10] F. Nimphius, “Frank Nimphius,” Sys-Con Media, [Online]. Available: http://franknimphius.sys-con.com/. [Accessed 2 Sept 2012].
[11] “ADF’s Learning Curve,” Google Groups: ADF Enterprise Methodology Group, 30 Jan 2009. [Online]. Available: https://groups.google.com/forum/#!msg/adf-methodology/KPt0Hf2yudo/TrEW70qkAT4J. [Accessed 2 Sept 2012].
[12] D. Mills, “The GroundBlog by Duncan Mills: Facelets and PanelDashboard Gotchya,” Oracle, 20 Jul 2012. [Online]. Available: https://blogs.oracle.com/groundside/entry/facelets_and_paneldashboard_gotchya. [Accessed 2 Sept 2012].
[13] Java Community Process, “JSR 127: JavaServer Faces,” Java Community Process, 2004. [Online]. Available: http://www.jcp.org/en/jsr/detail?id=127. [Accessed 2 Sept 2012].
[14] J. Kotamraju, Web Services for Java EE, version 1.3, Sun Microsystems, 2009.
[15] Oracle Corporation, “Oracle ACE Program – FAQ,” Oracle Corporation, 16 Jun 2011. [Online]. Available: http://www.oracle.com/technetwork/community/oracle-ace-faq-100746.html. [Accessed 2 Sept 2012].