Category Archives: Semantic technologies
Creating, Writing and Reading Jena TDB2 Datasets
Jena TDB2 can be used as an RDF datastore. Note that TDB (version 1 of Jena TDB) and TDB2 are not compatible with each other. TDB2 is transactional by design (while TDB is not). In this post I give a simple example that:
- creates a new Jena TDB2 dataset,
- creates a write transaction and writes data to the datastore,
- creates a read transaction and reads the data from the datastore, and
- releases the resources associated with the dataset once writing and reading are done.
Create TDB2 Dataset
To create a Jena TDB2 dataset, we use the TDB2Factory. Note that the class name is TDB2Factory and not TDBFactory, which is the TDB1 equivalent. We need to specify a directory where our dataset will be created. Multiple datasets cannot be written to the same directory.
Path path = Paths.get(".").toAbsolutePath().normalize();
String dbDir = path.toFile().getAbsolutePath() + "/db/";
Location location = Location.create(dbDir);
Dataset dataset = TDB2Factory.connectDataset(location);
Create WRITE Transaction and Write
dataset.begin(ReadWrite.WRITE);
UpdateRequest updateRequest = UpdateFactory.create(
    "INSERT DATA {<http://dbpedia.org/resource/Grace_Hopper> "
    + "<http://xmlns.com/foaf/0.1/name> \"Grace Hopper\" .}");
UpdateProcessor updateProcessor = UpdateExecutionFactory.create(updateRequest, dataset);
updateProcessor.execute();
dataset.commit();
Create READ Transaction and Read
dataset.begin(ReadWrite.READ);
QueryExecution qe = QueryExecutionFactory
    .create("SELECT ?s ?p ?o WHERE {?s ?p ?o .}", dataset);
for (ResultSet results = qe.execSelect(); results.hasNext();) {
    QuerySolution qs = results.next();
    String strValue = qs.get("?o").toString();
    logger.trace("value = " + strValue);
}
qe.close();
dataset.end(); // a read transaction is finished with end() rather than commit()
Release Dataset Resources and Run Application
The dataset resources can be released by calling close() on the dataset.
dataset.close();
Running the application will cause a /db directory to be created in the directory from which you run your application; it contains the various files that represent your dataset.
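Jena also ships a Txn helper (org.apache.jena.system.Txn) that begins, commits, and ends transactions for you, so you cannot forget commit() or end(). A minimal sketch of the same write and read using Txn, reusing the dataset (and logger) from above:

Txn.executeWrite(dataset, () -> {
    // the write transaction is begun and committed by Txn
    UpdateRequest update = UpdateFactory.create(
        "INSERT DATA {<http://dbpedia.org/resource/Grace_Hopper> "
        + "<http://xmlns.com/foaf/0.1/name> \"Grace Hopper\" .}");
    UpdateExecutionFactory.create(update, dataset).execute();
});

Txn.executeRead(dataset, () -> {
    // the read transaction is begun and ended by Txn
    try (QueryExecution qe = QueryExecutionFactory
            .create("SELECT ?s ?p ?o WHERE {?s ?p ?o .}", dataset)) {
        ResultSet results = qe.execSelect();
        while (results.hasNext()) {
            logger.trace("value = " + results.next().get("?o").toString());
        }
    }
});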
Conclusion
In this post I have given a simple example of creating a TDB2 dataset and writing to and reading from it. This code can be found on GitHub.
Creating a Remote Repository for GraphDB with RDF4J Programmatically
In my previous post I detailed how you can create a local Ontotext GraphDB repository using RDF4J. I indicated that there are some problems with creating a local repository; therefore, in this post I detail how to create a remote Ontotext GraphDB repository using RDF4J. As with creating a local repository, there are three steps:
- Create a configuration file, which is the same as for local repositories.
- Create a pom.xml file, which is the same as for local repositories.
- Create the Java code.
The benefit of creating a remote repository is that it will be under the control of the Ontotext GraphDB Workbench. Hence, you will be able to monitor your repository from the Workbench.
Java Code
package org.graphdb.rdf4j.tutorial;

import java.io.FileInputStream;
import java.io.InputStream;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.eclipse.rdf4j.model.Resource;
import org.eclipse.rdf4j.model.impl.TreeModel;
import org.eclipse.rdf4j.model.util.Models;
import org.eclipse.rdf4j.model.vocabulary.RDF;
import org.eclipse.rdf4j.repository.Repository;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.config.RepositoryConfig;
import org.eclipse.rdf4j.repository.config.RepositoryConfigSchema;
import org.eclipse.rdf4j.repository.manager.RepositoryManager;
import org.eclipse.rdf4j.repository.manager.RepositoryProvider;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.RDFParser;
import org.eclipse.rdf4j.rio.Rio;
import org.eclipse.rdf4j.rio.helpers.StatementCollector;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class CreateRemoteRepository {
    private static Logger logger = LoggerFactory.getLogger(CreateRemoteRepository.class);
    // Why This Failure marker
    private static final Marker WTF_MARKER = MarkerFactory.getMarker("WTF");

    public static void main(String[] args) {
        try {
            Path path = Paths.get(".").toAbsolutePath().normalize();
            String strRepositoryConfig = path.toFile().getAbsolutePath()
                    + "/src/main/resources/repo-defaults.ttl";
            String strServerUrl = "http://localhost:7200";

            // Instantiate a remote repository manager and initialize it
            RepositoryManager repositoryManager =
                    RepositoryProvider.getRepositoryManager(strServerUrl);
            repositoryManager.initialize();
            repositoryManager.getAllRepositories();

            // Instantiate a repository graph model
            TreeModel graph = new TreeModel();

            // Read the repository configuration file
            InputStream config = new FileInputStream(strRepositoryConfig);
            RDFParser rdfParser = Rio.createParser(RDFFormat.TURTLE);
            rdfParser.setRDFHandler(new StatementCollector(graph));
            rdfParser.parse(config, RepositoryConfigSchema.NAMESPACE);
            config.close();

            // Retrieve the repository node as a resource
            Resource repositoryNode = Models.subject(graph
                    .filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY))
                    .orElseThrow(() -> new RuntimeException(
                            "Oops, no <http://www.openrdf.org/config/repository#> subject found!"));

            // Create a repository configuration object and add it to the repositoryManager
            RepositoryConfig repositoryConfig = RepositoryConfig.create(graph, repositoryNode);
            repositoryManager.addRepositoryConfig(repositoryConfig);

            // Get the repository from the repository manager; note the repository id
            // set in the configuration .ttl file
            Repository repository = repositoryManager.getRepository("graphdb-repo");

            // Open a connection to this repository
            RepositoryConnection repositoryConnection = repository.getConnection();

            // ... use the repository

            // Shutdown connection, repository and manager
            repositoryConnection.close();
            repository.shutDown();
            repositoryManager.shutDown();
        } catch (Throwable t) {
            logger.error(WTF_MARKER, t.getMessage(), t);
        }
    }
}
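At the "// ... use the repository" placeholder you could, for example, add a statement and check the repository size. A small sketch (the Grace Hopper IRI is only an illustration; ValueFactory comes from org.eclipse.rdf4j.model):

// Add one statement and log how many statements the repository now holds
ValueFactory vf = repository.getValueFactory();
repositoryConnection.add(
        vf.createIRI("http://dbpedia.org/resource/Grace_Hopper"),
        RDF.TYPE,
        vf.createIRI("http://xmlns.com/foaf/0.1/Person"));
logger.info("repository size = " + repositoryConnection.size());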
Conclusion
In this post I detailed how you can create a remote repository for Ontotext GraphDB using RDF4J, as well as the benefit of creating a remote rather than a local repository. You can find the complete code of this example on GitHub.
Creating a Local Repository for GraphDB with RDF4J Programmatically
If you want to create a local repository for Ontotext GraphDB programmatically, according to the documentation there are essentially three steps:
- Create a configuration file.
- Create a pom.xml file.
- Write the Java code.
However, there are reasons why you may not want to do this, which I detail below.
Configuration File
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#>.
@prefix rep: <http://www.openrdf.org/config/repository#>.
@prefix sr: <http://www.openrdf.org/config/repository/sail#>.
@prefix sail: <http://www.openrdf.org/config/sail#>.
@prefix owlim: <http://www.ontotext.com/trree/owlim#>.

[] a rep:Repository ;
    rep:repositoryID "graphdb-repo" ;
    rdfs:label "graphdb-repo-label" ;
    rep:repositoryImpl [
        rep:repositoryType "graphdb:FreeSailRepository" ;
        rep:repositoryType "owlim:MonitorRepository" ;
        sr:sailImpl [
            sail:sailType "graphdb:FreeSail" ;
            owlim:base-URL "http://myexample.ontotext.com/graphdb#" ;
            owlim:defaultNS "" ;
            owlim:entity-index-size "10000000" ;
            owlim:entity-id-size "32" ;
            owlim:imports "" ;
            owlim:repository-type "file-repository" ;
            owlim:ruleset "owl-horst-optimized" ;
            owlim:storage-folder "storage" ;
            owlim:enable-context-index "true" ;
            owlim:cache-memory "256m" ;
            owlim:tuple-index-memory "224m" ;
            owlim:enablePredicateList "true" ;
            owlim:predicate-memory "32m" ;
            owlim:fts-memory "0" ;
            owlim:ftsIndexPolicy "never" ;
            owlim:ftsLiteralsOnly "true" ;
            owlim:in-memory-literal-properties "true" ;
            owlim:enable-literal-index "true" ;
            owlim:index-compression-ratio "-1" ;
            owlim:check-for-inconsistencies "false" ;
            owlim:disable-sameAs "false" ;
            owlim:enable-optimization "true" ;
            owlim:transaction-mode "safe" ;
            owlim:transaction-isolation "true" ;
            owlim:query-timeout "0" ;
            owlim:query-limit-results "0" ;
            owlim:throw-QueryEvaluationException-on-timeout "false" ;
            owlim:useShutdownHooks "true" ;
            owlim:read-only "false" ;
        ]
    ].
pom.xml File
<dependency>
    <groupId>com.ontotext.graphdb</groupId>
    <artifactId>graphdb-free-runtime</artifactId>
    <version>8.4.1</version>
</dependency>
Java Code
package org.graphdb.rdf4j.tutorial;

import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import java.nio.file.Path;
import java.nio.file.Paths;

import org.eclipse.rdf4j.model.Resource;
import org.eclipse.rdf4j.model.impl.TreeModel;
import org.eclipse.rdf4j.model.util.Models;
import org.eclipse.rdf4j.model.vocabulary.RDF;
import org.eclipse.rdf4j.repository.Repository;
import org.eclipse.rdf4j.repository.RepositoryConnection;
import org.eclipse.rdf4j.repository.config.RepositoryConfig;
import org.eclipse.rdf4j.repository.config.RepositoryConfigSchema;
import org.eclipse.rdf4j.repository.manager.LocalRepositoryManager;
import org.eclipse.rdf4j.repository.manager.RepositoryManager;
import org.eclipse.rdf4j.rio.RDFFormat;
import org.eclipse.rdf4j.rio.RDFParser;
import org.eclipse.rdf4j.rio.Rio;
import org.eclipse.rdf4j.rio.helpers.StatementCollector;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.slf4j.Marker;
import org.slf4j.MarkerFactory;

public class CreateLocalRepository {
    private static Logger logger = LoggerFactory.getLogger(CreateLocalRepository.class);
    // Why This Failure marker
    private static final Marker WTF_MARKER = MarkerFactory.getMarker("WTF");

    public static void main(String[] args) {
        try {
            Path path = Paths.get(".").toAbsolutePath().normalize();
            String strRepositoryConfig = path.toFile().getAbsolutePath()
                    + "/src/main/resources/repo-defaults.ttl";

            // Instantiate a local repository manager and initialize it
            RepositoryManager repositoryManager = new LocalRepositoryManager(new File("."));
            repositoryManager.initialize();

            // Instantiate a repository graph model
            TreeModel graph = new TreeModel();

            // Read the repository configuration file
            InputStream config = new FileInputStream(strRepositoryConfig);
            RDFParser rdfParser = Rio.createParser(RDFFormat.TURTLE);
            rdfParser.setRDFHandler(new StatementCollector(graph));
            rdfParser.parse(config, RepositoryConfigSchema.NAMESPACE);
            config.close();

            // Retrieve the repository node as a resource
            Resource repositoryNode = Models.subject(graph
                    .filter(null, RDF.TYPE, RepositoryConfigSchema.REPOSITORY))
                    .orElseThrow(() -> new RuntimeException(
                            "Oops, no <http://www.openrdf.org/config/repository#> subject found!"));

            // Create a repository configuration object and add it to the repositoryManager
            RepositoryConfig repositoryConfig = RepositoryConfig.create(graph, repositoryNode);
            repositoryManager.addRepositoryConfig(repositoryConfig);

            // Get the repository from the repository manager; note the repository id
            // set in the configuration .ttl file
            Repository repository = repositoryManager.getRepository("graphdb-repo");

            // Open a connection to this repository
            RepositoryConnection repositoryConnection = repository.getConnection();

            // ... use the repository

            // Shutdown connection, repository and manager
            repositoryConnection.close();
            repository.shutDown();
            repositoryManager.shutDown();
        } catch (Throwable t) {
            logger.error(WTF_MARKER, t.getMessage(), t);
        }
    }
}
Why you may not want to do this
new LocalRepositoryManager(new File(".")) will create a repository wherever your Java application is running from. This means the repository will not be under the control of your Ontotext GraphDB Workbench; hence, you will not be able to run SPARQL queries or monitor your database from the Workbench. I am not aware of any way to instruct GraphDB to look for repositories in an additional directory.
If you change the directory to $GRAPHDB_INSTALL$/data/repositories, the repository will be under the control of Ontotext GraphDB (assuming you have a local GraphDB installation), but only if GraphDB is not running while you create it. If you start GraphDB after running your program, you will be able to see the repository in the GraphDB Workbench.
Conclusion
In this post I have detailed how you can create a local Ontotext GraphDB repository using RDF4J, and why you may not want to do this. In my next post I detail how to create a remote repository, which addresses the problem detailed here. You can find the complete code of this example on GitHub.
Classification with SHACL Rules
In my previous post, Rule Execution with SHACL, we looked at how SHACL rules can be used to make inferences. In this post we consider a more complex situation, where SHACL rules are used to classify baked goods as vegan friendly or gluten free based on their ingredients.
Why use SHACL and not RDF/RDFS/OWL?
In my discussion I will concentrate only on the definition of vegan friendly baked goods, since the treatment of gluten free baked goods is similar; gluten free baked goods are included merely to make the example more representative.

Essentially what we need to do is look at a baked good and determine whether it includes non-vegan friendly ingredients. If it includes no non-vegan friendly ingredients, we want to conclude that it is a vegan friendly baked good. This kind of reasoning uses what is called closed world reasoning: when a fact does not follow from the data, it is assumed to be false. SHACL uses closed world reasoning, which is why it is a good fit for this problem.

RDF/RDFS/OWL use open world reasoning, which means that when a fact does not follow from the data or schema, it cannot be derived that the fact is necessarily false. Rather, both remain possible: (1) the fact holds but is not captured in the data (or schema), or (2) the fact does not hold. For this reason RDF/RDFS/OWL will only infer that a fact holds (or does not hold) if it is explicitly stated in the data or can be derived from a combination of data and schema information. Hence RDF/RDFS/OWL are not a good fit for this problem.
Baked Goods Data
Below are example baked goods RDF data:
[Figure: Bakery RDF data]
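Since the original listing is shown as an image, here is a minimal Turtle sketch of data of this shape; the bakery namespace, the ingredient names, and which tarts have which ingredients are illustrative assumptions, not the post's exact data:

@prefix bakery: <http://example.org/bakery#> .   # assumed namespace
@prefix rdfs:   <http://www.w3.org/2000/01/rdf-schema#> .
@prefix owl:    <http://www.w3.org/2002/07/owl#> .

# Ingredients are fully identified as either vegan friendly or not,
# and the two classes are declared disjoint.
bakery:VeganFriendly    a rdfs:Class ; rdfs:subClassOf bakery:Ingredient ;
                        owl:disjointWith bakery:NonVeganFriendly .
bakery:NonVeganFriendly a rdfs:Class ; rdfs:subClassOf bakery:Ingredient .

# hasIngredient links baked goods to ingredients via its domain and range.
bakery:hasIngredient rdfs:domain bakery:BakedGood ;
                     rdfs:range  bakery:Ingredient .

bakery:Flour  a bakery:VeganFriendly .
bakery:Butter a bakery:NonVeganFriendly .

bakery:AppleTartA a bakery:BakedGood ;
    bakery:hasIngredient bakery:Flour .                 # vegan friendly
bakery:AppleTartB a bakery:BakedGood ;
    bakery:hasIngredient bakery:Flour , bakery:Butter . # not vegan friendly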
A couple of points are important w.r.t. the RDF data:
- Note that we define both VeganFriendly and NonVeganFriendly ingredients to be able to identify ingredients completely. Importantly, we state that VeganFriendly and NonVeganFriendly are disjoint, so that we cannot inadvertently state that an ingredient is both VeganFriendly and NonVeganFriendly.
- We state that AppleTartA–AppleTartD are of type BakedGood, so that when we specify our rules, we can state that the rules are applicable only to instances of type BakedGood.
- We enforce the domain and range for bakery:hasIngredient, which means that whenever we say bakery:a bakery:hasIngredient bakery:b, the reasoner can infer that bakery:a is of type bakery:BakedGood and bakery:b is of type bakery:Ingredient.
Baked Good Rules
Now we define the shape of a baked good:
[Figure: BakedGood shape]
We state that bakery:BakedGood a rdfs:Class, which is important to be able to apply rules to instances of bakery:BakedGood. We also state that bakery:BakedGood a sh:NodeShape, which allows us to add shape and rule information to bakery:BakedGood. Note that our bakery:BakedGood shape states that a baked good has at least one property called bakery:hasIngredient with range bakery:Ingredient.
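In Turtle, this shape declaration could look roughly as follows (a sketch; the image may use a different but equivalent spelling of the constraint):

bakery:BakedGood
    a rdfs:Class , sh:NodeShape ;
    sh:property [
        sh:path bakery:hasIngredient ;   # at least one ingredient ...
        sh:minCount 1 ;
        sh:class bakery:Ingredient ;     # ... and each value is an Ingredient
    ] .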
We now add a bakery:NonVeganFriendly shape,

[Figure: NonVeganFriendly shape]

which we will use in the rule definition of bakery:BakedGood:
[Figure: VeganBakedGood and NonVeganBakedGood rules]
We add two rules, one for identifying a bakery:VeganBakedGood and one for a bakery:NonVeganBakedGood. Note that these rules are of type sh:TripleRule, which will infer the existence of a new triple if the rule is triggered. The first rule states that the subject of this triple is sh:this, which refers to instances of our bakery:BakedGood class. The predicate is rdf:type and the object is bakery:VeganBakedGood. So if this rule is triggered, it will infer that an instance of bakery:BakedGood is also an instance of type bakery:VeganBakedGood.
Both rules have two conditions to which instances must adhere before the rules will trigger. According to the first condition, these rules apply only to instances of bakery:BakedGood. The second condition of the rule for bakery:VeganBakedGood checks for bakery:hasIngredient properties of the shape bakery:NonVeganFriendly; this ensures that the range of bakery:hasIngredient is of type bakery:NonVeganFriendly. If bakery:hasIngredient has a maximum count of 0 for such values, the rule infers that this instance of bakery:BakedGood is of type bakery:VeganBakedGood. The rule for bakery:NonVeganBakedGood also checks for bakery:hasIngredient properties of the shape bakery:NonVeganFriendly, but with a minimum count of 1, in which case it infers that the instance is of type bakery:NonVeganBakedGood.
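Based on the description above, the two rules might be encoded roughly as follows. This is a sketch: it uses SHACL qualified value shapes to count NonVeganFriendly ingredient values and inlines a sh:class test where the post references its separate bakery:NonVeganFriendly shape, so it may differ from the exact encoding shown in the images:

bakery:BakedGood
    sh:rule [
        a sh:TripleRule ;
        sh:subject   sh:this ;                  # the BakedGood instance itself
        sh:predicate rdf:type ;
        sh:object    bakery:VeganBakedGood ;
        sh:condition bakery:BakedGood ;         # only applies to baked goods
        sh:condition [                          # no NonVeganFriendly ingredients
            sh:property [
                sh:path bakery:hasIngredient ;
                sh:qualifiedValueShape [ sh:class bakery:NonVeganFriendly ] ;
                sh:qualifiedMaxCount 0 ;
            ] ;
        ] ;
    ] ;
    sh:rule [
        a sh:TripleRule ;
        sh:subject   sh:this ;
        sh:predicate rdf:type ;
        sh:object    bakery:NonVeganBakedGood ;
        sh:condition bakery:BakedGood ;
        sh:condition [                          # at least one NonVeganFriendly ingredient
            sh:property [
                sh:path bakery:hasIngredient ;
                sh:qualifiedValueShape [ sh:class bakery:NonVeganFriendly ] ;
                sh:qualifiedMinCount 1 ;
            ] ;
        ] ;
    ] .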
Jena SHACL Rule Execution Code
The Jena implementation of SHACL provides command line scripts (/bin/shaclinfer.sh or /bin/shaclinfer.bat) which take a data file and a shape file as arguments and can be used to do rule execution. However, for this specific example you have to write your own Java code, because the scripts create a default model that has no reasoning support. In this section I provide the SHACL Jena code needed to do the classification of baked goods.
[Figure: SHACL rule execution]
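The code itself is shown as an image in the original post; below is a minimal sketch of what such code could look like, assuming the TopQuadrant SHACL API (org.topbraid.shacl) and illustrative file names. The key point from the discussion above is wrapping the data in an RDFS-inferencing model, since the command line scripts use a default model without reasoning support:

import java.io.FileOutputStream;

import org.apache.jena.rdf.model.InfModel;
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.RDFDataMgr;
import org.topbraid.shacl.rules.RuleUtil;

public class BakeryClassification {
    public static void main(String[] args) throws Exception {
        // Illustrative paths; the actual file names are in the post's repository.
        Model plainData   = RDFDataMgr.loadModel("src/main/resources/bakeryData.ttl");
        Model shapesModel = RDFDataMgr.loadModel("src/main/resources/bakeryShapes.ttl");

        // An RDFS-backed model, unlike a plain default model, makes the
        // rdfs:domain/rdfs:range inferences for bakery:hasIngredient
        // available to the SHACL rules.
        InfModel dataModel = ModelFactory.createRDFSModel(plainData);

        // Execute the SHACL rules and write the inferred triples out.
        Model inferences = RuleUtil.executeRules(dataModel, shapesModel, null, null);
        inferences.write(new FileOutputStream("src/main/resources/inferences.ttl"), "TTL");
    }
}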
Running the Code
Running the code will cause an inferences.ttl file to be written out to $Project/src/main/resources/. It contains the following output:
[Figure: Classification of baked goods]
Conclusion
In this post I gave a brief overview of how SHACL can be used to do classification based on some property. This code example is available at shacl tutorial. This post was inspired by a question on Stack Overflow.
If you have any questions regarding SHACL or the semantic web, please leave a comment and I will try to help where I can.
Rule Execution with SHACL
In my previous post, Using Jena and SHACL to validate RDF Data, I looked at how RDF data can be validated using SHACL. A concern closely related to constraint checking is rule execution, for which SHACL can also be used.
A SHACL Rule Example
We will again use an example from the SHACL specification. Assume we have a file rectangles.ttl that contains the following data:
[Figure: rectangles.ttl]
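The data is shown as an image in the original; here is a sketch along the lines of the SHACL specification's rectangle example (the individual names match the discussion below, but the exact numeric values are assumptions):

@prefix ex:  <http://example.com/ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:InvalidRectangle   a ex:Rectangle .         # no height or width
ex:NonSquareRectangle a ex:Rectangle ;
    ex:height 2 ; ex:width 3 .                 # height != width
ex:SquareRectangle    a ex:Rectangle ;
    ex:height 4 ; ex:width 4 .                 # height == width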
Assuming we want to infer that when the height and width of a rectangle are equal, the rectangle represents a square, the following SHACL rule specification can be used (which we will store in rectangleRules.ttl):
[Figure: rectangleRules.ttl]
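Again a sketch, closely following the TripleRule example in the SHACL Advanced Features specification:

@prefix ex:   <http://example.com/ns#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix sh:   <http://www.w3.org/ns/shacl#> .

ex:Rectangle
    a rdfs:Class , sh:NodeShape ;
    sh:property [ sh:path ex:height ; sh:minCount 1 ] ;  # a rectangle needs a height ...
    sh:property [ sh:path ex:width  ; sh:minCount 1 ] ;  # ... and a width
    sh:rule [
        a sh:TripleRule ;
        sh:subject   sh:this ;        # the rectangle being considered
        sh:predicate rdf:type ;
        sh:object    ex:Square ;
        sh:condition ex:Rectangle ;   # only rectangles conforming to ex:Rectangle
        sh:condition [
            sh:property [
                sh:path   ex:width ;
                sh:equals ex:height ; # width equals height
            ] ;
        ] ;
    ] .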
A Code Example using Jena
Naturally you will need to add SHACL to your Maven pom dependencies. Then the following code will execute your SHACL rules:
[Figure: SHACL rule execution using Jena]
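A sketch of such code, assuming the TopQuadrant SHACL API (org.topbraid.shacl.rules.RuleUtil) and the resource paths used in this post:

import java.io.FileOutputStream;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.riot.RDFDataMgr;
import org.topbraid.shacl.rules.RuleUtil;

public class RectangleRules {
    public static void main(String[] args) throws Exception {
        Model dataModel   = RDFDataMgr.loadModel("src/main/resources/rectangles.ttl");
        Model shapesModel = RDFDataMgr.loadModel("src/main/resources/rectangleRules.ttl");

        // Execute the SHACL rules and write the inferred triples to inferences.ttl.
        Model inferences = RuleUtil.executeRules(dataModel, shapesModel, null, null);
        inferences.write(new FileOutputStream("src/main/resources/inferences.ttl"), "TTL");
    }
}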
Running the Code
Running the code will cause an inferences.ttl file to be written out to $Project/src/main/resources/. It contains the following output:
[Figure: inferences.ttl]
Note that ex:InvalidRectangle has been ignored because it does not adhere to sh:condition ex:Rectangle, since it does not have ex:height and ex:width properties. Also, ex:NonSquareRectangle is a rectangle, not a square.
Conclusion
In this post I gave a brief overview of how SHACL can be used to implement rules on RDF data. This code example is available at shacl tutorial.
Using Jena and SHACL to validate RDF Data
RDF enables users to capture data in a way that is intuitive to them. This means that data is often captured without conforming to any schema. It is often useful to know that an RDF dataset conforms to some (potentially partial) schema. This is where SHACL (Shapes Constraint Language), a W3C standard, comes into play: it is a language for describing and validating RDF graphs. In this post I will give a brief overview of how to use SHACL to validate RDF data using the Jena implementation of SHACL.
A SHACL Example
We will use an example from the SHACL specification. Assume we have a file person.ttl that contains the following data:
[Figure: Example RDF data]
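A sketch along the lines of the SHACL specification's person example (the exact literals are assumptions, chosen to match the violations discussed below):

@prefix ex:  <http://example.com/ns#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:Alice a ex:Person ;
    ex:ssn "987-65-432A" .                    # malformed SSN

ex:Bob a ex:Person ;
    ex:ssn "123-45-6789" , "124-35-6789" .    # two SSNs

ex:Calvin a ex:Person ;
    ex:birthDate "1971-07-07"^^xsd:date ;     # property not allowed by the closed shape
    ex:worksFor ex:UntypedCompany .           # employer lacks rdf:type ex:Company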
To validate this data we create a shape definition in personShape.ttl containing:
[Figure: Person shape definition]
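A sketch of the shape, again following the specification's example:

@prefix ex:  <http://example.com/ns#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix sh:  <http://www.w3.org/ns/shacl#> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

ex:PersonShape
    a sh:NodeShape ;
    sh:targetClass ex:Person ;
    sh:property [                 # SSN: at most one, matching a fixed pattern
        sh:path ex:ssn ;
        sh:maxCount 1 ;
        sh:datatype xsd:string ;
        sh:pattern "^\\d{3}-\\d{2}-\\d{4}$" ;
    ] ;
    sh:property [                 # employer must be an IRI of type ex:Company
        sh:path ex:worksFor ;
        sh:class ex:Company ;
        sh:nodeKind sh:IRI ;
    ] ;
    sh:closed true ;
    sh:ignoredProperties ( rdf:type ) .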
A Code Example using Jena
To validate our RDF data using our SHACL shape we will use the Jena implementation of SHACL. Start by adding the SHACL dependency to your Maven pom.xml. Note that you do not need to add Jena separately, as the SHACL pom already includes Jena.
[Figure: SHACL Maven dependency]
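The dependency is shown as an image; it is presumably the TopQuadrant SHACL artifact, something like the following (the version in particular is an assumption):

<dependency>
    <groupId>org.topbraid</groupId>
    <artifactId>shacl</artifactId>
    <version>1.0.1</version>
</dependency>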
In the code we will assume the person.ttl and personShape.ttl files are in $Project/src/main/resources/. The code for doing the validation is then the following:
[Figure: Java code using Jena implementation of SHACL]
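A sketch of the validation code, assuming the TopQuadrant SHACL API (org.topbraid.shacl.validation.ValidationUtil) and the resource paths above:

import java.io.FileOutputStream;

import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.Resource;
import org.apache.jena.riot.RDFDataMgr;
import org.topbraid.shacl.validation.ValidationUtil;

public class ShaclValidation {
    public static void main(String[] args) throws Exception {
        Model dataModel  = RDFDataMgr.loadModel("src/main/resources/person.ttl");
        Model shapeModel = RDFDataMgr.loadModel("src/main/resources/personShape.ttl");

        // Validate the data against the shapes; the validation report is
        // returned as a resource in a fresh model, which we write to report.ttl.
        Resource report = ValidationUtil.validateModel(dataModel, shapeModel, true);
        report.getModel().write(new FileOutputStream("src/main/resources/report.ttl"), "TTL");
    }
}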
Running the Code
Running the code will cause a report.ttl file to be written out to $Project/src/main/resources/. We can determine that our data does not conform by checking the sh:conforms property. We have 4 violations of our ex:PersonShape:
- For ex:Alice the ex:ssn property does not conform to the pattern defined in the shape.
- ex:Bob has 2 ex:ssn properties.
- ex:Calvin works for a company that is not of type ex:Company.
- ex:Calvin has a property ex:birthDate that is not allowed by ex:PersonShape, since the shape is closed by sh:closed true.
A corrected version of our person data may look as follows:
[Figure: Person data that conforms to our person shape]
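Since that listing is also an image, here is a sketch of conforming data, mirroring the four violations above:

ex:Alice  a ex:Person ; ex:ssn "987-65-4321" .      # SSN now matches the pattern
ex:Bob    a ex:Person ; ex:ssn "123-45-6789" .      # only one SSN
ex:Calvin a ex:Person ; ex:worksFor ex:SomeCompany . # no disallowed birthDate
ex:SomeCompany a ex:Company .                        # employer is typed as a Company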
Conclusion
In this post I have given a brief overview of how SHACL can be used to validate RDF data using the Jena implementation of SHACL. This code example is available at shacl tutorial.
Why does the OWL Reasoner ignore my Constraint?
A most frustrating problem, often encountered by people with relational database experience when they are introduced to OWL ontologies, is that OWL reasoners seem to ignore constraints. In this post I give examples of this problem, explain why it happens, and provide ways to deal with each example.
An Example
A typical example encountered in relational databases is that of modeling orders with orderlines, which can be modeled via Orders and Orderlines tables where the Orderlines table has a foreign key constraint to the Orders table. A related OWL ontology is given in Figure 1. As expected, it creates Order and Orderline classes with a hasOrder object property. That individuals of Orderline are necessarily associated with one order is enforced by Orderline being a subclass of hasOrder exactly 1 owl:Thing.
[Figure 1: Order ontology]
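In Manchester syntax, the core of Figure 1 amounts roughly to this sketch:

ObjectProperty: hasOrder
Class: Order
Class: Orderline
    SubClassOf: hasOrder exactly 1 owl:Thing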
Two Problems
Two frustrating and most surprising errors, given the Order ontology, are: (1) if an Orderline individual is created for which no associated Order individual exists, the reasoner will not give an inconsistency, and (2) if an Orderline individual is created for which two or more Order individuals exist, the reasoner will also not give an inconsistency.
Missing Association Problem
Say we create an individual orderline123 of type Orderline which is not associated with an individual of type Order; in this case the reasoner will not give an inconsistency. The reason for this is the open world assumption. Informally it means that the only inferences a reasoner can make from an ontology are based on information explicitly stated in the ontology, or on what can be derived from explicitly stated information.

When you state that orderline123 is an Orderline, there is no explicit information in the ontology stating that orderline123 is not associated with an individual of Order via the hasOrder property. To make explicit that orderline123 is not in such a relation, you have to define orderline123 as in Figure 2. hasOrder max 0 owl:Thing states that it is known that orderline123 is not associated with any individual via the hasOrder property.
[Figure 2: orderline123 is not in hasOrder association]
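In Manchester syntax, the definition in Figure 2 corresponds roughly to:

Individual: orderline123
    Types: Orderline, hasOrder max 0 owl:Thing

Together with the Orderline subclass axiom (hasOrder exactly 1 owl:Thing), this closure now lets the reasoner detect the inconsistency.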
Too Many Associated Individuals Problem
Assume we now change the definition of our orderline123 individual so that it is associated via hasOrder with two individuals of Order, as shown in Figure 3. Again, most frustratingly, the reasoner does not find the ontology inconsistent. The reason for this is that OWL does not make the unique name assumption, which means that individuals with different names can be assumed by the reasoner to represent a single individual. To force the reasoner to see order1 and order2 as necessarily different, you can state that order1 is different from order2 by adding DifferentFrom: order2 to order1 (or similarly for order2).
[Figure 3: orderline123 has two orders]
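Figure 3 in Manchester syntax, with the suggested DifferentFrom assertion added, corresponds roughly to:

Individual: orderline123
    Types: Orderline
    Facts: hasOrder order1, hasOrder order2
Individual: order1
    Types: Order
    DifferentFrom: order2
Individual: order2
    Types: Order

Without the DifferentFrom assertion the reasoner may treat order1 and order2 as one individual; with it, orderline123 has two distinct orders, violating hasOrder exactly 1 owl:Thing.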
Constraint Checking versus Deriving Inferences
The source of the problems described here is the difference between the purposes of a relational database and an OWL reasoner. The main purpose of a relational database is to enable viewing and editing of data in such a way that the integrity of the data is maintained. A relational database will ensure that the data adheres to the constraints of its schema, but it cannot make any claims beyond what is stated by the data it contains. The main purpose of an OWL reasoner is to derive inferences from statements and facts. As an example, from the statement Class: Dog SubClassOf: Animal and the fact Individual: pluto Type: Dog it can be derived that pluto is an Animal, even though the ontology nowhere states explicitly that pluto is an Animal.
Conclusion
Many newcomers to OWL ontologies get tripped up by the difference in purpose of relational databases and OWL ontologies. In this post I explained these pitfalls and how to deal with them.
If you have an ontology modeling problem, you are welcome to leave a comment detailing the problem.