Lab: RDFS
==Topics==
* Simple RDFS statements/triples
* Basic RDFS programming in RDFlib
* Basic RDFS reasoning with OWL-RL


==Useful materials==
rdflib classes/interfaces and attributes:
* RDF (RDF.type)
* RDFS (RDFS.domain, RDFS.range, RDFS.subClassOf, RDFS.subPropertyOf)
* [https://docs.google.com/presentation/d/13fkzg7eM2pnKGqYlKpPMFIJwnOLKBkbT0A62s7OcnOs Lab Presentation of RDFS]


OWL-RL:
* [https://pypi.org/project/owlrl/ OWL-RL at PyPi]
* [https://owl-rl.readthedocs.io/en/latest/ OWL-RL Documentation]


OWL-RL classes/interfaces:
* RDFSClosure, RDFS_Semantics


==Tasks==
'''Task:'''
Install OWL-RL into your virtual environment:
pip install owlrl


'''Task:'''
We will use simple RDF statements from the Mueller investigation RDF graph you created in Exercise 1. Create a new rdflib graph and add triples to represent that:
* Rick Gates was charged with money laundering and tax evasion.


Use RDFS terms to add these rules as triples:
* When one thing is ''charged with'' another thing,
** the first thing is a person under investigation and
** the second thing is an offence.


To add triples, you can use either:
* simple ''graph.add((s, p, o))'' statements or
* ''INSERT DATA {...}'' SPARQL updates.  
If you use SPARQL updates, you can define a namespace dictionary like this:
from rdflib import Namespace
from rdflib.namespace import RDF, RDFS, FOAF

EX = Namespace('http://example.org#')
NS = {
    'ex': EX,
    'rdf': RDF,
    'rdfs': RDFS,
    'foaf': FOAF,
}
You can then give NS as an optional argument to graph.update() - or to graph.query() - like this:
g.update("""
    # when you provide an initNs-argument, you do not have
    # to define PREFIX-es as part of the update (or query)
    INSERT DATA {
        # the triples you want to add go here,
        # you can use the prefixes defined in the NS-dict
    }
""", initNs=NS)


'''Task:'''
* Write a SPARQL query that checks the RDF type(s) of Rick Gates in your RDF graph.
* Write a similar SPARQL query that checks the RDF type(s) of money laundering in your RDF graph.
* Write a small function that computes the ''RDFS closure'' on your graph.
* Re-run the SPARQL queries to check the types of Rick Gates and of money laundering again: have they changed?


You can compute the RDFS closure on a graph ''g'' like this:
import owlrl
owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)
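
A type-checking query could look roughly like this, assuming the ''NS'' dictionary and the example name ''ex:RickGates'' from above. Run it both before and after expanding the closure to see the difference.

# list the rdf:type(s) of Rick Gates
results = g.query("""
    SELECT ?type WHERE {
        ex:RickGates rdf:type ?type .
    }
""", initNs=NS)
for row in results:
    print(row[0])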


'''Task:'''
Use RDFS terms to add this rule as a triple:
* A person under investigation is a FOAF person.
* Like earlier, check the RDF types of Rick Gates before and after running RDFS reasoning. Do they change?
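
A sketch of this rule, reusing the example class name from above (''FOAF'' is rdflib's built-in FOAF namespace):

from rdflib.namespace import FOAF, RDFS

# every person under investigation is also a foaf:Person
g.add((EX.PersonUnderInvestigation, RDFS.subClassOf, FOAF.Person))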
 
'''Task:'''
Add in "plain RDF" as in Exercise 1:
* Paul Manafort was convicted for tax evasion.
 
Use RDFS terms to add these rules as triples:
* When one thing is ''convicted for'' another thing,
** the first thing is also ''charged with'' the second thing.
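
A possible sketch, again with example property names:

# plain RDF statement about Paul Manafort
g.add((EX.PaulManafort, EX.convictedFor, EX.TaxEvasion))

# RDFS rule: being convicted for something implies being charged with it
g.add((EX.convictedFor, RDFS.subPropertyOf, EX.chargedWith))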
 
''Note:'' we are dealing with a "timeless" graph here, which represents facts that have held at "some points in time", but not necessarily at the same time.
 
* What are the RDF types of Paul Manafort and of tax evasion before and after RDFS reasoning?
* Does the RDFS domain and range of the ''convicted for'' property change?
 
==If you have more time...==
'''Task:'''
* Create a Turtle file with all the RDF and RDFS triples from the earlier tasks (a ''graph.serialize()'' sketch follows below).
* Fire up GraphDB and create a new GraphDB Repository, ''this time with RDFS Ruleset'' for Inference and Validation.
* Load the graph from the Turtle file and go through each of the above queries to confirm that GraphDB has performed RDFS reasoning as you would expect.
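
One way to produce the Turtle file is ''graph.serialize()'' (the file name is just an example):

# write the asserted RDF and RDFS triples to a Turtle file
g.serialize(destination='lab_rdfs.ttl', format='turtle')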
 
You can list all the triples in the graph to see if anything has been added:
SELECT * WHERE { ?s ?p ?o }
 
'''Task:'''
* Create another GraphDB Repository, but with ''No inference''.
* Re-run the above tests and compare with the RDFS inference results.
