Lab: RDFS

==Topics==
* Simple RDFS statements/triples
* Basic RDFS programming in RDFlib
* Basic RDFS reasoning with OWL-RL
==Useful materials==
rdflib classes/interfaces and attributes:
* RDF (RDF.type)
* RDFS (RDFS.domain, RDFS.range, RDFS.subClassOf, RDFS.subPropertyOf)
* [https://docs.google.com/presentation/d/13fkzg7eM2pnKGqYlKpPMFIJwnOLKBkbT0A62s7OcnOs Lab Presentation of RDFS]
OWL-RL:
* [https://pypi.org/project/owlrl/ OWL-RL at PyPi]
* [https://owl-rl.readthedocs.io/en/latest/ OWL-RL Documentation]
OWL-RL classes/interfaces:
* RDFSClosure, RDFS_Semantics


==Tasks==

Task: Install OWL-RL into your virtual environment:

pip install owlrl

Task: We will use simple RDF statements from the Mueller investigation RDF graph you create in Exercise 1. Create a new rdflib graph and add triples to represent that:

  • Rick Gates was charged with money laundering and tax evasion.
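
A minimal sketch of this in rdflib, assuming illustrative IRIs such as ex:RickGates and ex:chargedWith in an example.org namespace (use the names from your own Exercise 1 graph):

from rdflib import Graph, Namespace

EX = Namespace('http://example.org#')

g = Graph()
g.bind('ex', EX)

# Rick Gates was charged with money laundering and tax evasion
g.add((EX.RickGates, EX.chargedWith, EX.MoneyLaundering))
g.add((EX.RickGates, EX.chargedWith, EX.TaxEvasion))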

Use RDFS terms to add these rules as triples:

  • When one thing is charged with another thing,
    • the first thing is a person under investigation and
    • the second thing is an offence.
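
In RDFS, these two rules can be expressed as domain and range statements on the (illustrative) ex:chargedWith property, for example:

from rdflib.namespace import RDFS

# continuing from the sketch above
g.add((EX.chargedWith, RDFS.domain, EX.PersonUnderInvestigation))
g.add((EX.chargedWith, RDFS.range, EX.Offence))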

To add triples, you can use either:

  • simple graph.add((s, p, o)) statements or
  • INSERT DATA {...} SPARQL updates.

If you use SPARQL updates, you can define a namespace dictionary like this:

from rdflib import Namespace
from rdflib.namespace import RDF, RDFS, FOAF

EX = Namespace('http://example.org#')
NS = {
    'ex': EX,
    'rdf': RDF,
    'rdfs': RDFS,
    'foaf': FOAF,
}

You can then give NS as an optional argument to graph.update() - or to graph.query() - like this:

g.update("""
    # when you provide an initNs-argument, you do not have 
    # to define PREFIX-es as part of the update (or query)

    INSERT DATA {
        # the triples you want to add go here,
        # you can use the prefixes defined in the NS-dict
    }
""", initNs=NS)

Task:

  • Write a SPARQL query that checks the RDF type(s) of Rick Gates in your RDF graph.
  • Write a similar SPARQL query that checks the RDF type(s) of money laundering in your RDF graph.
  • Write a small function that computes the RDFS closure on your graph.
  • Re-run the SPARQL queries to check the types of Rick Gates and of money laundering again: have they changed?
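
A sketch of such a type-checking query, again assuming the illustrative ex:RickGates IRI and the NS dictionary from above:

qres = g.query("""
    SELECT ?type WHERE {
        ex:RickGates rdf:type ?type .
    }
""", initNs=NS)
for row in qres:
    print(row[0])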

You can compute the RDFS closure on a graph g like this:

import owlrl

owlrl.DeductiveClosure(owlrl.RDFS_Semantics).expand(g)

Task: Use RDFS terms to add this rule as a triple:

  • A person under investigation is a FOAF person.
  • Like earlier, check the RDF types of Rick Gates before and after running RDFS reasoning. Do they change?
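
With the illustrative names used above, this rule is a single rdfs:subClassOf triple:

from rdflib.namespace import FOAF, RDFS

g.add((EX.PersonUnderInvestigation, RDFS.subClassOf, FOAF.Person))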

Task: Add in "plain RDF" as in Exercise 1:

  • Paul Manafort was convicted for tax evasion.
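
As plain RDF, this is one more triple, again with illustrative IRIs:

g.add((EX.PaulManafort, EX.convictedFor, EX.TaxEvasion))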

Use RDFS terms to add these rules as triples:

  • When one thing is convicted for another thing,
    • the first thing is also charged with the second thing.
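
In RDFS, this rule can be stated as a subproperty relation between the two (illustrative) properties:

g.add((EX.convictedFor, RDFS.subPropertyOf, EX.chargedWith))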

Note: we are dealing with a "timeless" graph here, which represents facts that have held at some point in time, but not necessarily all at the same time.

  • What are the RDF types of Paul Manafort and of tax evasion before and after RDFS reasoning?
  • Does the RDFS domain and range of the convicted for property change?
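
One way to inspect this is to list every triple about the (illustrative) ex:convictedFor property before and after expanding the closure:

qres = g.query("""
    SELECT ?p ?o WHERE {
        ex:convictedFor ?p ?o .
    }
""", initNs=NS)
for row in qres:
    print(row[0], row[1])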

==If you have more time...==

Task:

  • Create a Turtle file with all the RDF and RDFS triples from the earlier tasks.
  • Fire up GraphDB and create a new GraphDB Repository, this time with RDFS Ruleset for Inference and Validation.
  • Load the graph from the Turtle file and go through each of the above queries to confirm that GraphDB has performed RDFS reasoning as you would expect.
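
The graph can be written to a Turtle file with rdflib's serializer (the file name here is just an example):

g.serialize(destination='mueller_rdfs.ttl', format='turtle')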

You can list all the triples in the graph to see if anything has been added:

SELECT * WHERE { ?s ?p ?o }

Task:

  • Create another GraphDB Repository, but with No inference.
  • Re-run the above tests and compare with the RDFS inference results.