
Some Nepomuk ontology types

February 18, 2011

I am doing some work on a Top Secret Project that demonstrates the use of email data (in place of location data) on the SkyTwenty[1] platform.

I am making use of Aperture[2] to crawl an IMAP store and then allow sharing of contact and message information, so that queries can be run to discover:

  • who knows who, and in what domain
  • how many degrees of separation there are between contacts
  • whether selected contacts have any connection
  • how “well” they know each other, and so on.

Aperture makes use of the Nepomuk[3] message and desktop ontologies[4], which are fairly extensive, so a graphic helps in understanding some of the ontological relationships.

The brilliant Protégé 4 [5] ontology design tool has plugins for GraphViz[6] and OntoGraf[7] that produce some fairly neat images for visualising ontologies, so here they are. It would be nice if there were a way to include object and data properties (by annotation perhaps – I will try later), but for now I have compiled a table of the class properties from a crawl and a SPARQL query I ran against the repository I loaded the data into.

Contact class relationships

Note that OntoGraf needs the Sun JDK, so on Ubuntu, which ships with OpenJDK by default, you need to install the Sun JDK and agree to its license terms, then make sure that Protégé is using the Sun Java at /usr/lib/jvm/java-6-sun-1.6.0.22 (or whatever version).

Nepomuk message and contact classes

These tables are incomplete: they represent the classes and properties from a crawl of my nearly empty inbox. The full set of classes and properties for the Nepomuk ontologies is available on another page on this blog.
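As an aside, this is roughly the kind of query that produced the table below – a minimal sketch assuming Jena and TDB (which are used elsewhere on this blog), with a hypothetical store directory standing in for wherever the Aperture crawl was loaded:

```java
// List each rdf:type alongside the properties its instances use - essentially
// how the class/property table below was compiled. The TDB path is a
// hypothetical placeholder.
import com.hp.hpl.jena.query.*;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.tdb.TDBFactory;

public class ClassPropertyReport {
    public static void main(String[] args) {
        Model model = TDBFactory.createModel("/data/tdb/aperture-crawl"); // hypothetical path

        String sparql =
            "SELECT DISTINCT ?type ?property " +
            "WHERE { ?s a ?type ; ?property ?o . } " +
            "ORDER BY ?type ?property";

        QueryExecution qe = QueryExecutionFactory.create(sparql, model);
        try {
            ResultSet results = qe.execSelect();
            while (results.hasNext()) {
                QuerySolution row = results.next();
                System.out.println(row.getResource("type") + "\t" + row.getResource("property"));
            }
        } finally {
            qe.close();
        }
    }
}
```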

Prefix: URI
nie: http://www.semanticdesktop.org/ontologies/2007/01/19/nie#
nco: http://www.semanticdesktop.org/ontologies/2007/03/22/nco#
nmo: http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#
rdf: http://www.w3.org/1999/02/22-rdf-syntax-ns#
sesame: http://www.openrdf.org/schema/sesame#
rdfs: http://www.w3.org/2000/01/rdf-schema#
nfo: http://www.semanticdesktop.org/ontologies/2007/03/22/nfo#
Type: properties

nie:DataObject: rdf:type, nie:title, sesame:directType, nie:isPartOf, nie:characterSet, nie:mimeType, nmo:contentMimeType, nmo:messageSubject, nmo:plainTextMessageContent, nmo:messageId, nie:byteSize, nie:contentCreated, nmo:sentDate, nmo:receivedDate, nmo:from, nmo:sender, nmo:to, nmo:inReplyTo, nmo:references

nie:DataSource: rdf:type, sesame:directType

nco:Contact: rdf:type, sesame:directType, nco:fullname, nco:hasEmailAddress

nco:EmailAddress: rdf:type, sesame:directType, nco:emailAddress

nfo:Folder: rdf:type, nie:title, sesame:directType, nie:isPartOf

nmo:Email: rdf:type, sesame:directType, nie:isPartOf, nie:characterSet, nie:mimeType, nmo:contentMimeType, nmo:messageSubject, nmo:plainTextMessageContent, nmo:messageId, nie:byteSize, nie:contentCreated, nmo:sentDate, nmo:receivedDate, nmo:from, nmo:sender, nmo:to, nmo:inReplyTo, nmo:references

nmo:MailboxDataObject: rdf:type, sesame:directType, nie:isPartOf, nie:characterSet, nie:mimeType, nmo:contentMimeType, nmo:messageSubject, nmo:plainTextMessageContent, nmo:messageId, nie:byteSize, nie:contentCreated, nmo:sentDate, nmo:receivedDate, nmo:from, nmo:sender, nmo:to, nmo:inReplyTo, nmo:references

nmo:MimeEntity: rdf:type, sesame:directType, nie:isPartOf, nie:characterSet, nie:mimeType, nmo:contentMimeType, nmo:messageSubject, nmo:plainTextMessageContent, nmo:messageId, nie:byteSize, nie:contentCreated, nmo:sentDate, nmo:receivedDate, nmo:from, nmo:sender, nmo:to, nmo:inReplyTo, nmo:references

rdf:List: rdf:type, sesame:directType

rdf:Property: rdf:type, rdfs:domain, rdfs:range, rdfs:subPropertyOf, sesame:directType, sesame:directSubPropertyOf

rdfs:Class: rdf:type, rdfs:subClassOf, sesame:directSubClassOf, sesame:directType

rdfs:Datatype: rdf:type, rdfs:subClassOf, sesame:directSubClassOf, sesame:directType

rdfs:Resource: rdf:type, rdfs:domain, rdfs:range, rdfs:subPropertyOf, sesame:directType, rdfs:subClassOf, sesame:directSubClassOf, sesame:directSubPropertyOf, nie:title, nie:isPartOf, nie:characterSet, nie:mimeType, nmo:contentMimeType, nmo:messageSubject, nmo:plainTextMessageContent, nmo:messageId, nie:byteSize, nie:contentCreated, nmo:sentDate, nmo:receivedDate, nmo:from, nmo:sender, nmo:to, nmo:inReplyTo, nmo:references, nco:fullname, nco:hasEmailAddress, nco:emailAddress
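To make the relationships above concrete, here is a small hand-rolled sketch – my own illustration, with invented instance URIs – of how an nmo:Email and its nco:Contact sender hang together, using Jena:

```java
// Build one nmo:Email with an nco:Contact sender, using the classes and
// properties from the table above. The urn:example instance URIs are invented
// for illustration only.
import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.vocabulary.RDF;

public class EmailGraphSketch {
    static final String NMO = "http://www.semanticdesktop.org/ontologies/2007/03/22/nmo#";
    static final String NCO = "http://www.semanticdesktop.org/ontologies/2007/03/22/nco#";

    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();

        // An nco:Contact with a fullname and a linked nco:EmailAddress
        Resource sender = m.createResource("urn:example:contact-1")
            .addProperty(RDF.type, m.createResource(NCO + "Contact"))
            .addProperty(m.createProperty(NCO, "fullname"), "Jane Doe")
            .addProperty(m.createProperty(NCO, "hasEmailAddress"),
                m.createResource("urn:example:email-address-1")
                    .addProperty(RDF.type, m.createResource(NCO + "EmailAddress"))
                    .addProperty(m.createProperty(NCO, "emailAddress"), "jane@example.org"));

        // An nmo:Email from that contact
        m.createResource("urn:example:message-1")
            .addProperty(RDF.type, m.createResource(NMO + "Email"))
            .addProperty(m.createProperty(NMO, "messageSubject"), "Hello")
            .addProperty(m.createProperty(NMO, "from"), sender);

        m.write(System.out, "TURTLE");
    }
}
```

Running it prints the Turtle for one message, which is roughly what one record of the crawl looks like.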

[1] http://skytwenty.endofinternet.net:8080/treasure/moreInfo.usp
[2] http://aperture.sourceforge.net/
[3] http://nepomuk.semanticdesktop.org/xwiki/bin/view/Main1/
[4] http://www.semanticdesktop.org/ontologies/
[5] http://protege.stanford.edu/
[6] http://graphviz.org/
[7] http://protegewiki.stanford.edu/wiki/OntoGraf

Java Semantic & Linked Open Data webapps – Part 5.1

January 18, 2011

How to Architect?

Well – what before how – this is firstly about requirements, and then about their treatment.

Linked Open Data app

Create a semantic repository for a read-only dataset, with a SPARQL endpoint for the linked open data web. Create a web application with Ajax and HTML (no server-side code) that makes use of this data and demonstrates linkage to other datasets. Integrate free-text search and query capability. Generate a data-driven UI from the ontology if possible.

So – a fairly tall order. In summary:

  • define the ontology
  • extract entities from digital text and transform to RDF defined by the ontology
  • create an RDF dataset and host it in a repository
  • provide a SPARQL endpoint
  • create a URI namespace and resolution capability; ensure persistence and decoupling where possible
  • provide content negotiation for human and machine addressing
  • create a UI with client-side code only
  • create a text index for keyword search and possibly faceted search, and integrate it into the UI alongside query-driven interfaces
  • link to other datasets – geonames, dbpedia, any others meaningful – to demonstrate the promise and capability of linkage
  • build an ontology-driven UI so that a human can navigate the data, with appropriate display based on type, and an appropriate form to drive exploration

Here’s what we end up with:

Lewis Topographical Dictionary linked data app - system diagram

  1. A UserAgent – a browser – navigates to the Lewis TDI homepage – http://uoccou.endofinternet.net:8080/resources/sparql – and
  2. the webserver (Tomcat, in fact) returns HTML and Javascript. This is the “application”.
  3. Interactions on the webpage invoke Javascript that either makes direct calls to Joseki (6) or makes use of permanent URIs (at purl.org) for subject instances from the ontology.
  4. purl.org redirects to dynamic DNS, which resolves to the hosted application – on EC2, or during development to some other server. This means we have permanent URIs with flexible hosting locations, at the expense of some network round trips – YMMV.
  5. dyndns calls EC2, where a 303 filter intercepts the request and resolves it to a SPARQL (6) call for HTML, JSON or RDF. Pluggable logic for different URIs and/or Accept headers means this can be a SELECT, DESCRIBE, or CONSTRUCT (see the filter sketch after this list).
  6. Joseki, as a SPARQL endpoint, provides RDF query processing with extensions for free-text search, aggregates, federation and inferencing.
  7. TDB provides a single semantic repository instance (Java, persistent, memory mapped) addressable by Joseki. For failover or horizontal scaling with multiple SPARQL endpoints, SDB should probably be used. For vertical scaling with TDB – get a bigger machine! Consider other repository options where physical partitioning, failover/resilience or concurrent webapp instance access is required (i.e. if you’re building a webapp connected to a repository by code, rather than a web page that makes use of a SPARQL endpoint).
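Point 5 deserves a sketch. A minimal 303 filter might look something like the following – the redirect target paths and the Accept-header logic are illustrative placeholders, not the actual deployment:

```java
// A minimal sketch of the 303 filter idea, assuming a servlet container
// (Tomcat) in front of the Joseki endpoint. Machine clients get 303'd to a
// DESCRIBE on the sparql endpoint; humans get 303'd to an html rendering.
import java.io.IOException;
import java.net.URLEncoder;
import javax.servlet.*;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class SeeOtherFilter implements Filter {
    public void init(FilterConfig config) {}
    public void destroy() {}

    public void doFilter(ServletRequest req, ServletResponse res, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest request = (HttpServletRequest) req;
        HttpServletResponse response = (HttpServletResponse) res;

        String accept = request.getHeader("Accept");
        String subject = request.getRequestURL().toString(); // the PURL-resolved subject URI

        response.setStatus(HttpServletResponse.SC_SEE_OTHER); // 303 See Other
        if (accept != null && accept.contains("application/rdf+xml")) {
            // Machine clients: resolve to a DESCRIBE against the sparql endpoint
            String query = URLEncoder.encode("DESCRIBE <" + subject + ">", "UTF-8");
            response.setHeader("Location", "/resources/sparql?query=" + query);
        } else {
            // Human clients: resolve to an html rendering of the same resource
            response.setHeader("Location",
                "/resources/page?uri=" + URLEncoder.encode(subject, "UTF-8"));
        }
    }
}
```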

The next article will provide a similar description of the architecture used for the Java web application, whose code is directly connected to a repository rather than talking to a SPARQL endpoint.

Amazon EC2 t1.micro swizz!

January 6, 2011

Just got my bill from Amazon for the 2 instances I’m running, and found I’ve been charged for 728 hours on one of them – I thought this was supposed to be free for a year! Reading the small print again (ugh), it seems you are entitled to 750 free hours, but it doesn’t explicitly say per instance. So it seems it’s per account: you can run as many instances as you like and use a total of 750 hours across them before you get charged. The numbers bear this out – two instances running for a whole month come to roughly 1,400–1,500 instance-hours, so once the 750 free hours are used up, something close to the 728 on the bill is about what you’d expect. Then again, I suppose that’s reasonable enough – Amazon wouldn’t want every SME in the world running in the cloud for free for a year when it could be getting cash from them, would it? I must have been in a daze :-)

Categories: cloud

Final section in Java Semantic Webapps Part 3.1 completed

December 9, 2010

I’ve filled out the tools matrix with the 60 or so tools, libraries and frameworks I looked at for the two projects I created. Not all are used, of course, and only a few are used in both. It includes comments and opinion on which I used and why, all referenced. Phew.

Java Semantic & Linked Open Data webapps – Part 3.1

December 8, 2010

Community

This is a crucially important aspect in a new and evolving technology domain like the Semantic/Linked-Open-Data web – whether it’s a commercial or FOSS component you are thinking about using.

For commercial tools, many offer free end-user or community licensing, limited by size or frequency of use, but if you plan to take your application to market you may well need to upgrade to a commercial license, and these are often very expensive – a Semantic Web or knowledge-based application built on what might be an essential technology component will surely be seen as a large value-add area for commercial companies. While I believe this is true, and commercial licenses can be justified, some technology offerings have small print that takes you straight to commercial licensing once you go to production. Others have smaller but knobbled versions, while some do have true SME-quality licensing. So watch out – it can be a barrier to entry, and we do need to see mid-level, SME and cloud offerings for the pervasive or ubiquitous Semantic Linked Open Data web to succeed.

Unfortunately, it seems that many tools and libraries born from academic research or open-source endeavours, while available for use, are often not maintained. The author or team moves on, or the tool or library is published but languishes. You may find a tool that does what you need but that has poor or no documentation; no active maintenance; no visible community support forums or user base; or compatibility problems with other tools, libraries or runtime environments. While that rules many out for “production” usage or deployment, they can still be an important learning resource, and a means of comparing more current tools and libraries. I will itemise what I’ve come across below, but make sure you cast your professional eye over any offering – once you know what you are looking for, and what help in tools, libraries and environments you need; hopefully this article and the previous two have helped you with that.

  • What does it say it does and does not do?
  • How old is it? What are its dependencies?
  • How often is the code being updated?
  • Is it written in Java/PHP/Perl/.NET/Prolog/Lisp? Does it suit you – does it matter if it’s written in Perl when you’re going to write your app in Java? Is what you are going to use it for an independent stage in the production of your application, or are all stages intertwined? How much will you have to learn?
  • Who is the author? What else have they done? Are they involved in the standards process, coding, design, implementation, community? Blogs, conferences, presentations?
  • Is there documentation? A tutorial? A reference? Sample code? Production applications?
  • Is there a means of contacting the authors, and other users?
  • Are there bugs? Are there many? Are they being fixed?
  • What are the answers to questions like – simple, helpful, understanding, presumptuous, brick-wall!? One-sentence answers, or contextualised for the audience?
  • What is the user group like – beginner, intermediate, advanced, helpful, broad or narrow base, international, academic, commercial, …?
  • How quickly are questions answered?
  • Does it seem like the tool/library is successfully used by the community, or is it too early to say, or is it unfit for purpose :-( ?
  • Under what licensing is the tool/library made available?

Results

At the application level, this is how things pan out then.


Each area below shows how things worked out in the Linked Open Data webapp and in the Semantic backed J2EE webapp.

Metadata, RDF, OWL

Linked Open Data webapp: Need to have entries for each location in the gazetteer, and a list of those locations. Then need to relate one location to another from what the text describes about road links, directions and bearing, with metadata fields for each of those. Will also pull out administrative region type, population information, natural resources, and “House” information – seats of power/peerage/members of parliament. Will need RDF, RDFS and OWL for this, along with metadata from other ontologies. A further dataset was later added for townland names – this allows parish descriptions from Lewis to encompass townland divisions, with potential for crossover to more detailed reporting of the time (e.g. Parliamentary reports).

Semantic backed J2EE webapp: This application associates a member or person with a list of locations and datetimes. Locations are posted by a device on a platform by a useragent at a datetime, and are also associated with an application or group. An application is an anonymous association of people with a webapp page or pages that makes use of locations posted by its members. A group is an association of people who know each other by name/ID/email address and who want to share locations. Application owners cannot see locations or members of other applications unless they own each of the applications, and cannot see the location or datetime information with full accuracy. Group owners can see the location and datetime of their members with more, but still not full, accuracy. A further user type (“Partner”) can see all locations for all groups and applications, but cannot see the names of groups, applications or people, and has less accuracy on location and datetime. Concept subject tags can be associated with profiles and locations. A query capability is exposed to application owners and partners to allow data mining with inference. Queries can be scheduled and actions performed on “success” or “fail”. Metadata for people, devices, platforms, datetimes, locations, tags, applications and groups is required. ACL control based on that metadata is performed, but at the application logic level, not at the data level.

SPARQL, Description Logic (DL), Ontologies

Linked Open Data webapp: A SPARQL endpoint is provided on top of the extracted and loaded data, and is the primary “API” used by the application logic, which is expressed in Javascript. Inference allows regions, for instance, to be queried at a very high level rather than by listing specific types. An ontology is created around Location, location type, direction, bearing, distance, admin type, population, natural resource and peerage. A separate ontology was created for peerage relationships and vocabulary, and imported into the top-level Lewis ontology. Some fields are used from other ontologies, notably wgs84 and muo. The UI allows navigation by ontology (jOWL plugin).

Semantic backed J2EE webapp: No SPARQL interface is directly exposed, but SPARQL queries form the basis of a data console, with restrictions applied based on ID and application/group membership, as well as role. A custom ontology is created based around FOAF and SIOC, extended with RegisteredUser, Administrator, Partner, Application, Group, Device, Location and so on. The object model in Java mirrors this at the interface level to simulate multiple inheritance. Some cardinality restrictions, but mostly it makes use of domain/range specification from RDFS. The Umbel ontology is used for querying across tag relations. Inference has a huge impact on performance, and data partitioning would be required for query performance; this also has implications for the library code used (named graph and query support, inference configuration) and for application architecture and scale-out planning.

Artificial intelligence, machine learning, linguistics

Linked Open Data webapp: Machine learning and linguistic analysis are avoided in favour of syntactic a-priori extraction via gazetteer and word list, after sentences have been delimited within each delimited location entry or report. Aliases and synonyms were added manually later as a fixup for OCR errors. Quality is restricted by the text extracted from the PDF and its structural artifacts (page headings, numbers), newlines, linefeeds and lack of section headings within locations, location delimiters, and the linguistic vagaries of the author. Much, much more information is available within each entry, but for now the original text is also stored sentence by sentence with each entry.

Semantic backed J2EE webapp: None required here, as no extraction is performed. Tag words and terms are restricted to those available in Umbel (OpenCyc) and condensed to Umbel Subject Concept URIs, which SPARQL queries can then make use of for broader, narrower and associative queries – “Find everyone who likes sports who posted a location within 1 mile of here” (see the sketch after this section).

Linked Open Data

Linked Open Data webapp: Location name lookups at extraction time link to the WGS84 grid location and ID in geonames, then to the dbPedia entry – the former using a traditional web service API, the latter by SPARQL query. Coverage of about 85% was achieved. dbPedia lookup based on name was attempted, but had a higher error rate (no or ambiguous hits) and lower coverage (there are many infobox field variations for the same type of information); manual “eyeball” QA was deemed sufficient for the expected usage and audience. A link to Dictionaries of Biography for houses is possible, using some form of owl:equivalence with the peerage ontology. UI-level links to Sindice and Uberblic were attempted, but cross-domain scripting prohibited them. Locations are mapped to Google maps – this could be migrated to OpenStreetMap (geonames basis), and visualisation is possible with Google visualisation or another web tool. A server-side proxy was created for this, and for further dbPedia integration – this provides an example link to “people born before 1842 at this location”.

Semantic backed J2EE webapp: Links to Umbel are performed at query time, based on the Umbel Subject Concepts that members apply to their profiles and locations. The Umbel vocabulary is currently queried directly at the Structured Dynamics endpoint, but could be loaded into the same data repository, or into a separate but more local one (large memory footprint). Federated query capability depends on the pluggable persistence technology used in the application. Applications built on or off the domain are free to use owl:sameAs, for instance, to further link proprietary data with data stored in this system, but need to make that association within their own repository. Links can be made to profile identity (local or OpenID) if known, or if the user expressly associates it (after OAuth verification), to a wgs84 location (assuming some proximity calculation), or to an application or group name (if known).

Community & Tools

Linked Open Data webapp: All open-source tooling required for extraction, repository and application/UI code (open public dataset, no commercial aspects). Some components need handwriting – e.g. content negotiation. Most libraries facilitate rather than fulfil requirements – e.g. RDF generation and serialization, ontology creation, code generation. Damn – I have to write code!

NLP and ML are too advanced, too manual and too time consuming for a beginner, or for a one-person prototyping “team”.

UI generation from RDF is a problematic area – it would be good to be able to generate a UI now there’s an ontology, but it is no more advanced than any UI or form generation from XML or other structured data.

Link generation code is largely manual, and could do with abstraction and ease of use (but this is a complex area!). Lots and lots to learn; active support and experience required. Cross-domain scripting is a problem for Linked Open Data.

Semantic backed J2EE webapp: Where open linked data isn’t a primary requirement, most other requirements are met by traditional RDBMS-based technology and architecture. Open source can meet all component requirements for now (tech demo). So: 3-tier MVC architecture, DAO and service objects, enterprise security and ACL.

RDF access libraries – read and write – are available, each with differing features, compliance and performance levels.

Federation is poorly supported in repository/RDF access libs – a complicated area, but Linked Open Data needs it, and being forced to devolve to large repositories isn’t an attractive option.

Inference is slow.

There are no JDBC-type access wrappers for semantic repositories, and SPARQL is young and evolving.

Concurrency and multi-instance access considerations need to be made up front, early in development.

There are some library- or repository-specific ORM-type tools, and one JPA-based library (that I found) being developed. Lots and lots to learn; active support and experience required.
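As an illustration of the kind of data-mining query mentioned above, here is a hedged sketch of “everyone who likes sports who posted a location within 1 mile of here”. The ex: properties are hypothetical stand-ins for the custom FOAF/SIOC-based ontology, and the 1-mile radius is approximated with a crude bounding box on wgs84 coordinates:

```java
// A sketch of a "nearby interests" query with Jena. The ex: vocabulary is
// invented for illustration; the real application ontology differs.
import com.hp.hpl.jena.query.*;
import com.hp.hpl.jena.rdf.model.Model;
import com.hp.hpl.jena.rdf.model.ModelFactory;

public class NearbyInterestsQuery {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel(); // stand-in for the real repository

        double lat = 53.35, lon = -6.26;  // "here"
        double d = 1.0 / 69.0;            // roughly 1 mile in degrees of latitude

        String sparql =
            "PREFIX wgs84: <http://www.w3.org/2003/01/geo/wgs84_pos#> " +
            "PREFIX ex: <http://example.org/ontology#> " +
            "SELECT DISTINCT ?person WHERE { " +
            "  ?person ex:interestedIn ex:Sports . " +   // tag condensed to a subject concept
            "  ?person ex:posted ?loc . " +
            "  ?loc wgs84:lat ?lat ; wgs84:long ?long . " +
            "  FILTER (?lat > " + (lat - d) + " && ?lat < " + (lat + d) +
            "       && ?long > " + (lon - d) + " && ?long < " + (lon + d) + ") " +
            "}";

        QueryExecution qe = QueryExecutionFactory.create(sparql, model);
        try {
            ResultSet rs = qe.execSelect();
            while (rs.hasNext()) {
                System.out.println(rs.next().getResource("person"));
            }
        } finally {
            qe.close();
        }
    }
}
```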

Tools

This is as comprehensive a list as I can come up with, based on what I looked at and ended up using (or not). There are many, many more for sure, some in Java, others in various other languages. As some of the work types in the text-to-knowledge progression are often independent, being available in Java may not be important or even a consideration for you. So – look here, there and everywhere. See also Dave Beckett’s [81] list for a great source of information about available tools and technologies.

Category | Tool | Comment | Linked Open Data webapp | Semantic backed J2EE webapp
(Y = used in that project, X = evaluated but not used)

Extraction | GATE [56] | IDE for configuration of NLP toolsets and training the ML engine. Active user group, but the tool UI seemed buggy (Q1 2010) and the documentation was obtuse – not geared towards those not “in the know”, IMO. Still good, but would need a lot of effort and patience. | X | X
 | OpenCalais [96] | Commercially oriented business and news online entity extraction and linking. Not suitable for historical archive text; commercial. | X | X
RDF generation | (nothing) | This is part of the transformation of source content to “knowledge”. Once entities are extracted they need to be used in RDF triples – how you go about this depends on your vocabulary and ontology, and it’s up to you to use the RDF-Java object frameworks (below) that allow you to create a Subject and add a Property with an Object value. I haven’t found a tool that would allow code to generate RDF from tagged entities, say, and it’s probably not reasonable to think in this way, however convenient – how would such a tool know which relationships in an ontology were asserted in the entity set you gave it? The only way to go about this is to code those things yourself from the knowledge you already have about the information, or what you want to assert – or perhaps, if you are dealing with a database, to use its schema as the basis for a set of asserted statements in RDF, using D2R [87] or Triplify [88] say (do you need inference or not?). This approach was not used in either of these projects, however. Perhaps owl2java [99] might have helped? See the sketch after this table. | X | X
NLP, ML | GATE [56] | NLP engine from Sheffield University with support for ML – see also the Extraction category above. Tried but not used. | X | X
 | OpenNLP [89, 90] | NLP library for tokenization, chunking, parsing and coreference. Simpler than GATE, less documentation; dormant? Tried but not used. | X | X
 | MinorThird [91] | Probably more ML than NLP, but with tokenization and extraction capability. Getting long in the tooth, and had some compatibility issues when tested. | X | X
 | UIMA [92, 94, 95] | “Unstructured Information Management Architecture”. A full-blown framework for NLP and ML – “text mining”, a la GATE. Now in Apache (contributed by IBM). Good documentation, active support and development. Came close to being used for the Linked Data app but came too late, and seemed large and time-consuming to learn (in my timescale). However, for a version 2 of the project I would use it over GATE and the custom code I built – the documentation for end users and developers is less assuming than GATE’s, various plugins are available, and as it is modular (so is GATE, btw) you can create and add your own discrete code into the UIMA processing pipeline. You would still need something to generate RDF based around your ontology and the extracted entities, though… | X | X
 | SenseRelate [93] | NLP-WordNet disambiguation toolkit. Couldn’t see how I would integrate this, or what purpose it would serve, as I was using a-priori knowledge of the text for the Linked Open Data webapp and application business logic for the Semantic backed J2EE webapp. Also getting old… | X | X
 | LingPipe [106] | Very interesting toolkit for NLP, text and document processing, but ultimately with a commercial license. | X | X
 | Mallet [107] | Like LingPipe but open source, with sequence tagging and topic modelling. | X | X
 | Weka [108] | Another text mining tool; open source, good docs, current and maintained; also works with GATE [56]. | X | X
RDF-Java | OpenJena [59, 65] | Maturing framework for RDF with Java. The SPARQL implementation [61] follows the standards closely and previews upcoming versions, as Andy Seaborne is on the SPARQL W3C group. Has repository capability as well. Used in both projects, though in the J2EE app it was just one of the possibilities for repository integration and RDF capability. High-traffic support forum – a popular choice – though you are expected to provide working code examples when describing problems; discussion not entertained! HP [64] and now Apache [65] backing. Combined with JenaBean [71] and Empire-JPA [72] in the J2EE app. TTL/N3 config may seem alien to Java webapp developers. | Y | Y
 | KAON [97] | Another library – didn’t seem as popular as Jena or Sesame. Documentation? Old, not actively maintained? | X | X
 | Sesame [62] | Modular RDF-to-Java library and repository framework. v3 expected soon (Q1 2011?). Good documentation and comment available on and off site, but you still need to experiment. The support forum can be slow and low traffic, but it is still a popular choice. Also home to Elmo [66] (an object-RDF extension) and AliBaba [67] – “the next generation of the Elmo codebase”. Combined with Empire [72] in the J2EE app. TTL/N3 config may seem alien to Java webapp developers. | X | Y
Object-RDF | JenaBean [71] | Appears dormant now, but a Jena object library with custom annotations to model and map Java classes to RDF classes. Support very slow; low activity. | X | Y
 | Empire-JPA [72] | Aka Empire-RDF. From the makers of Pellet [75]. A JPA implementation for access to semantic repositories, with adapters for Sesame, Jena and Fourstore [74]. Newish; v0.7 about to be released. Support good, interested, helpful. | X | Y
 | RDF2Go [79] | Abstraction over repositories and triplestores, with Jena, Sesame and OWLIM adapters. Decided in favour of Empire. | X | X
Repository and/or database | TDB [60] | Single-instance repository (Java, persistent, memory-mapped files in a 64-bit JVM), with cmdline and Jena integration. No clustering or replication capability – must be local to the webapp. Configuration can be awkward, IMO, but easy enough to get started with. Inferencing and custom ontology support, both at configuration and code levels. Single writer, multiple readers. Used in both projects, though in the J2EE app it was just one of the possible repository technologies. | Y | Y
 | SDB [63] | RDBMS-backed repository technology for Jena. External connection handling possible. Single writer, multiple readers. Slower than TDB, and slow compared to Sesame. In the J2EE app it was just one of the possible repository technologies. | X | Y
 | Sesame [62] | Provides proxy HTTP capability in front of in-memory, file-based or database-backed repositories. Inference by configuration, performed on write – inferred statements are asserted and persisted. Allows multiple webapp instances to make use of any of the repositories. Web-based “workbench”. Limited reasoning support compared to Jena. The support forum could be described as “slow”. OntoText [73] backing. | X | Y
 | BigData [68] | Sesame [62] + Zookeeper [77] + MapReduce [78] based clustered semantic repository for very large datasets. Too big for either app at this stage, but Empire/Sesame usage provides a growth path. | X | X
 | AllegroGraph [69] | Lisp-based semantic repository with community and commercial licensing options for larger datasets. HTTP interface – could be used as an alternative to Jena/Sesame/Empire. A biggish application and framework to read and learn – too big for now! | X | X
 | OWLIM [70] | Large-scale repository based around Sesame. Reasoning support is better than Sesame’s, and it takes an alternative approach to implementation compared with, say, Jena. Community and commercial licenses. Too big for now! | X | X
 | Fourstore [80] | Python semantic repository. Could be used behind Empire [72]. | X | X
Content negotiation | Pubby [76] | WAR file with configuration (N3) for URI mapping, 303 redirects and many other aspects of Linked Data access – for SPARQL endpoints that support DESCRIBE. I wrote a filter that could sit on a remote front end as an alternative, but Pubby may get used later. | X | X
SPARQL access & endpoint | Joseki [58] | SPARQL endpoint for use with Jena. Needs URL rewriting for PURLs, and content negotiation code in front (custom code). | Y | X
Link generation | N/A | Use custom code from e.g. Jena or Sesame to create statements in the model – once you’ve designed your URI scheme – and get the code to serialise/materialise the URI for you. See the sketch after this table. | Y | Y
Ontologies | Protégé [82] | IDE to create RDFS and OWL ontologies, with reasoning and visualisation. | Y | Y
 | NeOn Toolkit [100] | Eclipse-based tool suite for semantic apps. Broad scope; Protégé seemed a better fit – easier and quicker to get to grips with at the time. May be used again, though. | X | X
 | KAON – OI-Modeler [98] | Old. Still available? Being maintained? | X | X
 | m3t4 [101] | Looked promising – a simple Eclipse plugin – but had compatibility and maintenance issues. Switched to Protégé in the end. | X | X
Inference & Reasoning | Jena [59] | Jena has built-in inference capability, but is considered slower than others [86]. In the J2EE app, with an RDBMS-backed repository, it was poor, IMO. With a TDB repo it is better, but still something you really need to evaluate before you would deploy to production. This is probably true of all current repositories, but Jena seems to be at the slow end of the scale. However, it does deliver high standards compliance rather than the “degraded” compliance you may get with others. | Y | Y
 | Sesame [62] | Sesame has “reduced” reasoning support – it can do RDFS-based reasoning, and if custom ontologies are added to a repository type with inference support it will make use of them. If a “view” of a dataset is required that doesn’t contain inferred statements, a query parameter needs to be used so that they are filtered out. | X | Y
 | Pellet [75] | “Independent” inference and reasoning. Not used except as a plugin in Protégé. Supposedly faster than some others. | X | X
 | OWLIM [73] | OWLIM comes with its own flavour of inference and reasoning – “support for the semantics of RDFS, OWL Horst and OWL 2 RL”. | X | X
UI Generation & Rendering | Talis SPARQL js lib [57] | Javascript interface for using datasources hosted on the Talis platform. Decided not to host data “offsite” at this stage. | X | X
 | BrownSauce [83] | RDF UI generation that it might be possible to plug into servlet code and a SPARQL endpoint. Dormant, unsupported? Dependency compatibility and documentation issues. | X | X
 | Fenfire [53] | Visualisation interface – last update 2008. Seems like a research project for developers only. | X | X
 | Humboldt [54] | Faceted browser – not publicly available, it seems. | X | X
 | ZLinks [55] | Linked data link generator – general purpose, browser plugin. | X | X
 | Facet [84] | Standalone faceted browser for RDF datasets, Prolog-based. (Can’t integrate with Java/js?) | X | X
 | Longwell [85] | Standalone faceted browser for RDF datasets, Fresnel [104] based – dormant? integratable? extendable? | X | X
 | jOWL [86] | Javascript lib for OWL-ontology-driven browsing. Last release v1, 2009. Low-traffic support, but the code is accessible and customisable. | Y | X
 | Fresnel [104] | Display vocabulary for RDF. Integrates at the Java level; it may have been possible to create a Spring [105] view module (I use Spring a lot), but that was another thing to learn and I wanted to use plain old Javascript and HTML as much as possible. Has promise, but documentation, support and maintenance may be an issue. | X | X
 | Exhibit [102, 103] | “Publishing” for RDF datasets – looked promising and useful, but had compatibility issues IIRC, and integration with an existing semantic repository wasn’t clear. | X | X
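As promised in the “RDF generation” and “Link generation” rows, a minimal sketch of the manual statement creation and URI minting involved, using Jena – the ex: URI scheme and the geonames ID are hypothetical placeholders:

```java
// Mint a URI from your own scheme, assert statements about it, link it to an
// external dataset, and serialise. Illustrative only: the ex: namespace and
// the geonames ID are invented placeholders.
import com.hp.hpl.jena.rdf.model.*;
import com.hp.hpl.jena.vocabulary.OWL;
import com.hp.hpl.jena.vocabulary.RDF;
import com.hp.hpl.jena.vocabulary.RDFS;

public class LinkGenerationSketch {
    static final String EX = "http://example.org/lewis/"; // hypothetical URI scheme

    public static void main(String[] args) {
        Model m = ModelFactory.createDefaultModel();
        m.setNsPrefix("ex", EX);

        // Mint a URI for an extracted entity using the designed URI scheme
        Resource location = m.createResource(EX + "location/Ardagh");
        location.addProperty(RDF.type, m.createResource(EX + "Location"));
        location.addProperty(RDFS.label, "Ardagh");

        // Link generation: assert equivalence with an external dataset
        // (placeholder geonames ID, not a real lookup result)
        location.addProperty(OWL.sameAs,
            m.createResource("http://sws.geonames.org/1234567/"));

        m.write(System.out, "RDF/XML-ABBREV");
    }
}
```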

[53] http://events.linkeddata.org/ldow2008/papers/14-hastrup-cyganiak-browsing-with-fenfire.pdf
[54] http://events.linkeddata.org/ldow2008/papers/15-kobilarov-dickinson-humboldt-exploring.pdf
[55] http://zitgist.com/products/zlinks/zlinks.html
[56] http://gate.ac.uk/
[57] http://api.talis.com/stores/bbc-demo/items/demos/lib/sparql.js
[58] http://www.joseki.org/
[59] http://www.openjena.org
[60] http://openjena.org/TDB/
[61] http://openjena.org/ARQ
[62] http://www.openrdf.org
[63] http://openjena.org/SDB/
[64] http://www.hpl.hp.com/semweb/
[65] http://incubator.apache.org/projects/jena.html
[66] http://www.openrdf.org/doc/elmo/1.5/
[67] http://www.openrdf.org/doc/alibaba/2.0-beta3/
[68] http://www.bigdata.com/bigdata/blog/
[69] http://www.franz.com/agraph/allegrograph/
[70] http://www.ontotext.com/owlim/
[71] http://code.google.com/p/jenabean/
[72] http://groups.google.com/group/empire-rdf/
[73] http://www.ontotext.com/owlim/
[74] http://packages.python.org/ordf/index.html
[75] http://clarkparsia.com/pellet/
[76] http://www4.wiwiss.fu-berlin.de/pubby/
[77] http://hadoop.apache.org/zookeeper/
[78] http://en.wikipedia.org/wiki/MapReduce
[79] http://semanticweb.org/wiki/RDF2Go
[80] http://packages.python.org/ordf/index.html
[81] http://planetrdf.com/guide/
[82] http://protege.stanford.edu
[83] http://brownsauce.sourceforge.net/
[84] http://slashfacet.semanticweb.org/
[85] http://simile.mit.edu/wiki/Longwell
[86] http://www4.wiwiss.fu-berlin.de/bizer/BerlinSPARQLBenchmark/results/index.html#results
[87] http://www4.wiwiss.fu-berlin.de/bizer/d2r-server/
[88] http://triplify.org/About
[89] http://opennlp.sourceforge.net/README.html
[90] http://sourceforge.net/apps/mediawiki/opennlp/index.php?title=Main_Page
[91] http://sourceforge.net/apps/trac/minorthird/wiki
[92] http://uima.apache.org/
[93] http://senserelate.sourceforge.net/
[94] http://www.julielab.de/Resources/Software/NLP_Tools.html
[95] http://www.research.ibm.com/journal/sj43-3.html
[96] http://www.opencalais.com/
[97] http://kaon.semanticweb.org/users
[98] http://kaon.semanticweb.org/docus/Manual_KAON-OI-Modeler_November_2002.pdf/view
[99] http://www.incunabulum.de/projects/it/owl2java/owl2java-a-owl2java-generator
[100] http://neon-toolkit.org/wiki/Main_Page
[101] http://www.m3t4.com/semantic/updates/install.html
[102] http://www.simile-widgets.org/exhibit/
[103] http://groups.csail.mit.edu/haystack/
[104] http://www.w3.org/2005/04/fresnel-info/
[105] http://www.springframework.org
[106] http://alias-i.com/lingpipe/index.html
[107] http://mallet.cs.umass.edu/
[108] http://www.cs.waikato.ac.nz/ml/weka/
