
CAP and RDF storage at web scale

November 9, 2012

Requirements:

  • Store RDF – upload/insert at runtime. (Not a URI per triple: don't want a storm of network requests or a lazy-load convoy.)
  •   Possible inference, though not a top priority
  •   Scale – availability and durability across multiple synchronised datacentres, geo-regional partitioning
  •   Interface
    •   SPARQL
    •   Java (did someone say Prolog?) – see the SPARQL-from-Java sketch after this list
    •   programmable standards (JDBC, JPA, in lieu of a JGraphDbConnectivity (“JGBC”) standard)
  •   Triple-level security/ACLs
  •   Transaction support
  •   SPARQL 1.1
  •   FOSS
  •   Non-Hadoop – don't want batch-only capability or stop/start reconfiguration to scale: want dynamic load and query.
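
On the interface point, here is a minimal sketch of what "SPARQL from Java" could look like against an in-memory model, using Apache Jena's ARQ (current org.apache.jena packages) purely as an illustration of the kind of API I'm after, not a commitment to Jena as the store; the data.ttl file and the query are placeholders:

    import org.apache.jena.query.QueryExecution;
    import org.apache.jena.query.QueryExecutionFactory;
    import org.apache.jena.query.QuerySolution;
    import org.apache.jena.query.ResultSet;
    import org.apache.jena.rdf.model.Model;
    import org.apache.jena.rdf.model.ModelFactory;
    import org.apache.jena.riot.RDFDataMgr;

    public class SparqlFromJava {
        public static void main(String[] args) {
            // Load triples into an in-memory model at runtime (file name is a placeholder).
            Model model = ModelFactory.createDefaultModel();
            RDFDataMgr.read(model, "data.ttl");

            // Placeholder SPARQL 1.1 SELECT over the model.
            String query = "SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10";
            try (QueryExecution qe = QueryExecutionFactory.create(query, model)) {
                ResultSet results = qe.execSelect();
                while (results.hasNext()) {
                    QuerySolution row = results.next();
                    System.out.println(row.get("s") + " " + row.get("p") + " " + row.get("o"));
                }
            }
        }
    }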

Any suggestions? Can't have everything (that would be too much to ask in the 21st century), so I was thinking Mongo, Riak, Redis or Cassandra to get the availability and quick-start setup, but I suspect performance may be an issue from various things I've read, or that there are multiple translation steps into JSON/what-not, or an effectively proprietary API (I don't want to code to one and then find out it won't do the job and have to rip lots out). On the other hand, I'll probably have to take what I can get (and will be grateful), and code/engineer around my misgivings as best I can. Hopefully, with a shallow RDF graph, I can get away with it. Start small, agile, prove it does work (or does not), re-evaluate, and progress and change with an eye to the future.
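
And on the "translation steps into JSON/what-not" worry: below is a purely hypothetical sketch (class name, prefixes and data are all made up) of how a shallow RDF graph might be flattened into per-subject documents (subject -> predicate -> objects), which is roughly the shape a document store like Mongo would force onto the triples:

    import java.util.ArrayList;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch only: a shallow RDF graph flattened into per-subject
    // "documents" (subject -> predicate -> objects), i.e. the kind of translation
    // step a JSON/document store would impose on triples.
    public class SubjectDocs {
        private final Map<String, Map<String, List<String>>> docs = new HashMap<>();

        public void add(String subject, String predicate, String object) {
            docs.computeIfAbsent(subject, k -> new HashMap<>())
                .computeIfAbsent(predicate, k -> new ArrayList<>())
                .add(object);
        }

        public List<String> objects(String subject, String predicate) {
            return docs.getOrDefault(subject, Collections.emptyMap())
                       .getOrDefault(predicate, Collections.emptyList());
        }

        public static void main(String[] args) {
            SubjectDocs store = new SubjectDocs();
            store.add("ex:alice", "foaf:knows", "ex:bob");    // made-up example triples
            store.add("ex:alice", "foaf:name", "\"Alice\"");
            System.out.println(store.objects("ex:alice", "foaf:knows")); // prints [ex:bob]
        }
    }

The obvious cost of that shape is that anything beyond subject-keyed lookups (joins across subjects, inference, SPARQL itself) has to be rebuilt on top of it, which is exactly the "rip lots out" risk above.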
