Overview

This document gives you an overview of HIRO Graph, its governing concepts, and the developer API.

REST API

  1. All requests (except authentication) must contain a valid access token. Request a valid access token using the /auth/ API.

  2. All requests are made against a base $url (e.g. https://core.arago.co/api/).

  3. Most responses are in JSON (for exceptions, see the specific API specs).

    • JSON response format: set the Accept header to application/json

    • JSON request format (for requests with a body): set the Content-Type header to application/json
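
Below is a minimal sketch of such an authenticated JSON request using Java's built-in HTTP client. It assumes the access token is passed as a standard Bearer header; the endpoint path and token value are placeholders for illustration, not part of this overview:

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class HiroRequestExample {
  public static void main(String[] args) throws Exception {
    String baseUrl = "https://core.arago.co/api/"; // base $url
    String token = "<ACCESS_TOKEN>";               // obtained via the /auth/ API

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create(baseUrl + "some/endpoint")) // hypothetical endpoint
        .header("Authorization", "Bearer " + token) // assumption: Bearer token scheme
        .header("Accept", "application/json")       // ask for a JSON response
        .GET()
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());
    System.out.println(response.statusCode() + ": " + response.body());
  }
}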

API Browser

You can access the HIRO REST API browser on your HIRO instance: https://core.arago.co/help/specs/?url=definitions/graph.yaml.

Error Codes

HIRO Graph uses standard HTTP error codes. Please refer to https://en.wikipedia.org/wiki/List_of_HTTP_status_codes

Error codes may differ depending on which reverse proxy is configured in front of HIRO Graph; please refer to the documentation of the reverse proxy product you are using.

Error codes with special meaning for HIRO Graph:

Table 1. Error Codes

Error Code  Description                        Explanation
400         Bad Request                        Please review the request against the API documentation.
401         Token invalid, expired or missing  Make sure you use a valid token; get a new one.
403         Access denied                      Access to the resource is not granted by the current set of policies.
404         Not Found                          The requested resource does not exist.
408         Internal request timeout           The request took too long; limit the request or response size.
409         Conflict                           E.g. trying to create an already existing vertex.
500         Internal Error                     Generic error; please contact us at support@hiro.arago.co.
503         Service temporarily unavailable    Try again later.
888         Transaction failure                If two transactions collide, HIRO Graph cannot decide what to do with the failed transaction. Check the condition of the items in the request and retry if still possible.

Because everything may fail (software, middleware, network, …), clients working against the HIRO Graph API are expected to retry failed requests. The expectation that all requests always succeed is wrong and unrealistic.

A good measure for when to retry is:

// Retry only on server-side failures (5xx); 4xx errors require a client-side fix first.
if (response.getStatusCode() > 499) {
  retryTheSameRequest();
}

All status codes in the 4xx range require that the client fix something first, be it the request format (400), an expired access token (401), etc. The request can then be retried, e.g. once a new token has been obtained.
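
A possible retry loop with exponential backoff, expanding on the rule above. Response and sendRequest() are hypothetical stand-ins (matching the snippet above), and the retry count and base delay are illustrative choices, not values prescribed by HIRO Graph:

static Response sendWithRetry() throws InterruptedException {
  long delayMs = 500;                  // illustrative base delay
  Response response = sendRequest();   // hypothetical helper performing the HTTP call
  for (int attempt = 0; attempt < 5 && response.getStatusCode() > 499; attempt++) {
    Thread.sleep(delayMs);             // wait before retrying
    delayMs *= 2;                      // exponential backoff
    response = sendRequest();          // retry the same request
  }
  return response;
}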

Concepts

HIRO Graph is a semantic graph processing platform.

Semantic

HIRO Graph contains Entities (things) with Attributes (properties) and Verbs (connections), as well as other data like Timeseries or BLOBs. The semantic part is described at https://github.com/arago/OGIT.

Requirements for Namespaces, IDs and Attributes

One must never depend on the format of the ID that HIRO Graph generates.

  1. All type ids (attribute ids, entity ids, verb ids) in HIRO Graph are namespaced.

  2. To ensure consistency and avoid ambiguity, these have a mandatory prefix http://www.purl.org/. Everything after this prefix is the shortened id that will be used by the graph, e.g. ogit/_id derives from http://www.purl.org/ogit/_id (see http://www.purl.org/docs/index.html for further info). When using an id like ogit/_id in a URL path, replace the "/" with "%2F".

  3. Attributes that are not defined in OGIT have an empty namespace, e.g. /IssueXML.

  4. The properties id and label can never be used due to storage limitations.

  5. Free attributes have to begin with "/", e.g. "/environmentType": "PROD".

  6. All attribute values must be of type string.

  7. System attributes start with ogit/_.

  8. ogit/_xid is an external ID that can be set on vertex creation to map the vertex to an external system. Setting an ogit/_xid that another entity in the local cache already has will throw a 409. Nevertheless, multiple entities with the same ogit/_xid may exist at the same time if data is created with sync.

  9. All attributes are indexed with the keyword analyzer, so wildcards and submatches cannot be used on them; matching is also case-sensitive.

  10. ogit/_content, when written, is indexed with the standard analyzer and additionally with an ngram analyzer. A text like "a sample text of things" can be queried with ogit/_content:sample to find words in it, or ogit/_content.ngram:amp to find ngrams in it. This field is allowed for all entities and can be filled automatically by setting the indexed: property in the entity definition.

  11. ogit/_content is the sum of all attribute values that are specified under ogit:indexed-attributes in OGIT, e.g. https://github.com/arago/OGIT/blob/master/NTO/Automation/entities/MARSNode.ttl#L131

  12. ogit/_tags, when written, is indexed as a list of keywords. The format when writing is "some, tag, with spaces", which is internally indexed as ["some", "tag", "with spaces"]. It can be searched with ogit/_tags:some to find full tags in the list. This field is allowed for all entities.

  13. ogit/_owner can be set to a team id to allow sharing data via policies. If not set in the request, the default team from the account profile is used (or, if that is not set, the default for the organization).

  14. ogit/_scope can be set on vertex creation to put the vertex into a specific data scope; if not set, the default from the context is used.

  15. The length of attribute values must not exceed 60 kB; values larger than 60 kB will not be indexed. The following attributes can have arbitrary content length, but cannot be used to match in queries:

    • ogit/message

    • ogit/content

    • ogit/description

    • ogit/comment

    • ogit/licenseKey

    • ogit/question

    • ogit/response

    • ogit/reason

    • ogit/serviceContract

    • ogit/taskLog

    • ogit/value

    • ogit/values

  16. Free attributes can have any length, but cannot be used in queries if over 60 kB.

  17. There are limitations on data size (trying to create or update data bigger than allowed results in a 400 error). For performance reasons it is recommended to build applications such that a single piece of data is no bigger than 1 MB:

    • issue/task in engine: 1 MB

    • vertex: 10 MB

    • timeseries value: 100 MB

    • BLOB content: 100 MB

    • API request payload: 100 MB

  18. Stream responses are used to transfer potentially large numbers of data pieces, such as query results or timeseries values. Each item is sent from the server separately and can be parsed by JSON streaming, e.g. {"items":[{$data1},{$data2},…,{"error":{"code":408,"message":"internal request timeout"}}]}. Note: the client must check whether a data piece contains "error" and handle it (after an "error" item, no more data will be sent); see the sketch below.
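
A minimal sketch of such streaming parsing using the Jackson streaming API; the HiroStreamReader class name and the process() handler are illustrative, not part of the HIRO API:

import com.fasterxml.jackson.core.JsonParser;
import com.fasterxml.jackson.core.JsonToken;
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;
import java.io.InputStream;

public class HiroStreamReader {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  // Read {"items":[ ... ]} incrementally, one data piece at a time.
  public static void readItems(InputStream body) throws Exception {
    try (JsonParser parser = MAPPER.getFactory().createParser(body)) {
      // advance to the start of the "items" array
      JsonToken token = parser.nextToken();
      while (token != null && token != JsonToken.START_ARRAY) {
        token = parser.nextToken();
      }
      // read one item per iteration until the array ends
      while (parser.nextToken() == JsonToken.START_OBJECT) {
        JsonNode item = MAPPER.readTree(parser);
        if (item.has("error")) {
          // handle the error; no more data will follow
          System.err.println("stream error: " + item.get("error"));
          return;
        }
        process(item);
      }
    }
  }

  private static void process(JsonNode item) {
    // application-specific handling of one data piece
  }
}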