Overview
This document gives you an overview of HIRO Graph, its governing concepts, and the developer API.
REST API
- All requests (except authentication) must contain a valid access token. Request a valid access token using the /auth/ API.
- All requests are made against a base $url (e.g. https://core.arago.co/api/).
- Most responses are in JSON (for exceptions, see the specific API specs).
- For JSON responses: set the header `Accept` to `application/json`.
- For JSON requests (requests with a body): set the header `Content-Type` to `application/json`.
- All request payloads, unless defined differently in the API specification, must be UTF-8 encoded strings.
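Taken together, a request following these conventions might look like the sketch below. This is not official client code: the endpoint path and payload are made up, and the use of a `Bearer` authorization header is an assumption about how the access token is passed.

```python
import json
import urllib.request
from typing import Optional

BASE_URL = "https://core.arago.co/api"   # example base $url from above
ACCESS_TOKEN = "<your-access-token>"     # placeholder: obtain via the /auth/ API

def build_request(path: str, body: Optional[dict] = None) -> urllib.request.Request:
    """Build a request carrying the headers described above."""
    headers = {
        "Authorization": "Bearer " + ACCESS_TOKEN,  # assumption: bearer-token scheme
        "Accept": "application/json",               # ask for a JSON response
    }
    data = None
    if body is not None:
        headers["Content-Type"] = "application/json"  # JSON request body
        data = json.dumps(body).encode("utf-8")       # payload must be UTF-8
    return urllib.request.Request(BASE_URL + path, data=data, headers=headers)

# Hypothetical endpoint and payload, for illustration only:
req = build_request("/some/endpoint", {"/environmentType": "PROD"})
```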
API Browser
You can access the HIRO REST API browser on your HIRO instance: https://core.arago.co/help/specs/?url=definitions/graph.yaml
Error Codes
HIRO Graph uses standard HTTP error codes. Please refer to https://en.wikipedia.org/wiki/List_of_HTTP_status_codes
Depending on which reverse proxy is configured in front of HIRO Graph, different error codes may occur; please refer to the documentation of the reverse proxy product you are using.
Error codes with special meaning for HIRO Graph:
| Error Code | Description | Explanation |
|---|---|---|
| 400 | Bad Request | please review the request against the API documentation |
| 401 | Token invalid, expired or missing | make sure you use a valid token; get a new token |
| 403 | Access denied | access to the resource is not granted by the current set of policies |
| 404 | Not Found | the requested resource does not exist |
| 408 | Internal request timeout | the request took too long; limit the request or response size |
| 409 | Conflict | e.g. if you try to create an already existing vertex |
| 500 | Internal Error | generic error; please contact us at support@hiro.arago.co |
| 503 | Service temporarily unavailable | try again later |
| 888 | Transaction failure / Upstream error | if two transactions collide, HIRO Graph cannot decide what to do with the failed transaction; check the condition of the items in the request and retry if still possible |
Because everything may fail (software, middleware, network, …), clients working against the HIRO Graph API are expected to retry failed requests. The expectation that all requests succeed is, in the age of the internet, wrong and unrealistic.

A good measure for when to retry: all status codes in the 4xx range require the client to fix something first, be it the request format (400), a new access token (401), etc. The request can be retried once that is done, e.g. when a new token has been obtained. Server-side errors (5xx and the HIRO-specific 888) are transient and may simply be retried.
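The rule above can be sketched as a small helper. This is a sketch of the retry policy described in the text, not official client code; 888 is the HIRO-specific code from the table above.

```python
def should_retry(status: int, token_refreshed: bool = False) -> bool:
    """Decide whether a failed HIRO Graph request can be retried.

    4xx codes require the client to fix something first; a 401 becomes
    retryable once a fresh access token has been obtained. 5xx codes and
    the HIRO-specific 888 indicate transient failures and can be retried.
    """
    if status == 401:
        return token_refreshed   # retry only after obtaining a new token
    if 400 <= status <= 499:
        return False             # client must fix the request first
    return status >= 500         # 5xx and 888 are worth retrying as-is
```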
Concepts
HIRO Graph is a semantic graph processing platform.
Semantic
HIRO Graph contains Entities (things) with Attributes (properties), Verbs (connections) as well as other data like Timeseries or BLOBs. The semantic part is described here https://github.com/arago/OGIT.
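For illustration, an entity can be pictured as a set of string-valued attributes. The sketch below is hypothetical: the type name and attribute values are made up, and only the general shape (system attributes under `ogit/_`, OGIT-defined attributes, and free attributes beginning with "/") follows the rules described in the next section.

```python
import json

# A hypothetical entity: system attributes (ogit/_*), an OGIT-defined
# attribute, and a free attribute (which must start with "/").
entity = {
    "ogit/_type": "ogit/SomeEntityType",     # hypothetical type name
    "ogit/_xid": "external-system:node-42",  # hypothetical external ID
    "ogit/name": "example node",
    "/environmentType": "PROD",              # free attribute
}

# All attribute values must be of type string.
assert all(isinstance(v, str) for v in entity.values())
print(json.dumps(entity, indent=2))
```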
Requirements of Namespace, IDs and Attributes
One must never depend on the format of the ID that HIRO Graph generates.
- All type IDs (attribute IDs, entity IDs, verb IDs) in HIRO Graph are namespaced.
- To ensure consistency and avoid ambiguity, these have a mandatory prefix http://www.purl.org/. Everything after this prefix is the shortened ID that will be used by the graph, e.g. `ogit/_id` derives from http://www.purl.org/ogit/_id (see http://www.purl.org/docs/index.html for further info). Replace "/" in the ID with "%2F" when using it in a URL.
- Attributes that are not defined in OGIT have an empty namespace, e.g. `/IssueXML`.
- Free attributes have to begin with "/", e.g. `"/environmentType": "PROD"`.
- Free attributes containing a dot (".") are not allowed.
- All attribute values must be of type string.
- All attributes are indexed with the keyword analyzer, so wildcards and submatches cannot be used on them; they are also case-sensitive.
- System attributes start with `ogit/_`.
- Changes to system attributes, apart from those described below, are not allowed.
- `ogit/_xid` is an external ID which can be set on vertex creation to map to an external system. Setting this ID in a local cache will throw a 409 if another entity has the same `ogit/_xid`. Nevertheless, multiple entities with the same `ogit/_xid` may exist at the same time if data is created with sync.
- `ogit/_content`, when written, is indexed with the standard analyzer and also with an ngram analyzer. A text like "a sample text of things" can be queried like `ogit/_content:sample` to find words in it, or `ogit/_content.ngram:amp` to find ngrams in it. This field is allowed for all entities and can be filled automatically by setting the `indexed:` property in the entity definition.
- `ogit/_content` is the sum of all attribute values that are specified in OGIT under `ogit:indexed-attributes`, e.g. https://github.com/arago/OGIT/blob/master/NTO/Automation/entities/MARSNode.ttl#L131.
- `ogit/_tags`, when written, is indexed as a list of keywords. The format when writing is "some, tag, with spaces", which is indexed internally as ["some", "tag", "with spaces"]. It can be searched like `ogit/_tags:some` to find full tags in the list. This field is allowed for all entities.
- `ogit/_owner` can be set to a team ID to allow sharing data via policies. If not set in the request, the default team from the account profile is used (or, if that is not set, the default for the organization).
- `ogit/_scope` can be set on vertex creation to put the vertex into a specific data scope; if not set, the default from the context is used.
- The length of attribute values must not exceed 60kB. If they are larger than 60kB, they will not be indexed. The following attributes can have arbitrary content length, but cannot be used to match in queries:
  - ogit/message
  - ogit/content
  - ogit/description
  - ogit/comment
  - ogit/licenseKey
  - ogit/question
  - ogit/response
  - ogit/reason
  - ogit/serviceContract
  - ogit/taskLog
  - ogit/value
  - ogit/values
- Free attributes can have any length but cannot be used in queries if over 60kB.
- Limitations on data size (trying to create or update data bigger than allowed results in a 400 error). For performance reasons it is recommended to build applications so that a single data piece is not bigger than 1MB.
  - issue/task in engine: 1MB
  - vertex: 10MB
  - timeseries value: 100MB
  - BLOB content: 100MB
  - API request payload: 100MB
- Stream responses are used to transfer potentially larger amounts of data, such as query results or timeseries values. Each item is sent from the server separately and can be parsed via JSON streaming, e.g. `{"items":[{$data1},{$data2},…,{"error":{"code":408,"message":"internal request timeout"}}]}`. Note: the client shall check whether a data piece contains `"error"` and handle it (after "error", no more data will be sent).