- Summary
- Query Format and routes
  - Endpoint
  - Parameter Encoding
- Authentication
  - login
- Adding Archived Folders
  - addArchivedFolder
- Starting, Viewing, Stopping jobs
  - Start a full archive or recall job
  - Viewing running jobs
  - Viewing jobs history
  - Cancelling a running job
Summary
The Golden Copy GraphQL API is an authenticated remote interface, served over HTTPS, for querying an Eyeglass search appliance for files.
Query Format and routes
Queries to the GC API can be issued to any node in the search cluster. Queries run over HTTPS, with the bulk of the query passed as URL parameters.
Endpoint
All queries must be issued to https://ip.of.gc.node/graphql
Parameter Encoding
GraphQL queries must be URL-encoded and passed as the value of the query HTTP parameter:
https://ip.of.gc.node/graphql?query=url_encoded_graphql_query
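For example, curl can perform the encoding with --data-urlencode. A minimal sketch, mirroring the full login example later in this document (ip.of.gc.node is a placeholder for your node's address):

curl -s -G -k https://ip.of.gc.node/graphql \
  --data-urlencode 'query={ login(id:"user@domain.com", pass:"password") { token } }'

The -G flag sends the encoded data as URL parameters of a GET request, and -k skips certificate verification for appliances with self-signed certificates.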
Authentication
Authentication is achieved by retrieving a JSON Web Token. This token must be included in the Authorization header of all future GC API calls, using the Bearer scheme.
login
Query
Schema:
login(id: String!, pass: String!): LoginResult

Argument | Type | Value
--- | --- | ---
id (required) | String | Username and domain of the user logging in, in user@domain.com syntax. For local users, omit the domain.
pass (required) | String | Password of the user attempting to log in.
Response
Schema:
type LoginResult { user: User! token: String! }

Field | Type | Value
--- | --- | ---
user (non-null) | Object (User) | User object for the logged-in user.
token (non-null) | String | The JSON Web Token used to authorize future GC API calls.

Schema:
type User { name: String! role: String! }

Field | Type | Value
--- | --- | ---
name (non-null) | String | The name of the logged-in user, in DOMAIN\username format.
role (non-null) | String | One of: USER or ADMIN.
Example
Login with the username testuser@exampledomain.com, using the password NotReal!:

curl -s -G -k https://search.igls.com/graphql --data-urlencode 'query={ login(id:"testuser@exampledomain.com", pass:"NotReal!") { user { name } token } }'

{ "data": { "login": { "token": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJTSUQ6Uy0xLTUtMjEtMjAxODMyNTY2LTMxODczNTM0MDctMjgyOTk5MTcxMC0xNDQ4Iiwicm9sZSI6IlVTRVIiLCJleHAiOjE1NzAyODk3MTR9.0vs-tnOgs0cOBSYB_SOuNHdmV7NT6YisTwuIbZFMkJE", "user": { "name": "EXAMPLEDOMAIN\\testuser" } } } }
For any authenticated endpoint, the value of token must be sent in the Authorization header using the Bearer scheme:

Authorization: Bearer <token>
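For example, to call an authenticated query such as runningSearchJobs (described later in this document), pass the header with curl's -H flag. A minimal sketch, with <token> standing in for the value returned by login:

curl -s -G -k https://search.igls.com/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query={ runningSearchJobs(type:"all") { jobId state } }'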
Adding Archived Folders
addArchivedFolder
The main mutation to add an archived folder. Returns information about the added folder.
Query
Schema:
addArchivedFolder( accessKey: String, archiveDataAuditCron: String, backupNum: String, bucket: String, cloudtype: String, clusterName: String, container: String, endpoint: String, endpointIps: String, fullCron: String, host: String!, includes: String, incrementalCron: String, metaPrefix: String, path: String!, rateLimit: String, region: String, secretKey: String, skipS3FileExists: String, tier: String, trashBucket: String, excludes: String ): ArchivedFolderInfo
Argument | Type | Value
--- | --- | ---
accessKey | String | Access key for authentication to the cloud endpoint. Might be the username of the user when archiving to some endpoints.
archiveDataAuditCron | String | A cron expression for how often to run the data audit function.
backupNum | String | For full backups, the number of independent copies to keep.
bucket | String | The name of the bucket in cloud storage.
cloudtype | String | One of aws, azure, ecs, gcs, or other.
clusterName | String | The name of the source Isilon cluster.
container | String | For Azure, the container name.
endpoint | String | The URI of the cloud storage to connect to.
endpointIps | String | For ECS, the group of IPs to load balance over.
excludes | String | Glob syntax. Exclude these files / folders from archiving.
fullCron | String | A cron expression for how often to run the full archive function.
host | String! | The name of the Isilon cluster source.
includes | String | Glob syntax. Only include matching files in the archive.
incrementalCron | String | A cron expression for how often to run the incremental archive function.
metaPrefix | String | Used with the "other" cloudtype. Prefix for custom metadata tags in the payload.
path | String! | Starting with /ifs, the base location on the source filesystem to archive.
rateLimit | String | A limit in bytes/sec to set on the outbound traffic.
region | String | Cloud region.
secretKey | String | Secret credential for cloud storage.
skipS3FileExists | String | True/False. Skip checking whether records already exist in cloud storage before uploading.
tier | String | Cloud tier for archiving.
trashBucket | String | Bucket to use as a trash in case of deletes.
Response
Schema:
type ArchivedFolderInfo { id: String! cluster: String! path: String! accessKey: String archiveDataAuditCron: String backupNum: String bucket: String checksum: String cloudtype: String container: String disableIncremental: String endpoint: String endpointIpPool: [String] fullCron: String incrementalCron: String lastArchiveDate: Long lastFullArchiveDate: Long metaPrefix: String rateLimit: String region: String skipS3FileExists: String tier: String trashBucket: String }
Field | Type | Value
--- | --- | ---
id | String! | Unique ID for this archived folder.
cluster | String! | Name of the source cluster.
path | String! | Path of the files on the source cluster.
accessKey | String | Access key for authentication to the cloud endpoint. Might be the username of the user when archiving to some endpoints.
archiveDataAuditCron | String | A cron expression for how often to run the data audit function.
backupNum | String | For full backups, the number of independent copies to keep.
bucket | String | The name of the bucket in cloud storage.
checksum | String | True if explicit checksumming is enabled.
cloudtype | String | One of aws, azure, ecs, gcs, or other.
container | String | For Azure, the container name.
disableIncremental | String | True/False. Incremental archiving is disabled for this folder.
endpoint | String | The URI of the cloud storage to connect to.
endpointIpPool | [String] | For ECS, the group of IPs to load balance over.
fullCron | String | A cron expression for how often to run the full archive function.
incrementalCron | String | A cron expression for how often to run the incremental archive function.
lastArchiveDate | Long | Timestamp of the last archive job start.
lastFullArchiveDate | Long | Timestamp of the last full archive job start.
metaPrefix | String | Used with the "other" cloudtype. Prefix for custom metadata tags in the payload.
rateLimit | String | A limit in bytes/sec set on the outbound traffic.
region | String | Cloud region.
skipS3FileExists | String | True/False. Skip checking whether records already exist in cloud storage before uploading.
tier | String | Cloud tier for archiving.
trashBucket | String | Bucket to use as a trash in case of deletes.
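A minimal sketch of calling the mutation, mirroring the query parameter form of the login example. The cluster, path, bucket, endpoint, and credential values are illustrative; if your server requires an explicit mutation keyword, prefix the document with it:

curl -s -G -k https://search.igls.com/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query={ addArchivedFolder(host:"cluster1", path:"/ifs/data/projects", cloudtype:"aws", bucket:"example-bucket", endpoint:"https://s3.amazonaws.com", region:"us-east-1", accessKey:"<accessKey>", secretKey:"<secretKey>") { id cluster path } }'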
Starting, Viewing, Stopping jobs
Start a full archive or recall job
Archive and recall jobs are both started with the gcWalk mutation. The action argument controls which job runs.
Schema:
gcWalk( id: String!, action: String, applyMetadata: Boolean, auto_rerun: Boolean, csvPath: String, endTime: String, file: String, s3Update: Boolean, skipAcls: Boolean, skip_meta: Boolean, snapshot: String, sourcePath: String, startTime: String, subdir: String, targetCluster: String, targetPath: String, versionsNewerThan: String, versionsOlderThan: String ): ProcedureResult
Argument | Type | Value
--- | --- | ---
id | String! | Unique ID of the archived folder to run the job against.
action | String | UPLOAD to archive, GET to recall.
applyMetadata | Boolean | Apply metadata to recalled files.
auto_rerun | Boolean | Auto-start the rerun job after the main job has completed.
csvPath | String | Path to a CSV file containing files to upload.
endTime | String | For date-based recall, the latest date to recall.
file | String | Path to a file containing files to upload.
s3Update | Boolean | Run the s3update function on archive.
skipAcls | Boolean | Don't upload or recall ACLs.
skip_meta | Boolean | Don't upload or recall owner, group, mode.
snapshot | String | Read from this snapshot instead of taking a new one.
sourcePath | String | Source path for the files on S3. Defaults to <cluster>/<path>.
startTime | String | For date-based recall, the earliest date to recall.
subdir | String | Folder below the archived folder's path to archive/recall.
targetCluster | String | Recall to this cluster instead.
targetPath | String | Place files in this folder on recall.
versionsNewerThan | String | For version-based recall, recall versions newer than this date.
versionsOlderThan | String | For version-based recall, recall versions older than this date.
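A sketch of starting a full archive job for a folder, mirroring the query parameter form of the login example. The id value is illustrative; use the id returned by addArchivedFolder:

curl -s -G -k https://search.igls.com/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query={ gcWalk(id:"<folder-id>", action:"UPLOAD") { jobId state success } }'

Setting action to "GET" recalls the archived files instead.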
Response
Schema:
type ProcedureResult { jobId: String! state: JobState! finishedAt: Long message: String success: Boolean startedAt: Long }
Field | Type | Value
--- | --- | ---
jobId | String! | Unique ID for this job.
state | JobState! | State that the job is currently in, e.g. QUEUED or ARCHIVING.
finishedAt | Long | Timestamp of when the job finished.
message | String | Extra information about the job.
success | Boolean | True if the job finished with SUCCESS.
startedAt | Long | Timestamp of when the job started.
Viewing running jobs
Query to view the active running jobs on the system
Schema:
runningSearchJobs( type: String! ): [MonitoredJob]

Argument | Type | Value
--- | --- | ---
type | String! | "all" to get all jobs, or one of "GoldenCopy Recall", "Incremental Archive", "GoldenCopy Archive".
Response
Schema:
type MonitoredJob { acceptedBytes: Long! acceptedFiles: Long! archivedBytes: Long! archivedFiles: Long! duration: String! erroredBytes: Long! erroredFiles: Long! finishedAt: String! folderId: String! jobId: String! skippedBytes: Long! skippedFiles: Long! startedAt: String! state: String! type: String! changelistFileChangeCount: String changelistId: String hasAutoRerun: Boolean metaAcceptedFiles: Long s3WalkBytes: Long s3WalkFiles: Long success: Boolean }
Field | Type | Value
--- | --- | ---
acceptedBytes | Long! | Total number of bytes queued.
acceptedFiles | Long! | File count queued.
archivedBytes | Long! | Number of bytes archived.
archivedFiles | Long! | File count archived.
duration | String! | Duration of the job.
erroredBytes | Long! | Total number of bytes that failed.
erroredFiles | Long! | File count errored.
finishedAt | String! | Timestamp of when the job finished.
folderId | String! | ID of the folder for this job.
jobId | String! | Unique ID of this job.
skippedBytes | Long! | Total number of bytes skipped.
skippedFiles | Long! | File count skipped.
startedAt | String! | Timestamp of when the job was started.
state | String! | Current state of the job.
type | String! | Type of this job.
changelistFileChangeCount | String | For jobs with changelists, the number of files in the changelist.
changelistId | String | For jobs with changelists, the ID of the changelist.
hasAutoRerun | Boolean | True if this job will auto-spawn a rerun job when complete.
metaAcceptedFiles | Long | File count queued for metadata processing.
s3WalkBytes | Long | Number of bytes walked on S3.
s3WalkFiles | Long | Number of files walked on S3.
success | Boolean | True if this job has succeeded.
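A sketch of listing all running jobs with a few of their progress counters:

curl -s -G -k https://search.igls.com/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query={ runningSearchJobs(type:"all") { jobId type state acceptedFiles archivedFiles duration } }'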
Viewing jobs history
Query to view the history of jobs on the system
Schema:
jobsHistory( type: String!, folderId: String, tail: Long, kafkaOffset: Long ): [MonitoredJob]

Argument | Type | Value
--- | --- | ---
type | String! | "all" to get all jobs, or one of "GoldenCopy Recall", "Incremental Archive", "GoldenCopy Archive".
folderId | String | Filter jobs by this folder ID.
tail | Long | Return only this many of the most recent jobs.
kafkaOffset | Long | Used to paginate data, in combination with tail.
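A sketch of fetching the ten most recent jobs (add folderId to narrow the history to one archived folder):

curl -s -G -k https://search.igls.com/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query={ jobsHistory(type:"all", tail:10) { jobId type state success finishedAt } }'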
Response
Schema:
type MonitoredJob { acceptedBytes: Long! acceptedFiles: Long! archivedBytes: Long! archivedFiles: Long! duration: String! erroredBytes: Long! erroredFiles: Long! finishedAt: String! folderId: String! jobId: String! skippedBytes: Long! skippedFiles: Long! startedAt: String! state: String! type: String! changelistFileChangeCount: String changelistId: String hasAutoRerun: Boolean metaAcceptedFiles: Long s3WalkBytes: Long s3WalkFiles: Long success: Boolean }
Field | Type | Value
--- | --- | ---
acceptedBytes | Long! | Total number of bytes queued.
acceptedFiles | Long! | File count queued.
archivedBytes | Long! | Number of bytes archived.
archivedFiles | Long! | File count archived.
duration | String! | Duration of the job.
erroredBytes | Long! | Total number of bytes that failed.
erroredFiles | Long! | File count errored.
finishedAt | String! | Timestamp of when the job finished.
folderId | String! | ID of the folder for this job.
jobId | String! | Unique ID of this job.
skippedBytes | Long! | Total number of bytes skipped.
skippedFiles | Long! | File count skipped.
startedAt | String! | Timestamp of when the job was started.
state | String! | Current state of the job.
type | String! | Type of this job.
changelistFileChangeCount | String | For jobs with changelists, the number of files in the changelist.
changelistId | String | For jobs with changelists, the ID of the changelist.
hasAutoRerun | Boolean | True if this job will auto-spawn a rerun job when complete.
metaAcceptedFiles | Long | File count queued for metadata processing.
s3WalkBytes | Long | Number of bytes walked on S3.
s3WalkFiles | Long | Number of files walked on S3.
success | Boolean | True if this job has succeeded.
Cancelling a running job
Mutation to cancel a running job
Schema:
cancelSearchJob( jobId: String! ): [Boolean]

Argument | Type | Value
--- | --- | ---
jobId | String! | ID of the job to cancel.
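A sketch of cancelling a job, mirroring the query parameter form of the login example. The jobId value is illustrative; use the jobId reported by runningSearchJobs:

curl -s -G -k https://search.igls.com/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query={ cancelSearchJob(jobId:"<jobId>") }'

Because the return type is a list of Boolean scalars, no selection set is required.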
Response
Schema:
[Boolean]

Type | Value
--- | ---
Boolean | True if the cancel operation succeeded.