Golden Copy GraphQL API

 

Summary

The Golden Copy GraphQL API (GC API) is an authenticated remote interface, running over HTTPS, for querying an Eyeglass search appliance for files.

 

Query Format and Routes

Queries to the GC API can be issued to any node in the search cluster. Queries run over HTTPS, with the bulk of the query passed as URL parameters.

Endpoint:

All queries must be issued to https://ip.of.gc.node/graphql

Parameter Encoding

GraphQL queries must be issued as a URL-encoded value of the query HTTP parameter:

https://ip.of.gc.node/graphql?query=url_encoded_graphql_query
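
For example, the query { login(id:"user", pass:"pw") { token } } would appear in the request URL percent-encoded as shown below. The id and pass values are placeholders; the --data-urlencode option of curl, used in the examples that follow, performs this encoding automatically:

https://ip.of.gc.node/graphql?query=%7B%20login%28id%3A%22user%22%2C%20pass%3A%22pw%22%29%20%7B%20token%20%7D%20%7D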

 

Authentication

Authentication is achieved by retrieving a JSON Web Token (JWT). This token must be included in the Authorization header, using the Bearer scheme, on all subsequent calls to the GC API.

login

Query

 

Schema

login(id: String!, pass: String!): LoginResult

Argument | Type | Value
id (required) | String | Username and domain of the user logging in, in user@domain.com syntax. For local users, omit the domain.
pass (required) | String | Password of the user attempting to log in.

 

Response

 

Schema

type LoginResult {
  user: User!
  token: String!
}

Field | Type | Value
user (non-null) | Object (User) | User object for the logged-in user.
token (non-null) | String | The JSON Web Token to be used in authorizing future GC API calls.

type User {
  name: String!
  role: String!
}

Field | Type | Value
name (non-null) | String | The name of the logged-in user, in DOMAIN\username format.
role (non-null) | String | One of: USER or ADMIN.

 

Example

Log in with the username testuser@exampledomain.com, using the password NotReal!:

 

curl -s -G -k https://search.igls.com/graphql --data-urlencode 'query={
  login(id:"testuser@exampledomain.com", pass:"NotReal!") {
    user {
      name
    }
    token
  }
}'

 

{
  "data": {
    "login": {
      "token": "eyJhbGciOiJIUzI1NiJ9.eyJzdWIiOiJTSUQ6Uy0xLTUtMjEtMjAxODMyNTY2LTMxODczNTM0MDctMjgyOTk5MTcxMC0xNDQ4Iiwicm9sZSI6IlVTRVIiLCJleHAiOjE1NzAyODk3MTR9.0vs-tnOgs0cOBSYB_SOuNHdmV7NT6YisTwuIbZFMkJE",
      "user": {
        "name": "EXAMPLEDOMAIN\\testuser"
      }
    }
  }
}

 

For any authenticated endpoint, the value of token must be supplied in the Authorization header using the Bearer scheme:

 

Authorization: Bearer <token>
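
For example, a token returned by login can be supplied with curl's -H option on any later call. This is a minimal sketch: the host and token value are placeholders, and the runningSearchJobs query shown is documented later in this guide.

# The token below is a placeholder; use the value returned by the login query
curl -s -G -k https://ip.of.gc.node/graphql \
  -H 'Authorization: Bearer eyJhbGciOiJIUzI1NiJ9.placeholder.token' \
  --data-urlencode 'query={ runningSearchJobs(type:"all") { jobId state type } }'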

 

Adding Archived Folders

addArchivedFolder

The main mutation to add an archived folder. Returns information about the added folder.

Query

 

Schema

addArchivedFolder(
  accessKey: String,
  archiveDataAuditCron: String,
  backupNum: String,
  bucket: String,
  cloudtype: String,
  clusterName: String,
  container: String,
  endpoint: String,
  endpointIps: String,
  excludes: String,
  fullCron: String,
  host: String!,
  includes: String,
  incrementalCron: String,
  metaPrefix: String,
  path: String!,
  rateLimit: String,
  region: String,
  secretKey: String,
  skipS3FileExists: String,
  tier: String,
  trashBucket: String
): ArchivedFolderInfo

 

 

Argument | Type | Value
accessKey | String | Access key for authentication to the cloud endpoint. Might be the username of the user when archiving to some endpoints.
archiveDataAuditCron | String | A cron expression for how often to run the data audit function.
backupNum | String | For full backups, the number of independent copies to keep.
bucket | String | The name of the bucket in cloud storage.
cloudtype | String | One of: aws, azure, ecs, gcs, or other.
clusterName | String | The name of the source Isilon cluster.
container | String | For Azure, the container name.
endpoint | String | The URI of cloud storage to connect to.
endpointIps | String | For ECS, the group of IPs to load balance over.
excludes | String | Glob syntax. Exclude these files/folders from archiving.
fullCron | String | A cron expression for how often to run the full archive function.
host | String! | The name of the Isilon cluster source.
includes | String | Glob syntax. Only include matching files in the archive.
incrementalCron | String | A cron expression for how often to run the incremental archive function.
metaPrefix | String | Used with the "other" cloudtype. Prefix for custom metadata tags in the payload.
path | String! | Starting with /ifs, the base location on the source filesystem to archive.
rateLimit | String | A limit in bytes/sec to set on the outbound traffic.
region | String | Cloud region.
secretKey | String | Secret credential for cloud storage.
skipS3FileExists | String | True/False - skip checking if the records exist in cloud storage before uploading.
tier | String | Cloud tier for archiving.
trashBucket | String | Bucket to use as a trash in case of deletes.

 

Response

 

Schema

type ArchivedFolderInfo {
  id: String!
  cluster: String!
  path: String!
  accessKey: String
  archiveDataAuditCron: String
  backupNum: String
  bucket: String
  checksum: String
  cloudtype: String
  container: String
  disableIncremental: String
  endpoint: String
  endpointIpPool: [String]
  fullCron: String
  incrementalCron: String
  lastArchiveDate: Long
  lastFullArchiveDate: Long
  metaPrefix: String
  rateLimit: String
  region: String
  skipS3FileExists: String
  tier: String
  trashBucket: String
}

 

 

Field | Type | Value
id | String! | Unique ID for this archived folder.
cluster | String! | Name of source cluster.
path | String! | Path of files on source cluster.
accessKey | String | Access key for authentication to the cloud endpoint. Might be the username of the user when archiving to some endpoints.
archiveDataAuditCron | String | A cron expression for how often to run the data audit function.
backupNum | String | For full backups, the number of independent copies to keep.
bucket | String | The name of the bucket in cloud storage.
checksum | String | True if explicit checksumming is enabled.
cloudtype | String | One of: aws, azure, ecs, gcs, or other.
container | String | For Azure, the container name.
endpoint | String | The URI of cloud storage to connect to.
endpointIpPool | [String] | For ECS, the group of IPs to load balance over.
fullCron | String | A cron expression for how often to run the full archive function.
host | String! | The name of the Isilon cluster source.
incrementalCron | String | A cron expression for how often to run the incremental archive function.
lastArchiveDate | Long | Timestamp of last archive job start.
lastFullArchiveDate | Long | Timestamp of last full archive job start.
metaPrefix | String | Used with the "other" cloudtype. Prefix for custom metadata tags in the payload.
rateLimit | String | A limit in bytes/sec to set on the outbound traffic.
region | String | Cloud region.
secretKey | String | Secret credential for cloud storage.
skipS3FileExists | String | True/False - skip checking if the records exist in cloud storage before uploading.
tier | String | Cloud tier for archiving.
trashBucket | String | Bucket to use as a trash in case of deletes.
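
Example

Add an archived folder for /ifs/data/projects on a source cluster named prod-cluster, archiving to an AWS bucket. This is a minimal sketch following the same curl pattern as the login example: the host, token, and all argument values are placeholders, and the call is wrapped in a standard GraphQL mutation operation.

# All values below are placeholders; replace <token> with a token from the login query
curl -s -G -k https://ip.of.gc.node/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query=mutation {
  addArchivedFolder(
    host: "prod-cluster",
    path: "/ifs/data/projects",
    clusterName: "prod-cluster",
    cloudtype: "aws",
    bucket: "gc-archive-bucket",
    region: "us-east-1",
    accessKey: "AKIA-EXAMPLE",
    secretKey: "example-secret-key"
  ) {
    id
    cluster
    path
  }
}'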

 

 

 

Starting, Viewing, and Stopping Jobs

Start a full archive or recall job

Archive and recall jobs are started with the gcWalk mutation. The action parameter controls which operation is executed.

 

Schema

gcWalk(
  id: String!,
  action: String,
  applyMetadata: Boolean,
  auto_rerun: Boolean,
  csvPath: String,
  endTime: String,
  file: String,
  s3Update: Boolean,
  skipAcls: Boolean,
  skip_meta: Boolean,
  snapshot: String,
  sourcePath: String,
  startTime: String,
  subdir: String,
  targetCluster: String,
  targetPath: String,
  versionsNewerThan: String,
  versionsOlderThan: String
): ProcedureResult

 

 

 

 

Argument | Type | Value
id | String! | Unique ID of the archived folder.
action | String | UPLOAD to archive, GET to recall.
applyMetadata | Boolean | Apply metadata to recalled files.
auto_rerun | Boolean | Auto-start the rerun job after the main job has completed.
csvPath | String | Path to a file containing files to upload.
endTime | String | For date-based recall, the latest date to recall.
file | String | Path to a file containing files to upload.
s3Update | Boolean | Run the s3update function on archive.
skipAcls | Boolean | Don't upload or recall ACLs.
skip_meta | Boolean | Don't upload or recall owner, group, mode.
snapshot | String | Read from this snapshot instead of taking a new one.
sourcePath | String | Source path for the files on S3. Defaults to <cluster>/<path>.
startTime | String | For date-based recall, the earliest date to recall.
subdir | String | Folder below the archived folder's path to archive/recall.
targetCluster | String | Recall to this cluster instead.
targetPath | String | Place files in this folder on recall.
versionsOlderThan | String | For version-based recall, recall versions older than this date.
versionsNewerThan | String | For version-based recall, recall versions newer than this date.

 

Response

 

Schema

type ProcedureResult {
  jobId: String!
  state: JobState!
  finishedAt: Long
  message: String
  success: Boolean
  startedAt: Long
}

 

Field | Type | Value
jobId | String! | Unique ID for this job.
state | JobState! | State that the job is currently in. Can be QUEUED, ARCHIVING, or others.
finishedAt | Long | Timestamp when the job finished.
message | String | Extra information about the job.
success | Boolean | True if the job finished with SUCCESS.
startedAt | Long | Timestamp of when the job started.
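
Example

Start a full archive of an archived folder using its id as returned by addArchivedFolder. This is a minimal sketch; the host, token, and id value are placeholders. Changing action to "GET" starts a recall instead.

# id is the archived folder ID returned by addArchivedFolder (placeholder shown)
curl -s -G -k https://ip.of.gc.node/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query=mutation {
  gcWalk(id: "archived-folder-id", action: "UPLOAD") {
    jobId
    state
  }
}'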

 

Viewing running jobs

Query to view the active running jobs on the system

 

Schema

runningSearchJobs(
  type: String!
): [MonitoredJob]

 

 

 

 

Argument | Type | Value
type | String! | "all" to get all jobs, or one of: "GoldenCopy Recall", "Incremental Archive", "GoldenCopy Archive".

Response

 

Schema

 

type MonitoredJob {
  acceptedBytes: Long!
  acceptedFiles: Long!
  archivedBytes: Long!
  archivedFiles: Long!
  duration: String!
  erroredBytes: Long!
  erroredFiles: Long!
  finishedAt: String!
  folderId: String!
  jobId: String!
  skippedBytes: Long!
  skippedFiles: Long!
  startedAt: String!
  state: String!
  type: String!
  changelistFileChangeCount: String
  changelistId: String
  hasAutoRerun: Boolean
  metaAcceptedFiles: Long
  s3WalkBytes: Long
  s3WalkFiles: Long
  success: Boolean
}

 

 

 

Field | Type | Value
acceptedBytes | Long! | Total number of bytes queued.
acceptedFiles | Long! | File count queued.
archivedBytes | Long! | Number of bytes archived.
archivedFiles | Long! | File count archived.
duration | String! | Duration of the job.
erroredBytes | Long! | Total number of bytes that failed.
erroredFiles | Long! | File count errored.
finishedAt | String! | Timestamp when the job finished.
folderId | String! | ID of the folder for this job.
jobId | String! | Unique ID of this job.
skippedBytes | Long! | Total number of bytes skipped.
skippedFiles | Long! | File count skipped.
startedAt | String! | Timestamp when the job was started.
state | String! | Current state of the job.
type | String! | Type of this job.
changelistFileChangeCount | String | For jobs with changelists, number of files in the changelist.
changelistId | String | For jobs with changelists, ID of the changelist.
hasAutoRerun | Boolean | True if this job will auto-spawn a rerun job when complete.
s3WalkBytes | Long | Number of bytes walked on S3.
s3WalkFiles | Long | Number of files walked on S3.
success | Boolean | True if this job has succeeded.
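
Example

List all currently running jobs with their progress counters. This is a minimal sketch; the host and token are placeholders.

curl -s -G -k https://ip.of.gc.node/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query={
  runningSearchJobs(type: "all") {
    jobId
    type
    state
    acceptedFiles
    archivedFiles
    erroredFiles
  }
}'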

 

Viewing job history

Query to view the history of jobs that have run on the system

 

Schema

jobsHistory(
  type: String!,
  folderId: String,
  tail: Long,
  kafkaOffset: Long
): [MonitoredJob]

 

 

 

 

Argument | Type | Value
type | String! | "all" to get all jobs, or one of: "GoldenCopy Recall", "Incremental Archive", "GoldenCopy Archive".
folderId | String | Filter jobs by this folder ID.
tail | Long | Return the most recent number of jobs.
kafkaOffset | Long | Used to paginate data, in combination with tail.

Response

 

Schema

 

type MonitoredJob {
  acceptedBytes: Long!
  acceptedFiles: Long!
  archivedBytes: Long!
  archivedFiles: Long!
  duration: String!
  erroredBytes: Long!
  erroredFiles: Long!
  finishedAt: String!
  folderId: String!
  jobId: String!
  skippedBytes: Long!
  skippedFiles: Long!
  startedAt: String!
  state: String!
  type: String!
  changelistFileChangeCount: String
  changelistId: String
  hasAutoRerun: Boolean
  metaAcceptedFiles: Long
  s3WalkBytes: Long
  s3WalkFiles: Long
  success: Boolean
}

 

 

 

Field | Type | Value
acceptedBytes | Long! | Total number of bytes queued.
acceptedFiles | Long! | File count queued.
archivedBytes | Long! | Number of bytes archived.
archivedFiles | Long! | File count archived.
duration | String! | Duration of the job.
erroredBytes | Long! | Total number of bytes that failed.
erroredFiles | Long! | File count errored.
finishedAt | String! | Timestamp when the job finished.
folderId | String! | ID of the folder for this job.
jobId | String! | Unique ID of this job.
skippedBytes | Long! | Total number of bytes skipped.
skippedFiles | Long! | File count skipped.
startedAt | String! | Timestamp when the job was started.
state | String! | Current state of the job.
type | String! | Type of this job.
changelistFileChangeCount | String | For jobs with changelists, number of files in the changelist.
changelistId | String | For jobs with changelists, ID of the changelist.
hasAutoRerun | Boolean | True if this job will auto-spawn a rerun job when complete.
s3WalkBytes | Long | Number of bytes walked on S3.
s3WalkFiles | Long | Number of files walked on S3.
success | Boolean | True if this job has succeeded.
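
Example

Return the 10 most recent archive jobs for one archived folder. This is a minimal sketch; the host, token, and folderId value are placeholders.

curl -s -G -k https://ip.of.gc.node/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query={
  jobsHistory(type: "GoldenCopy Archive", folderId: "archived-folder-id", tail: 10) {
    jobId
    startedAt
    finishedAt
    state
    success
  }
}'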

 

Cancelling a running job

Mutation to cancel a running job

 

Schema

cancelSearchJob(
  jobId: String!
): [Boolean]

 

 

 

 

Argument | Type | Value
jobId | String! | ID of the job to cancel.

Response

 

Schema

Boolean

 

 

 

Type | Value
Boolean | True if the cancel operation succeeded.
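
Example

Cancel a running job by its jobId. This is a minimal sketch; the host, token, and jobId value are placeholders.

curl -s -G -k https://ip.of.gc.node/graphql \
  -H 'Authorization: Bearer <token>' \
  --data-urlencode 'query=mutation {
  cancelSearchJob(jobId: "job-id-to-cancel")
}'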

 

© Superna Inc