Software Releases

Current Release - Release Notes Golden Copy


What’s New in Superna Eyeglass Golden Copy Release 1.1.7


What’s New in Superna Eyeglass Golden Copy can be found here.


Supported OneFS releases

8.2.x.x

9.1.x.x

9.2.x.x

9.3.x.x


Supported S3 Protocol Targets

Amazon S3 with version 4 authentication is supported (details here)

Dell ECS version 2

Azure Blob services using the S3 version of the authentication protocol

Cohesity 6.3.1e with AWS version 4 signature (ask about other versions). See vendor documentation for versioning support and object retention policy support.

OpenIO - versioning not tested. Requires the --meta-prefix option with a value of oo- when adding folders.

Ceph version 15 (Octopus) or later (AWS v4 signature only)

Google Cloud Storage


End of Life Notifications

End of Life Notifications can be found here.


Deprecation Notifications

Azure default Tier change

In the next release the default tier for Azure upload will change from cold to hot. Tier-specific upload to Azure will require an advanced license.

HTML Summary Report to be deprecated

In the next release the HTML report created using the export command will be deprecated and replaced by a job report that is created and downloadable from the Golden Copy GUI.


Support Removed / Deprecated in this Release

No deprecations in current release.


New in 1.1.7 - 22085

See what's new in previous 1.1.7 builds.

Fixed in 1.1.7 - 22085

T23455 Unable to resume job if incremental is running

If only incremental jobs are running, or a mix of full archive and incremental jobs is running, then after a cluster down/up or isilongateway restart none of the jobs can be resumed. Impact only when incremental jobs are running; if only full archive jobs are running, jobs will be resumed.

Resolution: Incremental jobs no longer affect resuming of jobs on a down/up or isilongateway restart.

New in 1.1.7 - 22076

T19533 Cloud Browser

Ability to browse objects from the Golden Copy GUI including version, date and file size information. Documentation available here.

—————————————————–

T22508 Recall from Cloud Browser BETA (requires Pipeline license)

Ability to recall objects from the Cloud browser.  Documentation available here.

—————————————————–

T21217 Media Mode Symlink and Hard link optimization (requires Pipeline license)

Symlinks and hard links are stored efficiently as object stubs, with only a single copy of the original file in object storage; links are restored on recall. Documentation available here.

—————————————————–

T20030 Pipeline Workflow (requires Pipeline license)

On-demand or scheduled sync to Cloud or from Cloud. Documentation available here.

—————————————————–

T22016 Add/Edit/Delete Folder from Golden Copy GUI BETA

Ability to add, edit and delete folders from the Golden Copy GUI folder view. Documentation available here.

—————————————————–

Robustness Enhancements

T21981 Enhancement for job queue robustness

T21975 Increase archiveworker RAM to 8 GB and run on nodes 2+

—————————————————–

T21606 stats view includes 15 min stats

The searchctl stats view command now adds 15 min interval stats.

—————————————————–

T22554 Notifications include reasonForFailure

The notifications list and email notification now include reasonForFailure information for errored jobs.


Fixed in 1.1.7-22076

T21730/T21687 Parallel Full Archive Jobs may have some stuck jobs

When you start a new full archive job while other full archive jobs are already running, some jobs may get stuck.

Resolution: Parallel archive jobs no longer get stuck.

—————————————————–

 

T21084 The jobs view and jobs running have inconsistent phases

For an incremental archive job, the last phase in the searchctl jobs running output is GC_METADATA, but searchctl jobs view shows additional phases, including a Data Archive phase.


Resolution: The searchctl jobs view and jobs running now have consistent phases.


—————————————————–

 

T19316 searchctl jobs view does not show the size of errored files for incremental archive

The searchctl jobs view command for an incremental job will show a count for any errored files but will always show size as 0B instead of the actual size of errored files.


Resolution: The jobs view command now shows size of errored files.


—————————————————–

T19387/T20731 Incremental sync does not store folder ACL & clears ACL for parent folder

On an incremental sync where a new folder is archived, the associated folder ACLs are not stored with the object properties in the S3 target. Also, an incremental which includes an update to a file or folder clears the ACL for the parent folder. Note this issue does not affect full archive.


Resolution: Folder ACLs are now stored on incremental.


—————————————————–

T21486 The jobs history command does not work after power off/on

After a cluster down and power off/on of the Golden Copy appliance, the jobs history command does not work. An "exception while fetching data" error is displayed.


Resolution: The jobs history can be used after power off/on.


—————————————————–

Not available in 1.1.7

T17181 Archive to AWS Snowball

Archive to AWS Snowball is not supported with this optimized update release. This is planned in a coming update.

—————————————————–

T17195 Upload to Azure, Cohesity, ECS or Ceph via Proxy

Azure, Cohesity, ECS or Ceph clients using an HTTP proxy are not supported in this update.

—————————————————–

T16247 DR cluster alias / redirected recall

Ability to recall to a different cluster than the original source cluster is not supported in this update.

—————————————————–


Technical Advisories

Technical Advisories for all products are available here.



Known Issues

Known Issues Archiving

T14014 Incremental upload requires 2 cycles before picking up changes

For incremental upload, changes are detected by comparing 2 snapshots. After enabling incremental, or for incremental on a newly added folder, 2 incremental upload cycles must run and create 2 different point-in-time snapshots before changes will be detected.

Workaround: none required

—————————————————–

T15312 Archive job incorrectly presented as completed

For the case where all files have been uploaded but there is a larger file that is being uploaded in parts and a part is still in progress, the searchctl jobs running command will not show the job as running even though parts are still uploading.

Workaround: None required. The progress can be viewed in the logs. The final summary.html file, once completed, is correct.

—————————————————–

T16425 Archive incremental upload does not handle changelist in waiting state/queued and incremental fails

A changelist on the PowerScale which is in waiting state, or any other changelist condition that results in the "Job type Changelist is already running or queued" error, is not handled by Golden Copy incremental archiving, which fails the incremental job instead of waiting and applying a timeout. Impact: Changes for that interval will be missed.

Workaround: A full archive can be run which will skip any files already up to date and will update those that require updating.

—————————————————–

T16629 Azure upload error where name contains %

Upload to Azure of a file or folder whose name contains the % character is not handled and will fail.

Workaround: None available.

—————————————————–

T16667 Data integrity audit job size parameter gives incorrect results

Using the searchctl archivedfolders audit command with the --size parameter, the jobs view and stats view do not show any results and not all matching files are audited.

Workaround: Use the searchctl archivedfolders audit command without the --size parameter.

—————————————————–


T17449 Folder missing meta data information for Azure container with legal hold or retention policy

For an Azure container configured with legal hold or a retention policy, uploaded folder objects will be missing the associated metadata for owner, group, mode bits and date stamps, but ACLs are stored and protected. Golden Copy marks this upload as an error but the object is in fact created.

Workaround: None required.

—————————————————–

T18012 Folders with language specific characters not uploaded

Folders with language-specific characters are not uploaded, but the files within the folder are uploaded.

Workaround: None available.

—————————————————–

T18107 Incremental archive job may miss files on restart, jobid lost

A cluster down/up while an incremental archive job is running will not recover any files that have not already been added to the queue for upload. Those files will be missed and also will not be identified for upload on the next incremental cycle. The job id associated with the incremental job is also lost and not available in jobs history.

Workaround: Do not cluster down/up while an incremental archive job is running.

—————————————————–

T18252 Empty folder uploaded as file on Google Cloud Storage

A folder on the file system that has no sub folders or files will be uploaded to Google Cloud Storage (GCS) as a file instead of a folder. This does not impact the upload of the overall archive job. On recall, the empty folder is incorrectly downloaded as a file.

Workaround: None available.

—————————————————–

T18241 Cannot add 2 PowerScale clusters to Golden Copy with the same archivedfolder configuration


If you add 2 clusters to Golden Copy you cannot add the same archivedfolder for both, as it results in a duplicated folder id.


Workaround: Select a unique path for the archivedfolder on each cluster.


—————————————————–

T18979 Incremental Archive issues for files with Zone.Identifier suffix

Under some conditions PowerScale will store files with a Zone.Identifier suffix. These files may be archived without metadata, or may error on archive and not be archived at all.


Workaround: These files can be excluded from archive by adding --excludes=*.Zone.Identifier to the archivedfolder definition, as in the sketch below.
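A minimal illustration only; the archivedfolders add syntax and the --path parameter shown here are assumptions that may differ in your build, so confirm against the folder configuration documentation before use:

searchctl archivedfolders add --path /ifs/data/example --excludes='*.Zone.Identifier'

Quoting the pattern keeps the shell from expanding the wildcard before the command receives it.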


—————————————————–

T19218 Setting to enable delete for Incremental archive not working

The system setting export ARCHIVE_INCREMENTAL_IGNORE_DELETES=false to enable deletes during incremental archive is not working. Deleted files on PowerScale are not deleted from S3.

Workaround: None available.


—————————————————–

T19305 Queued jobs are not managed

Golden Copy executes up to 10 jobs in parallel. If more than 10 jobs are submitted, the remaining jobs are queued and wait for a job slot to become available. Queued jobs are not visible through any commands, such as the command for running jobs, and they do not survive a restart.


Workaround: On restart, any jobs that were queued will need to be restarted. Tracking is available for the 10 jobs that are running, and jobs history for jobs that are completed.


—————————————————–

T19388 Fast Incremental incorrectly stores UID and GID properties

When Fast Incremental is enabled, the UID and GID are crossed and stored against the wrong attribute. UID is incorrectly stored against the group attribute instead of the owner attribute and the GID is incorrectly stored against the owner attribute instead of the group attribute.


Workaround: When evaluating owner and group, use the owner attribute to determine the group and the group attribute to determine the owner.


—————————————————–

T19441 Move/Delete operation in a single incremental sync orphans deleted data in S3 target

Under certain circumstances where in the same incremental update there is a move or rename of a folder and a delete of a sub-folder, the folder move is properly updated on S3 target but the deleted sub-folder is not deleted in S3 target.


Workaround: Orphaned folder can be manually removed from S3 target using native S3 tools.


—————————————————–

T20379 Canceled archive job continues upload

For an archive job that is cancelled while the filesystem walk phase is still in progress, the walk continues after the cancel. If another archive job on the same folder is started while the original snapshot is still present, files from both snapshots will be uploaded. Impact: Any files that are uploaded twice will be skipped if they are already present and uploaded if not. Order of upload is not guaranteed.


Workaround: Please contact support.superna.net for assistance should this situation arise. Planned resolution in 1.1.6.

—————————————————–

T21370 No message when snapshot for upload from snapshot is not in Golden Copy inventory

The archive job which specifies an existing snapshot as the source of the copy requires that the specified snapshot is in the Golden Copy inventory. If an upload job is started without the snapshot in the inventory no error message is displayed.

Workaround: Inventory runs automatically once a day at midnight. If the snapshot specified is not in the inventory, the message will indicate that the job was submitted but there will be no job-id. Ensure that a job-id is returned when starting this job.

—————————————————–

T21586 API error stops incremental job

Under some conditions, an API error to the Powerscale will stop an incremental job.

Workaround: Contact support.superna.net for assistance.

—————————————————–

T21745 s3update with large number of changes may incorrectly delete folder objects with ACL information

Under some conditions, running the s3update command will result in deletion of folder objects for folders that were not modified. This affects only the folder object storing the ACL information; the data itself is still present.

Workaround: None available.

—————————————————–

T21889 Issues with rerun of recall job

Rerun of recall job with meta-data fails to download the errored files.

Workaround: Use native tools to download the errored files.

—————————————————–

T21891 Rerun of Incremental Job does not finish

Rerun of an incremental job does not complete; it stays in the running state even after it has finished and all previously errored files have been copied successfully.

Workaround: Contact support.superna.net for assistance.

—————————————————–

T21687 Parallel Full Archive Jobs may have some stuck jobs

When you add folders and start jobs one after another in a short period of time, jobs can get stuck.

Workaround: After adding a folder and starting a job, wait at least 5 minutes before adding another folder and starting a job.

—————————————————–

T22208 Cannot run a failed job again

If a copy job fails to run, running the job again will also fail.

Workaround: Delete and readd the folder to run the copy again.

—————————————————–

T22925 Cannot use existing folder to copy from DR cluster

In the event that the original cluster used to copy files to object storage becomes unavailable and a 2nd DR cluster is now being used, incremental updates to the originally uploaded objects are not available. Impact: Incremental updates only. Recall of data uploaded from the original cluster to the DR cluster is available. Adding a new folder on the DR cluster for upload is available.

Workaround: Readd the folder for the DR cluster. This requires a full archive for the folder and then incremental.

—————————————————–

T23126 Issues with full and incremental jobs for folders when S3 prefix is used

With ARCHIVE_S3_PREFIX configured there are the following issues for folders:

- On incremental archive, when a folder is deleted the corresponding folder object in object storage is not deleted, resulting in orphaned folder objects - workaround: folder objects must be deleted manually from object storage

- On full archive, folder copy is not skipped when it should be, resulting in unnecessary creation of additional folder objects - workaround: none required

—————————————————–

T23254 Scheduled full archive does not include PowerScale name in object storage structure

The structure created in object storage when scheduled archive is used does not include the Powerscale cluster name as it does when the archive is initiated manually.

Workaround: None available.

—————————————————–

T23275 Archive job with the s3update option does not complete

An archive job with the s3update archive option never completes.

Workaround: The s3update job should not be used in this release. Note that this command should not be used to compare the file system to the bucket; run a full archive to validate that all data is copied.

—————————————————–



Known Issues Reporting

General Reporting Issues

T17647 After restart stats, jobs view, export report, cloud stats not consistent

After a cluster down/up or restart of indexworker, archiveworker or isilongateway containers there will be a mismatch in the stats.

Workaround: None available

—————————————————–

T17932 searchctl jobs view or folder stats may be missing reporting on a small percentage of files uploaded.

The searchctl jobs view command or folder stats command may not properly report all files uploaded to the S3 target.

Workaround: Verify file count directly on S3 target.

—————————————————–

T18587 isilongateway restart may remove jobs history and running jobs information

An isilongateway container restart may result in information on running jobs and jobs history being lost.

Without the running job id, a job cannot be cancelled or rerun.


This issue does not affect archiving of files.  Any job in progress will continue to archive files.


Workaround: To monitor job progress, use the searchctl archivedfolders stats command which relies on folder id as opposed to job id.


—————————————————–

T19136 Jobs View / Export Report do not correctly calculate job run time if job is interrupted


For an archive job that is interrupted - for example cluster down/up while archive job is running - the jobs view and export report show a run time that is shorter than the true duration of the job.


Workaround: None available


—————————————————–

T19466 Statistics may show more than 100% archived/attempted after a cluster down/up

If there is an archive job in progress when a cluster down/up is done, the job continues on cluster up but the jobs view and folder stats may show more than 100% for Archived and Attempted files.


Workaround: The archive job can be run again to ensure all files are uploaded. Any files that are already present on the object storage will show as a skipped statistic.


—————————————————– 

T21186 Statistics may not be accurate when there is a rename operation

If there is an archive job which includes a rename operation, the job and folder stats may not be accurate.


Workaround: Verify in object storage the correct files have been uploaded.


—————————————————–

T21257 The 'file change count' column in jobs running always shows 0 for incremental archive

For incremental archive, the searchctl jobs view command will show the number of files in the changelist, but the searchctl jobs running command always shows a 0 count.


Workaround: Use the searchctl jobs view command to see the number of files in the changelist.


—————————————————–

T20497 The jobs history '--tail' argument not working

Using jobs history with the --tail option may have the following issues:

- not all job types returned

- results not sorted

- more results retrieved than specified in the tail

- error if tail argument exceeds total number of records but results still displayed


Workaround: None available. The jobs history command without the --tail option is working.


—————————————————–

T21572 Export report for folder with multiple jobs may only produce 1 report

If multiple jobs have been run against a folder and then an export job is run for one of the jobs, a subsequent export job for a different job may not generate a report.


Workaround: Use the jobs view command to see the details of the job.


—————————————————–

T21668 Copy from flat file does not report errors

On a copy from a flat file, where the copy encounters errors, jobs view and export incorrectly show the job as 100% success.


Workaround: Use folder stats or jobs view "Errors" stats to see error count.


—————————————————–

T21727 s3update stats include count for updated and unchanged objects

The s3update statistics report combines counts for updated and unchanged objects in some cases, such that the number of updated objects is incorrect. This is a reporting issue only; the correct objects are updated.


Workaround: None available.


—————————————————–

T21888 Issue with errors command

Using the --tail and --count options together for the searchctl archivedfolders errors command does not respect the --count option and returns all errors.


Workaround: None available. Last n lines of errors will still be available for review. Note that --count option without tail is respected.


—————————————————–

T20445 Jobs view count not accurate when parallel jobs are running

When there are parallel jobs running, the jobs view command may show counts for accepted, archived and skipped that are higher than the actual counts.


Workaround: Use the searchctl stats view command to see the statistics for the folders.


—————————————————–

T22125 Folder stats incorrect for delete on incremental jobs

Folder stats tracking deletes on an incremental job are not accurate.


Workaround: None available.


—————————————————–

T22523 Export command error json files may be missing files

The export command correctly reports the number of errored files in the HTML summary report but the corresponding json files may be missing some files.


Workaround: None available


—————————————————–

T23450 Jobs Summary email report always shows running job

The Jobs Summary email report always shows running jobs even when there are no jobs running. When a job is running the running job count is increased accordingly.


Workaround: There are running jobs if the running job count is higher than the count when system is idle.



Recall Reporting Issues

T16960 Rerun recall job overwrites export report

Rerun of a recall job followed by exporting a report will overwrite any previous export report for that folder.


Workaround: The export report from a previous recall can be recreated by running the searchctl archivedfolders export command for the appropriate job id.
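A hedged sketch only; the job id flag shown here is an assumption modeled on the --jobid flag used with the metadata command elsewhere in these notes, so verify the exact export syntax for your build:

searchctl archivedfolders export --jobid <previous recall job id>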


—————————————————–

T17746 Recall reporting issues for metadata only recall

Recalling metadata only for a previous recall job using the command: searchctl archivedfolders metadata --jobid has the following issues:

- The resulting job cannot be monitored using the jobs view --follow command. Running the command results in an error if run against a metadata only recall job. 

- The jobs history view does not list the metadata only recall jobs.

- Export report has a doubled count and errors are not reported accurately


Workaround:

Run the jobs view command multiple times to see progress.

Keep a manual record of the metadata only recall job id.


—————————————————–

T17893 searchctl archivedfolders history incorrectly shows recall job as job type FULL

The output from the searchctl archivedfolders history command will incorrectly show a recall job as job type FULL.

Workaround: searchctl jobs history correctly shows the job type as GoldenCopy Recall.

—————————————————–

T18535 Recall reporting issue for accepted file count / interrupted recall

There is no stat for recall accepted file count. Also, if a recall is interrupted during the walk to build the recall files, the job reports as success even though not all files were recalled.


Workaround: None available


—————————————————–

T18875/T21574 Recall stats may incorrectly show errored count

Under some circumstances a recall job may show stats for Errors when in fact all files were successfully recalled.


Workaround: Use the searchctl archivedfolders errors command to check for errors.  Manual count of files on the Powerscale may also be used to verify the recall.


—————————————————–

T19415 Recall stats incorrectly show errors for folder object which store ACLs

Recall stats and error command incorrectly show errors related to meta data recall for the folder objects created to store folder ACLs.


Workaround: None required. These are not errors associated with the actual folder ACLs. They can be identified in the errors command output: the Metadata apply failed error will be listed against the folder name, where the folder has been prefixed with the PowerScale cluster name.
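A sketch for spotting these entries, assuming the errors command output is plain text that can be piped through standard tools (the flag combination is taken from other workarounds in these notes):

searchctl archivedfolders errors --tail --count 1000 | grep 'Metadata apply failed'

Matches listed against folder names prefixed with the PowerScale cluster name are the benign ACL folder object entries described above.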


—————————————————–

T20500 'jobs view' for recall job not incrementing metadata related stats

The searchctl jobs view command may incorrectly show 0 for metadata related stats.


Workaround: Use the folder stats to see cumulative stats for folder metadata recall.


—————————————————–

T21561/T21577 Recall stats issues

  • For a recall job where some files recalled had an error, the jobs view Accepted and Attempted stats are incorrect.
  • Stats unrelated to the recall may get incremented: FULL/MULTIPART_FILES_ACCEPTED, FULL/MULTIPART_FILES_ARCHIVED
  • jobs view Stats for Count (Recall), Count (Metadata), Errors (Recall), Errors (Metadata) may be incorrect
  • stats view FULL/FILES_ARCHIVED_RECALLED may not be accurate

Workaround: Use the searchctl archivedfolders errors command to check for errors.  Manual count of files on the Powerscale may also be used to verify the recall.


—————————————————–

T21778 Issues with Export for Recall job with meta-data

  • The report incorrectly shows the Job Type as FULL and the report itself is located in the ./full folder.
  • Accepted stats are incorrect
  • snapshot path is incorrect

Recall jobs without metadata export as expected.


Workaround: Export report can be retrieved from the ./full folder but some report contents are incorrect as above.







Known Issues Recall

T16129 Recall from Cohesity may fail where folder or file contain special characters

Recall of files or folders from Cohesity which contain special characters may fail. Job is started successfully but no files are recalled.

Workaround: None available


—————————————————–

T16550 Empty folder is recalled as a file for GCS

Recall from GCS target of an empty folder results in a file on the PowerScale instead of a folder.

Workaround: If the empty directory is required on the file system it will need to be recreated manually.


—————————————————–

T18338 Recall Rate Limit

Golden Copy does not have the ability to rate limit a recall.


Workaround: None available within Golden Copy.


—————————————————–

T18428 Recall for target with S3-prefix result in extra S3 API call per folder

For S3 targets that require a prefix for storing folders, an extra S3 API call is made per folder on recall. This API call results in an error but does not affect the overall recall of files and folders.

Workaround: None required


—————————————————–

T18450 Folder object in S3 that contains folder ACL information incorrectly recalled as a directory when ARCHIVE_S3_PREFIX set


If Golden Copy is configured to apply ARCHIVE_S3_PREFIX on folder objects, on recall the folder object is incorrectly recalled as a directory to the Powerscale filesystem.


Workaround: None required


—————————————————–

T18600 Recall job where recall path is not mounted does not indicate error


No error is displayed if the recall path is not mounted. In this case files may be downloaded to the Golden Copy filesystem, which is not the requested end location and could also result in disk space issues on the Golden Copy VM.


Workaround: Ensure that the mount for the recall path exists prior to starting the recall job. See information here on the mount requirements.

—————————————————–

T19012 Recall of files from Azure fails for files uploaded with Golden Copy earlier than 1.1.4-21050

Files that were uploaded to Azure with Golden Copy build prior to 1.1.4-21050 cannot be recalled back to PowerScale using Golden Copy.

Workaround: Native S3 tools can be used to recall files from Azure.

—————————————————–

T19438 Files may not be recalled

Under some circumstances files may not be recalled without any error indicated in Golden Copy.

Workaround: Files can be manually retrieved using S3 native tools.

—————————————————–

T19649 Meta-data not recalled where AD user/group cannot be resolved

For the case where files are uploaded and the owner or group was returned by the PowerScale API as Unknown User or Unknown Group because that owner/group no longer exists, on recall the Unknown User/Group cannot be resolved and blocks any other metadata from being applied.

Workaround: Metadata in the S3 target can be used to confirm the original metadata settings, and manual steps on the operating system can be used to apply them.

—————————————————–

T21291 Version based recall may not apply folder ACLs when using '--apply-metadata'

When using version based recall, if the parent folder of an object with multiple versions only has 1 version, the parent folder ACLs may not be applied.

Workaround: A reference for the parent folder ACLs is stored as a separate folder object in object storage and can be applied manually.

—————————————————–

T21626 Recall Job for large number of files may get stuck

For a large recall job, Golden Copy may get stuck either with files left to recall or once the recall job is completed.

Workaround: Contact support for assistance.

—————————————————–

T23220 Recall to a different target cluster does not correctly recall objects

Recall to a PowerScale cluster that has been added as a --goldencopy-recall-only cluster cannot be used due to the following issues:

- objects may be targeted for recall to an incorrect location on the file system, resulting in an error on recall or an incorrect location on the filesystem

- meta data is not applied on recall

- searchctl archivedfolders export report Job Type shows as FULL instead of RECALL

Workaround: None available.

—————————————————

T23312 Recall --start-time and --end-time options do not recall any data

Specifying --start-time and / or --end-time option on recall does not return any results.

Workaround: If the data can be identified based on the object timestamp rather than the file metadata, the --older-than and/or --newer-than option can be used.

—————————————————

T23315 Recall --older-than option creates folder structure when there is no matching objects

For a recall using the --older-than option where the input is not in the range of any object timestamp, no files are recalled (as expected), but the folder structure is created on recall and jobs view stats incorrectly show all files as recalled.

Workaround: None available.

—————————————————

T23398 Recall job intermittently fails at step "Create Distributed Kafka topic for LINKS"

Recall jobs can intermittently fail at the "Create Distributed Kafka topic for LINKS" step. The jobs view shows a count for Accepted and Recalled but in fact no objects are recalled.

Workaround: Start the recall job again.

—————————————————

T23474 Recall path incorrect with multiple clusters: Pipeline configuration used for all clusters

If multiple clusters are managed and only one is configured with the recall-source-path for pipeline, the recall-source-path is incorrectly used for the cluster that is not configured for pipeline.

Workaround: Use the recall-source-path as defined for the pipeline cluster as the recall location.


Known Issues Media Mode

T23160 Failed link archives are not recorded in stats

When there is an error on copy of a link, jobs view and stats view do not have any stats related to the errors.

Workaround: If jobs view shows a difference between the accepted and uploaded stats, this is an indication of errors. For link errors, the errors command can be used to view the errors.

—————————————————–

T23159 Hard link percentage saved shows negative value

The jobs view Hardlinks (Size) stat may show a negative percentage saved if the size of the link object json files exceeds the size of the inode actual file.

Workaround: None required.

—————————————————–

T22022 Error on incremental for link object with suffix

With Golden Copy configured to add an ARCHIVE_LINK_SUFFIX to links, incremental archive will result in errors on delete and rename because the suffix is not applied.

Workaround: Link objects in this case should be deleted manually from object storage.

—————————————————–

T22510 Symlink mode always set to 777

When recalling a symlink, the mode is always set to 777 instead of applying the original mode which is stored in the symlink object properties.

Workaround: The symlink object properties store the original mode which then can be applied manually.

—————————————————–

T22488 Object storage still contains file after all hard links and original file have been deleted

If all of the hard links and the related inode file are deleted, the file is still present in the .gcinodes folder in object storage.

Workaround: Object can be manually deleted from the inodes folder in object storage if required.

—————————————————–

T22484 Deleting original hard link file may result in multiple copies of the file in object storage

If a hard link has been copied to object storage and subsequently the original file is deleted, the object storage link object may now store the full original file as well as a copy of the file in the .gcnodes folder in object storage.

Workaround: Objects no longer required may be removed manually from object storage.

—————————————————–

T23475 Error in Media Mode stops archive job

Under some circumstances, for some file system configurations, an error will occur when processing a hard link or symlink, which blocks all processing for the archive job.

Workaround: Contact support.superna.net. In some cases setting --skip-s3-file-exists to true will work around this issue. A resolution is planned for a patch build.

—————————————————–

T23479 Golden Copy GUI does not report statistics for hard links / symlinks processed

The Golden Copy GUI dashboards underreport on files and bytes archived when hard links and symlinks are being processed.

Workaround: Use the jobs view CLI command to see complete statistics.


Known Issues Pipeline

T23045/T22539 Move to Trash / Delete configuration is not working

With Pipeline configured with the recall folder using --source-path and the corresponding settings to enable --trash-after-recall option, objects either downloaded successfully or skipped on recall do not get deleted or moved to recycle bucket.

With Pipeline configured with an archive folder using --delete-from-source and the corresponding system settings enabled, files that are successfully copied are deleted but files that are skipped are not.

Workaround: Objects/files on the source must be removed manually.

—————————————————–

T23036 No stats for Pipeline recall skipped files

The stats view or jobs view command does not report on skipped objects for Pipeline recall folder.

Workaround: None available.

—————————————————–

T23035 Jobs view does not track delete from source progress

The jobs view command does not show any progress status for the delete operation when a folder is configured with --delete-from-source.

Workaround: The stats view command FULL/FILES_DELETE_FROM_SOURCE can be used.
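A one-line sketch, assuming the stats view output is plain text that can be filtered with grep (the output format may vary by build):

searchctl stats view | grep FILES_DELETE_FROM_SOURCE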

—————————————————–

T23034 Verbose list of archivedfolders does not show the --delete-from-source setting

The searchctl archivedfolders list --verbose option does not show the --delete-from-source setting.

Workaround: Use searchctl archivedfolders list without the verbose option to see the setting.

—————————————————–

T22013 Pipeline recall with the --source-path option does not apply meta-data

A Pipeline recall from object to file does not apply meta data from the object properties. Owner and Group are set to nobody.

Workaround: Set owner, group and mode manually.


Known Issues Golden Copy GUI

T23436 Golden Copy GUI folder and all time stats are inconsistent

In some cases, the Golden Copy all time stats will not match the per folder stats.

Workaround: Golden Copy stats view and jobs view can be used to view job progress.

—————————————————–

T23238 Golden Copy GUI not fully restricted for non-admin user

Non-admin users can log in to the Golden Copy GUI and are not fully restricted from admin functionality. A non-admin user can see all configured folders and can add new folders. Impact: Folder management only. A folder can be added but a job cannot be started. Cloud Browser is properly filtered based on the shares for the logged in user.

Workaround: The Golden Copy GUI URL should not be provided to non-administrative users.

—————————————————–

T23151 Issue with selector for archived folder in Golden Copy GUI

The check boxes for archived folders in the Golden Copy GUI have the following issues:

- The check box on the Golden Copy tab does not appear as checked even though it is checked

- Checking the checkbox for an archived folder in the Archived Folder tab automatically checks the same folder on the Golden Copy tab

Workaround: None required. The path displayed below the archived folder list indicates which folder is currently selected.

—————————————————–

T23150 Error on adding or editing archivefolder has no details

When there is an error on adding or editing an archivefolder the message provided is "Error occurs in graphQL request" without any details as to what the actual error was.

Workaround: Browser developer tools can be used to see the details of the graphql output.

—————————————————–

T23131 Golden Copy GUI combines Upload and Recall stats

The Golden Copy GUI shows Upload and Recall stats combined in some parts of the GUI such as the Data Processed (File Count) graph where queued stat includes both uploaded and recalled files.

Workaround: None required.

—————————————————–

T23087 Golden Copy GUI available for Add/Edit/Delete folder for AWS, Azure and ECS

The Golden Copy GUI can only be used to manage folders for AWS, Azure and ECS. Invalid mandatory field requirements prevent management of folders for other targets.

Workaround: Use the CLI to manage folders for other targets.

—————————————————–

T23076 No validations when editing folder from Golden Copy GUI

When editing a folder from the Golden Copy GUI there are no checks for valid input on save. Impact: invalid settings can corrupt folders and block functionality.

Workaround: Manually verify inputs for accuracy.

—————————————————–

T22851 Golden Copy GUI Folder Management GUI Issues

When using the Golden Copy GUI to add/edit/delete folders there are the following GUI issues:

- fields are not cleared after changing cloud type selection - workaround: manually clear or overwrite previously entered values

- secret key shows in plain text - workaround: none available

- path field does not allow copy/paste - workaround: use path selector

- summary window does not show scrollbar - workaround: resize window

—————————————————–

T23002 Golden Copy Cloud Browser filters not displayed

After selecting Filter Results Files or Folders in Cloud Storage Browser, the GUI does not show that a filter is applied.

Workaround: None required.

—————————————————–

T22954 Golden Copy GUI does not correctly show Pipeline license

The license list displayed in the Golden Copy GUI for the PowerScale under management shows UN instead of a Pipeline license.

Workaround: To double check licensing from the command line use the command: searchctl isilons list.

—————————————————–

T22385 Golden Copy GUI shows Search product on some pages

On the login page, and on some tabs once logged in, the product name "search" is displayed instead of "goldencopy". No impact to functionality.

Workaround: None required.


Known Issues General & Administration

T14025 Changing PowerScale user requires a cluster down/up

If the user that was used when adding PowerScale to Golden Copy is changed, sessions are still established with PowerScale using the previous user.

Workaround: A cluster down/up is required to refresh the user being used to connect to PowerScale. Contact support.superna.net for assistance.

—————————————————–

T16640 searchctl schedules uses UTC time

When configuring a schedule using the searchctl schedules command, the time must be entered in UTC.

Workaround: None required
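To avoid entering local time by mistake, the current UTC time can be checked with the standard date command on the appliance or any Linux host before configuring the schedule:

date -u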

—————————————————–

T16855 Archived Folders for Powerscale cluster added with the --goldencopy-recall-only option does not appear in the archivedfolders list command

The searchctl archivedfolders list command does not list folders for Powerscale clusters that were added to Golden Copy using the --goldencopy-recall-only option.

Workaround: Keep a record of the folder id after adding the folder; it can then be referenced in other commands such as searchctl archivedfolders remove.

—————————————————–

T17987 Alarm for cancelled job shows job failed

The description for an alarm for a cancelled job says "Job failed to run" instead of indicating that the job was cancelled.

Workaround: Check the jobs history for the details of the job.

—————————————————–

T21073 Phone home may fail when archive folder path contains characters 'id'

Phone home may fail if there is an archive folder configured with a path that ends in 'id' - for example: /ifs/data/patientid

Workaround: None available.

—————————————————–

T17200 Error on cluster up after power off/on

After power off/on of the Golden Copy VM, the cluster up might fail due to insufficient space for zk-ramdisk.

Workaround: Contact support for assistance.

—————————————————–

T21227 Backup & Restore missing configuration

After backup and restore, the following configurations are not restored:

- searchctl archivedfolders config --checksum

- searchctl notifications (including smtp/channel)

Workaround: Keep an external record of configurations that are not restored. After restore, missing configurations must be manually reapplied.
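A minimal sketch of keeping that external record, assuming both commands print their current settings to standard output (the notifications subcommand shown is an assumption based on the notifications list referenced earlier in these notes):

searchctl archivedfolders getConfig > folder-config-record.txt
searchctl notifications list > notifications-record.txt

After a restore, compare these files against the live settings and reapply anything that is missing.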

—————————————————–

T21894 archivedfolders configure settings are reset after a cluster down/up

After a cluster down/up sequence, customized settings for checksum and snapshot expiry are reset to default values:

- checksum: DEFAULT

- fullArchSnapExpiry: 25

Workaround: Prior to cluster down/up, verify settings by using the command

searchctl archivedfolders getConfig

After cluster up, run the command again to verify the settings are the same. If they need to be modified, use the searchctl archivedfolders options to modify them, following the documentation here.
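A simple sketch of that check, capturing the getConfig output before and after the cluster down/up and comparing it with standard shell tools:

searchctl archivedfolders getConfig > config-before.txt
# perform the cluster down/up here
searchctl archivedfolders getConfig > config-after.txt
diff config-before.txt config-after.txt

Any differences reported by diff indicate settings that were reset and need to be reapplied.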

—————————————————–

T22337 Firewall down after node reboot

A Golden Copy node that is rebooted will no longer have the firewall rules applied.

Workaround: After bringing the node back up run the command:

/opt/superna/eca/scripts/eca_iptables.sh

Check that the firewall is up by executing the command below from another server to confirm no access to port 3000 (replace <node IP> with the IP address of the Golden Copy node that you are running the command against):

 nc -z <node IP> 3000

—————————————————–

T22373 Cannot configure admin user or group with space in name

Cannot add an AD group or user with a space in the name as an administrative user. Running the command to add the administrator group or user results in an exception.

Workaround: For an AD group, the group can be added by replacing the space in the name with %20. For example admin group@x.y would be added using admin%20group@x.y.

For an AD user with a space in the name, no workaround is available.

—————————————————–

T22988 searchctl archivedfolders list --verbose incorrectly displays accesskey

The searchctl archivedfolders list --verbose output incorrectly displays the bucket as the accesskey.

Workaround: searchctl archivedfolders list does correctly show the accesskey field.

—————————————————–

T22957 Modify command does not change Powerscale IP address

The searchctl isilons modify command cannot be used to change the IP address Golden Copy is using to connect to PowerScale. The command will either give an error or will succeed, but the original IP address will continue to be used.

Workaround: Contact support.superna.net for assistance to change IP address.

—————————————————–

T22912 Golden Copy Advanced and Pipeline licenses cannot be removed

The Golden Copy Advanced and Pipeline licenses cannot be removed using searchctl commands. Impact: No correction can be made to loaded licenses.

Workaround: Contact support.superna.net for assistance.

—————————————————–

T22913 Licensing of Powerscale can be lost

A sequence of operations that deletes and readds clusters may result in licenses being lost on a cluster down/up.

Workaround: Contact support.superna.net for assistance.
—————————————————–

T23223 Golden Copy goldencopy-recall-only Powerscale cannot be licensed

A Powerscale cluster that is added to Golden Copy as a goldencopy-recall-only cluster cannot be subsequently licensed for full Golden Copy functionality.

Workaround: Contact support.superna.net for assistance.

—————————————————–

T23382 Verbose archived folders list shows properties no longer used

The searchctl archivedfolders list --verbose output shows the "recallTargetCluster" and "recallTargetPath" properties, which are no longer used.

Workaround: None required.



Known Limitations

T15251 Upload from snapshot requires snapshot to be in Golden Copy Inventory

Golden Copy upload from a PowerScale snapshot requires the snapshot to be in the Golden Copy Inventory. The inventory task is run once a day. If you attempt to start an archive without the snapshot in inventory you will get the error message "Incorrect snapshot provided".

Workaround: Wait for the scheduled inventory to run, or run inventory manually using the command: searchctl PowerScales runinventory

—————————————————–

T15752 Cancel Job does not clear cached files for processing

Any files that were already cached for archive will still be archived once a job has been cancelled.

Workaround: None required. Once cached files are processed there is no further processing.

—————————————————–

T16429 Golden Copy Archiving rate fluctuates

Golden Copy archiving rates may fluctuate over the course of an upload or recall job.

Workaround: None required.

—————————————————–

T16628 Upgrade to 1.1.3 may result in second copy of files uploaded for Azure

In the Golden Copy 1.1.3 release, upload to Azure replaced any special characters in the cluster, file or folder names with "_". In the 1.1.4 release the special characters are handled, so a subsequent upload in 1.1.4 will re-upload any files/folders because the names in S3 are not identical to what was uploaded in 1.1.3. If the cluster name contained a special character - for example Isilon-1 - then all files will be re-uploaded.

Workaround: None

—————————————————–

HTML report cannot be exported twice for the same job

The HTML report cannot be run again after having been previously executed.

Workaround: None required. Use the previously exported report.

—————————————————–

T16250 AWS accelerated mode is not supported

Golden Copy does not support adding AWS with accelerated mode as an S3 target.

—————————————————–

T16646 Golden Copy Job status

When viewing the status of a Golden Copy job it is possible that a job which has a status of SUCCESS contains errors in processing files. The job status is used to indicate whether the job was able to run successfully. The searchctl jobs view, searchctl stats view or HTML report should then be used to determine the details related to the execution of the job, including errors and successes.

—————————————————–

T17173 Debug logging disk space management

If debug logging is enabled, the additional disk space consumed must be managed manually.

—————————————————–

T18640  searchctl archivedfolders errors supported output limit

The searchctl archivedfolders errors command has a supported output limit of 1000.


Workaround: For a longer list, use the --tail --count 1000 or --head --count 1000 options to limit the display.


—————————————————–

Fast Incremental Known Limitations

  • mode bit meta data information is not available in fast incremental mode
  • owner and group are stored in numeric UID and GID format in the object header
  • PowerScale API only available for OneFS 8.2.1 and higher
  • Owner and Group metadata are not recalled where objects were uploaded with fast incremental; due to a bug in the PowerScale API they cannot be recalled. Recall should be done without metadata. There may still be metadata errors on recall without metadata, which can be ignored.

—————————————————–

Move/Rename identification and management in object storage known limitations

  • The PowerScale API used to update the S3 target with the new location of folders and files, and to remove them from the old location, is only available for OneFS 8.2.1 and higher
  • For OneFS versions lower than 8.2.1, moved/renamed objects cannot be identified due to a PowerScale API issue and these will be orphaned in the S3 target.

—————————————————–

Backblaze target requires https access

When configuring a folder for Backblaze, https access must be used; http is not supported.

—————————————————–

T20868 Cannot run incremental update for same folder to multiple targets

If an incremental update for the same folder is run in parallel to multiple targets, only 1 incremental job will run. This impacts incremental update only; parallel full archives for the folder to multiple targets do not have this issue and both complete successfully.

—————————————————–

T21258 Version based recall uses UTC time for inputs

When specifying the --newer-than or --older-than options for version based recall, UTC time must be used.

—————————————————–

T21759 Unable to export recall job multiple times

After an export of a recall job, the same job cannot be exported again.


Workaround: None available.


—————————————————–

T21925 Export reporting limit

When running an export report to view errors or files uploaded, the number of records reported on is limited to 7 million records.


Workaround: Contact support.superna.net if you require reporting on more records. For overall reporting on a job use the Golden Copy GUI or CLI jobs view command.


—————————————————–

T22680 Media Mode Hardlink mode may copy more than 1 copy of the file

For the case where there are multiple hardlinks to the same file, due to parallel processing there may be multiple instances of the real file uploaded until the completion of the copy of the real file is known to all Golden Copy nodes.


Workaround: None available. For a large number of hardlinks to the same file, the number of duplicate files uploaded is expected to be far lower than the number of links.


—————————————————–

T22469 Media mode does not copy broken symbolic links

A symbolic link that is broken, for example due to a renamed or deleted file, is not uploaded by Golden Copy.


Workaround: None available.


—————————————————–

T22630 Media mode may orphan hardlink objects when all Golden Copy nodes are not able to archive

In the event that a multi-node Golden Copy deployment has nodes that are not able to copy for any reason, the result may be orphaned inode objects or orphaned associated hard link objects, because the copy of the partner object is assigned to the node that cannot copy.

Workaround: None available


—————————————————–

T23130 Media mode has no stats for broken symlink

If there are broken symlinks encountered during a Golden Copy job, there are no statistics to track this occurrence.


Workaround: None available

© Superna Inc