Software Releases

Current Release - Release Notes Golden Copy


What’s New in Superna Eyeglass Golden Copy Release 1.1.6


What’s New in Superna Eyeglass Golden Copy can be found here.


Supported OneFS releases

8.1.x.x

8.2.x.x

9.1.x.x


Supported S3 Protocol Targets

Amazon S3 - version 4 of the authentication protocol is supported (details here)

Dell ECS version 2

Azure Blob services using the S3 version of the authentication protocol

Cohesity 6.3.1e with AWS version 4 signature (ask about other versions). See vendor documentation for versioning support and object retention policy support.

OpenIO - versioning not tested. Requires the --meta-prefix option with a value of oo- when adding folders.

Ceph version 15 or later Octopus (aws v4 signature only)

Google Cloud Storage


End of Life Notifications

End of Life Notifications can be found here.


Deprecation Notifications

Azure default Tier change

In the next release, the default tier for Azure uploads will change from cold to hot. Tier-specific upload to Azure will require the advanced license.


New in 1.1.6-21152

  • Backup reporting statistics for Backup paths by folder and rollup stats (Advanced License Required)

  • Native Cloud provider tier support enhancements (Advanced License Required)

    • Allows copying data directly into the target tier without transition costs between tiers.

  • Version Aware Recall Enhancements - per-object-version newer-than and older-than checks to select files based on the closest date match. Supports version-aware S3 targets with bucket versioning enabled (Advanced License Required)

  • Ransomware Defender Zero Trust Backup API integration to allow zero trust backup. Golden Copy checks the threat level on the source data and blocks backups if there are any active alerts. Extends file system real-time monitoring to your backups.

  • Stats Engine Updates - more detailed breakdown of file sizes backed up (Advanced License Required). All-time rollup of stats on a folder. Stats updates for all license levels.

  • Job Engine - Job history is now streamed from a database, and more history is available for all folders configured on the system. Job history can be filtered by folder.

  • Incremental Isilon v2 API support - Supports the OneFS 8.2 and later changelist API to mirror the file system to an S3 bucket with all scenarios covered; a layered approach ensures order-dependent updates are mirrored to the S3 bucket.

  • Flat file copy - Accepts a flat file listing files anywhere under a managed folder, enabling custom copy jobs based on a file list. Use case: build a flat file based on date stamps to back up files last accessed more than x months ago, then script the deletion of those files using the flat file.
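The flat file use case above can be sketched with standard tools. The following is a hedged illustration only: the demo directory, file names, and 180-day cutoff are hypothetical, and the resulting list is what would be fed to a flat file copy job.

```shell
# Hypothetical sketch: build a flat file of files last accessed more than
# ~6 months (180 days) ago, using a throwaway demo directory.
DEMO=$(mktemp -d)
touch -a -d "300 days ago" "$DEMO/old.dat"   # stale: last accessed ~300 days ago
touch "$DEMO/new.dat"                        # fresh: accessed just now
# The flat file lists every matching file anywhere under the folder.
find "$DEMO" -type f -atime +180 > "$DEMO/flat_file.txt"
cat "$DEMO/flat_file.txt"                    # lists only old.dat
```

The same list could later drive a scripted delete of the stale files, as the use case describes.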

Fixed in 1.1.6-21152

—————————————————–

T21026 Incremental Archive does not handle Alternate Data Stream (ADS) Files

The incremental archive job does not identify changes to files with Alternate Data Streams, and therefore these files are not uploaded as part of the incremental job.


Resolution: New, updated and deleted ADS files are now updated in object storage correctly.

—————————————————–

T20001 Issue with errors command


If the archivedfolders errors command is used with the tail and count options and the actual number of errors is less than specified in the count option, the errors command will fail.


Resolution: Condition where the tail parameter exceeds the number of errors is now handled.

—————————————————–

T21144 Incremental Archive may not complete for job which includes rename operation and Azure target

An incremental archive job to Azure may not complete when the job includes rename operations. The job may not finish, and/or objects may not be updated.


Resolution: Rename operations are now handled.

—————————————————–


Not available in 1.1.6

T16667 Data Integrity Audit Job

Data Integrity Audit Job (searchctl archivedfolders audit ....) should not be used in this release. It will be removed in a future release as it is not intended for the Golden Copy base license.

It will require the advanced Golden Copy license.

—————————————————–

T17181 Archive to AWS Snowball

Archive to AWS Snowball is not supported with this optimized update release. This is planned in a coming update.

—————————————————–

T17195 Upload to Azure, Cohesity, ECS or Ceph via Proxy

Azure, Cohesity, ECS or Ceph clients using an HTTP proxy are not supported in this update.

—————————————————–

T21370 Upload from snapshot

An archive job that specifies an existing snapshot as the source of the copy is not supported in this update.

—————————————————–

T16247 DR cluster alias / redirected recall

Ability to recall to a different cluster than the original source cluster is not supported in this update.


Technical Advisories

Technical Advisories for all products are available here.



Known Issues

Known Issues Archiving

T14014 Incremental upload requires 2 cycles before picking up changes

For incremental upload, changes are detected by comparing 2 snapshots. After enabling incremental, or for incremental on a newly added folder, 2 incremental upload cycles must run and create 2 different point-in-time snapshots before changes will be detected.

Workaround: none required

—————————————————–

T15312 Archive job incorrectly presented as completed

For the case where all files have been uploaded but a larger file is still being uploaded in parts and a part is still in progress, the searchctl jobs running command will not show the job as running even though parts are still uploading.

Workaround: None required. The progress can be viewed in the logs. The final summary.html file, once completed, is correct.

—————————————————–

T16427 Incremental archive does not run with multiple Powerscale clusters added

When multiple Powerscale clusters are added to Golden Copy, incremental archive is blocked and does not run.

Workaround: None available.

—————————————————–

T16425 Archive incremental upload does not handle changelist in waiting state and incremental fails

A changelist on the Powerscale which is in waiting state is not handled by Golden Copy incremental archiving, which fails the incremental job instead of waiting and applying a timeout.

Workaround: None required - following incremental interval will pick up the changes.

—————————————————–

T16629 Azure upload error where name contains %

Upload of file or folder with name that contains % character to Azure is not handled and will fail.

Workaround: None available.

—————————————————–

T17449 Folder missing meta data information for Azure container with legal hold or retention policy

For Azure container configured with legal hold or retention policy, upload of folder objects will be missing associated meta data for owner, group, mode bits, date stamps but ACLs are stored and protected. Golden Copy marks this upload as an error but the object is in fact created.

Workaround: None required.

—————————————————–

T17493 Upload of files and folders fail where owner or group meta-data contains language specific characters

For all S3 targets except ECS, if a folder or file meta-data for owner or group contains non-ASCII language specific (Unicode) characters, the file or folder upload will fail.

Workaround: None available. Issue only affects files and folders with above configuration. Other files and folders in upload job continue to be processed.

—————————————————–

T18012 Folders with language specific characters not uploaded

Folders with language specific characters are not uploaded, but the files within the folder are uploaded.

Workaround: None available.

—————————————————–

T18107 Incremental archive job may miss files on restart, jobid lost

A cluster down/up while an incremental archive job is running will not recover any files that have not already been added to the queue for upload. Those files will be missed and also will not be identified for upload on the next incremental cycle. The job id associated with the incremental job is also lost and not available in jobs history.

Workaround: Do not cluster down/up while an incremental archive job is running.

—————————————————–

T18252 Empty folder uploaded as file on Google Cloud Storage

A folder on the file system that has no sub folders or files will be uploaded to Google Cloud Storage (GCS) as a file instead of a folder. This does not impact the overall archive job. On recall, the empty folder is incorrectly downloaded as a file.

Workaround: None available.

—————————————————–

T18241 Cannot add 2 Powerscale clusters to Golden Copy with same archivedfolder configuration


If you add 2 clusters to Golden Copy, you cannot add the same archivedfolder for both, as doing so results in a duplicated folder id.


Workaround: Select unique path for archivedfolder for each cluster.


—————————————————–

T18979 Incremental Archive issues for files with Zone.Identifier suffix

Under some conditions PowerScale will store files with a Zone.Identifier suffix. These files may be archived without meta-data, or may error on archive and not be archived at all.


Workaround: These files can be excluded from archive by adding " --excludes=*.Zone.Identifier" to the archivedfolder definition.


—————————————————–

T19218 Setting to enable delete for Incremental archive not working

The system setting export ARCHIVE_INCREMENTAL_IGNORE_DELETES=false to enable deletes during incremental archive is not working. Deleted files on PowerScale are not deleted from S3.

Workaround: None available.


—————————————————–

T19305 Queued jobs are not managed

Golden Copy executes up to 10 jobs in parallel. If more than 10 jobs are submitted, the remaining jobs are queued, waiting for a job slot to become available. Queued jobs are not visible through any commands, such as the running jobs command, and they do not survive a restart.


Workaround: On restart any jobs that were queued will need to be restarted. Tracking is available for the 10 jobs that are running and jobs history for jobs that are completed.


—————————————————–

T19387/T20731 Incremental sync does not store folder ACL & clears ACL for parent folder

On incremental sync where a new folder is archived, the associated folder ACLs are not stored with the object properties in the S3 target. Also, an incremental sync that includes an update to a file or folder clears the ACL for the parent folder. Note this issue does not affect full archive.


Workaround: None available. Manual process required to track folder ACLs.


—————————————————–

T19388 Fast Incremental incorrectly stores UID and GID properties

When Fast Incremental is enabled, the UID and GID are crossed and stored against the wrong attribute. UID is incorrectly stored against the group attribute instead of the owner attribute and the GID is incorrectly stored against the owner attribute instead of the group attribute.


Workaround: When evaluating owner and group, use the owner attribute to determine the group and the group attribute to determine the owner.


—————————————————–

T19441 Move/Delete operation in a single incremental sync orphans deleted data in S3 target

Under certain circumstances where in the same incremental update there is a move or rename of a folder and a delete of a sub-folder, the folder move is properly updated on S3 target but the deleted sub-folder is not deleted in S3 target.


Workaround: Orphaned folder can be manually removed from S3 target using native S3 tools.


—————————————————–

T20379 Canceled archive job continues upload

For an archive job that is cancelled while the filesystem walk phase is still in progress, the walk continues after the cancel. If another archive job on the same folder is started while the original snapshot is still present, files from both snapshots will be uploaded. Impact: any files that are uploaded twice will be skipped if they are already present and uploaded if not. Order of upload is not guaranteed.


Workaround: Please contact support.superna.net for assistance should this situation arise. Planned resolution in 1.1.6.

—————————————————–




Known Issues Reporting

General Reporting Issues

T17932 searchctl jobs view or folder stats may be missing reporting on small percentage of files uploaded.

The searchctl jobs view command or folder stats command may not properly report all files uploaded to the S3 target.

Workaround: Verify file count directly on S3 target.

—————————————————–

T18587 isilongateway restart may remove jobs history and running jobs information

An isilongateway container restart may result in information on running jobs and jobs history to be lost.

Without the running job id, a job cannot be canceled or rerun.


This issue does not affect archiving of files.  Any job in progress will continue to archive files.


Workaround: To monitor job progress, use the searchctl archivedfolders stats command which relies on folder id as opposed to job id.


—————————————————–

T18876 Jobs history deleted after cluster up if s3 stats job run

Subsequent cluster down/up after searchctl archivedfolders s3stats command was run deletes all entries in the job history.

Impact: without the job-id from the history, the job cannot be re-run or cancelled, and job statistics cannot be viewed.


Workaround: Folder stats are available for summary view of archive statistics for a folder.

—————————————————–

T19136 Jobs View / Export Report do not correctly calculate job run time if job is interrupted


For an archive job that is interrupted - for example cluster down/up while archive job is running - the jobs view and export report show a run time that is shorter than the true duration of the job.


Workaround: None available


—————————————————–

T19137 Export report does not report failed and skipped files

For an archive job where there are failed and skipped files the export report shows 100% success.


Workaround: The jobs view command for the archive job does correctly report on the errored and skipped files.


—————————————————–

T19466 Statistics may show more than 100% archived/attempted after a cluster down/up

If there is an archive job in progress when a cluster down/up is done, the job continues on cluster up but the jobs view and folder stats may show more than 100% for Archived and Attempted files.


Workaround: The archive job can be run again to ensure all files are uploaded. Any files that are already present on the object storage will show as a skipped statistic.


—————————————————– 

T21186 Statistics may not be accurate when there is a rename operation

If there is an archive job which includes a rename operation, the job and folder stats may not be accurate.


Workaround: Verify in object storage the correct files have been uploaded.


—————————————————–

T21257 The 'file change count' column in jobs running always shows 0 for incremental archive

For incremental archive, the searchctl jobs view command will show the number of files in the changelist, but the searchctl jobs running command always shows a 0 count.


Workaround: Use the searchctl jobs view command to see the number of files in the changelist.


—————————————————– 

T21084 The jobs view and jobs running have inconsistent phases

For an incremental archive job, the searchctl jobs running output last phase is GC_METADATA but the searchctl jobs view shows additional phases including a phase for Data Archive.

Workaround: Use the searchctl jobs view command to see the status of all phases.


—————————————————– 

T19316 searchctl jobs view does not show the size of errored files for incremental archive

The searchctl jobs view command for an incremental job will show a count for any errored files but will always show size as 0B instead of the actual size of errored files.

Workaround: Use folder stats to see cumulative stats for a folder.


—————————————————–

T20497 Issues with jobs history with '--tail' argument

Using jobs history with the --tail option may have the following issues:

- results are not sorted

- more results are retrieved than specified in the tail

- an error occurs if the tail argument exceeds the total number of records, though results are still displayed


Workaround: None available


—————————————————–

T21357 Export job may not create report

For incremental archive jobs, archive jobs with errors, or archive jobs from a flat file, the export report may not be created.

Workaround: None available. A fix is planned in a patch release.



Recall Reporting Issues

T16960 Rerun recall job overwrites export report

Rerun of a recall job followed by exporting a report will overwrite any previous export report for that folder.


Workaround: The export report from a previous recall can be recreated by running the searchctl archivedfolders export command for the appropriate job id.


—————————————————–

T17746 Recall reporting issues for metadata only recall

Recalling metadata only for a previous recall job using the command: searchctl archivedfolders metadata --jobid has the following issues:

- The resulting job cannot be monitored using the jobs view --follow command. Running the command results in an error if run against a metadata only recall job. 

- The jobs history view does not list the metadata only recall jobs.

- The export report has doubled counts, and errors are not reported accurately


Workaround:

Run the jobs view command multiple times to see progress.

Keep a manual record of the metadata only recall job id.


—————————————————–

T17893 searchctl archivedfolders history incorrectly shows recall job as job type FULL

The output from the searchctl archivedfolders history command will incorrectly show a recall job as job type FULL.

Workaround: searchctl jobs history correctly shows the job type as GoldenCopy Recall.

—————————————————–

T18535 Recall reporting issue for accepted file count / interrupted recall

There is no stat for recall accepted file count. Also, if a recall is interrupted during the walk that builds the list of recall files, the job reports as success even though not all files were recalled.


Workaround: None available


—————————————————–

T18875 Recall stats may incorrectly show errored count

Under some circumstances a recall job may show stats for Errors when in fact all files were successfully recalled.


Workaround: Use the searchctl archivedfolders errors command to check for errors.  Manual count of files on the Powerscale may also be used to verify the recall.


—————————————————–

T19357 Export Report not generated for a recall job

Running an export report for a recall job shows a job status of SUCCESS, but the export summary report is not generated.


Workaround: Use the jobs view command for details of the recall job.


—————————————————–

T19415 Recall stats incorrectly show errors for folder object which store ACLs

The recall stats and errors commands incorrectly show errors related to meta-data recall for the folder objects created to store folder ACLs.


Workaround: None required. These are not errors associated with the actual folder ACLs. They can be identified in the errors command output: the Metadata apply failed error is listed against a folder name that has been prefixed with the PowerScale cluster name.


—————————————————–

T20500 'jobs view' for recall job not incrementing meta data related stats

The searchctl jobs view command may incorrectly show 0 for metadata related stats.


Workaround: Use the folder stats to see cumulative stats for folder metadata recall.



Known Issues Recall

T16129 Recall from Cohesity may fail where folder or file contain special characters

Recall of files or folders from Cohesity which contain special characters may fail. Job is started successfully but no files are recalled.

Workaround: None available


—————————————————–

T16550 Empty folder is recalled as a file for GCS

Recall from GCS target of an empty folder results in a file on the PowerScale instead of a folder.

Workaround: If the empty directory is required on the file system it will need to be recreated manually.


—————————————————–

T18338 Recall Rate Limit

Golden Copy does not have the ability to rate limit a recall.


Workaround: None available within Golden Copy.


—————————————————–

T18428 Recall for target with S3-prefix result in extra S3 API call per folder

For S3 targets that require a prefix for storing folders, an extra S3 API call is made per folder on recall. This API call results in an error but does not affect the overall recall of files and folders.

Workaround: None required


—————————————————–

T18450 Folder object in S3 that contains folder ACL information incorrectly recalled as a directory when ARCHIVE_S3_PREFIX set


If Golden Copy is configured to apply ARCHIVE_S3_PREFIX on folder objects, on recall the folder object is incorrectly recalled as a directory to the PowerScale filesystem.


Workaround: None required


—————————————————–

T18600 Recall Job where recall path is not mounted does not indicate error


No error is displayed if recall path is not mounted. In this case files may be downloaded to the Golden Copy filesystem which is not the requested end location and could also result in disk space issues on the Golden Copy VM.


Workaround: Ensure that mount for recall path exists prior to starting recall job.  See information here on the mount requirements.

—————————————————–

T19012 Recall of files from Azure fails for files uploaded with Golden Copy earlier than 1.1.4-21050

Files that were uploaded to Azure with Golden Copy build prior to 1.1.4-21050 cannot be recalled back to PowerScale using Golden Copy.

Workaround: Native S3 tools can be used to recall files from Azure.

—————————————————–

T19438 Files may not be recalled

Under some circumstances files may not be recalled without any error indicated in Golden Copy.

Workaround: Files can be manually retrieved using S3 native tools.

—————————————————–

T19649 Meta-data not recalled where AD user/group cannot be resolved

For the case where files were uploaded and the owner or group was returned by the PowerScale API as Unknown User or Unknown Group because that owner/group no longer exists, on recall the Unknown User/Group cannot be resolved, which blocks any other meta-data from being applied.

Workaround: Meta-data in the S3 target can be used to confirm the original meta-data settings, and manual steps on the operating system can be used to apply them.
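As a hedged illustration of the manual operating-system step, mode bits read from the object meta-data could be re-applied with standard commands. The file and the 0644 mode below are hypothetical stand-ins; chown for owner/group would work the same way but requires the target accounts to exist.

```shell
# Hypothetical sketch: re-apply mode bits taken from the S3 object meta-data.
RESTORED=$(mktemp)          # stand-in for a recalled file
chmod 0644 "$RESTORED"      # mode recorded in the object's meta-data
stat -c %a "$RESTORED"      # prints 644
```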

—————————————————–

T21291 Version based recall may not apply folder ACLs when using '--apply-metadata'

When using version based recall, if the parent folder of an object with multiple versions only has 1 version, the parent folder ACLs may not be applied.

Workaround: Reference copies of parent folder ACLs are stored as separate folder objects in object storage and can be applied manually.



Known Issues General & Administration

T14025 Changing PowerScale user requires a cluster down/up

If the user that was used when adding PowerScale to Golden Copy is changed, sessions are still established with PowerScale using the previous user.

Workaround: A cluster down/up is required to refresh user being used to connect to PowerScale. Contact support.superna.net for assistance.

—————————————————–

T16640 searchctl schedules uses UTC time

When configuring a schedule using the searchctl schedules command, the time must be entered as UTC time.

Workaround: None required
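A local schedule time can be converted to UTC with standard tools before entering it. The timezone and date below are illustrative assumptions, and the syntax is GNU date specific:

```shell
# Convert 02:00 America/New_York (EST, UTC-5 in winter) to UTC.
TZ=UTC date -d 'TZ="America/New_York" 2024-01-15 02:00' +%H:%M
# prints 07:00
```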

—————————————————–

T16855 Archived folders for Powerscale clusters added with the --goldencopy-recall-only option do not appear in the archivedfolders list command

The searchctl archivedfolders list command does not list folders for Powerscale clusters that were added to Golden Copy using the --goldencopy-recall-only option.

Workaround: Keep a record of the folder id after adding the folder; it can then be referenced in other commands such as searchctl archivedfolders remove.

—————————————————–

T17987 Alarm for cancelled job shows job failed

The description for an alarm for a cancelled job says "Job failed to run" instead of indicating that the job was cancelled.

Workaround: Check the jobs history for the details of the job.

—————————————————–

T20175 Beta GUI not available

On Golden Copy 1.1.4-21105 and higher the Beta GUI is not available due to searchmw container restarting.

Workaround: None available. Delivery of the GUI is planned for 1.1.6 Golden Copy.

—————————————————–

T21073 Phone home may fail when archive folder path contains characters 'id'

Phone home may fail if there is an archive folder configured with a path that ends in 'id' - for example: /ifs/data/patientid

Workaround: None available.

—————————————————–

T17200 Error on cluster up after power off/on

After power off/on of the Golden Copy VM, the cluster up might fail due to insufficient space for zk-ramdisk.

Workaround: Contact support for assistance.

—————————————————–

T21227 Backup & Restore missing configuration

After backup and restore, the following configurations are not restored:

- searchctl archivedfolders config --checksum

- searchctl notifications (including smtp/channel)

Workaround: Keep an external record of configurations that are not restored. After restore, missing configurations must be manually reapplied.

—————————————————–


Known Limitations

T15251 Upload from snapshot requires snapshot to be in Golden Copy Inventory

Golden Copy upload from a PowerScale snapshot requires the snapshot to be in the Golden Copy inventory. The inventory task runs once a day. If you attempt to start an archive without the snapshot in inventory, you will get the error message "Incorrect snapshot provided".

Workaround: Wait for the scheduled inventory to run, or run inventory manually using the command: searchctl PowerScales runinventory

—————————————————–

T15752 Cancel Job does not clear cached files for processing

Any files that were already cached for archive will still be archived even after the job has been cancelled.

Workaround: None required. Once cached files are processed there is no further processing.

—————————————————–

T16429 Golden Copy Archiving rate fluctuates

Golden Copy archiving rates may fluctuate over the course of an upload or recall job.

Workaround: None required.

—————————————————–

T16628 Upgrade to 1.1.3 may result in second copy of files uploaded for Azure

In the Golden Copy 1.1.3 release, upload to Azure replaced any special characters in the cluster, file or folder names with "_". In the 1.1.4 release the special characters are handled, so a subsequent upload in 1.1.4 will re-upload any files/folders whose names in S3 are not identical to what was uploaded in 1.1.3. If the cluster name contained a special character - for example Isilon-1 - then all files will be re-uploaded.

Workaround: None

—————————————————–

HTML report cannot be exported twice for the same job

The HTML report cannot be run again after having been previously executed.

Workaround: None required. Use the previously exported report.

—————————————————–

T16250 AWS accelerated mode is not supported

Golden Copy does not support adding AWS with accelerated mode as an S3 target.

—————————————————–

T16646 Golden Copy Job status

When viewing the status of a Golden Copy job, it is possible that a job with a status of SUCCESS contains errors in processing files. The job status indicates whether the job was able to run. The searchctl jobs view command, searchctl stats view command, or the HTML report should be used to determine the details of the job execution, including errors and successes.

—————————————————–

T17173 Debug logging disk space management

If debug logging is enabled, the additional disk space consumed must be managed manually.

—————————————————–

T18640  searchctl archivedfolders errors supported output limit

The searchctl archivedfolders errors command has a supported output limit of 1000.


Workaround: For a longer list, use the --tail --count 1000 or --head --count 1000 options to limit the display.


—————————————————–

Fast Incremental Known Limitations

  • mode bit meta data information is not available in fast incremental mode
  • owner and group are stored in numeric UID and GID format in the object header
  • PowerScale API only available for OneFS 8.2.1 and higher
  • Owner and Group meta-data are not recalled where objects were uploaded with fast incremental; due to a bug in the PowerScale API they cannot be recalled. Recall should be done without meta-data. There may still be meta-data errors on a recall without meta-data, which can be ignored.
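Since fast incremental stores owner and group as numeric UID and GID values in the object header, they can be mapped back to names on any Linux host with standard tools. UID/GID 0 is used below only because it resolves on every system; real values from an object header would be substituted:

```shell
# Map a numeric UID (as stored in the object header) back to a user name.
id -un 0                        # prints root
# getent works for groups as well:
getent group 0 | cut -d: -f1    # prints root on most Linux systems
```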

—————————————————–

Move/Rename identification and management in object storage known limitations

  • The PowerScale API for updating the S3 target with the new location of folders and files, and removing folders and files from the old location, is only available for OneFS 8.2.1 and higher
  • For OneFS versions lower than 8.2.1, moved/renamed objects cannot be identified due to a PowerScale API issue, and these will be orphaned in the S3 target.

—————————————————–

Backblaze target requires https access

When configuring a folder for Backblaze, https access must be used; http is not supported.

—————————————————–

T20868 Cannot run incremental update for same folder to multiple targets

If an incremental update for the same folder is run in parallel to multiple targets, only 1 incremental job will run. This impacts incremental updates only; parallel full archives of the folder to multiple targets do not have this issue, and both complete successfully.

—————————————————–

T21258 Version based recall uses UTC time for inputs

When specifying the --newer-than or --older-than options for version based recall, UTC time must be used.

© Superna LLC