Software Releases

Current Release - Release Notes Golden Copy

What’s New in Superna Eyeglass Golden Copy Release 1.1.4


What’s New in Superna Eyeglass Golden Copy can be found here.


Supported OneFS releases

8.1.x.x

8.2.x.x

9.1.x.x


Supported S3 Protocol Targets

Amazon S3 using version 4 of the authentication protocol is supported (details here)

Dell ECS version 2

Azure Blob services using the S3 version of the authentication protocol

Cohesity 6.3.1e with AWS version 4 signature (ask about other versions). See vendor documentation for versioning support and object retention policy support.

OpenIO - versioning not tested. Requires the --meta-prefix option with a value of oo- when adding folders (see the example after this list).

Ceph version 15 (Octopus) or later (AWS v4 signature only)

Google Cloud Storage
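
An illustrative sketch of the OpenIO prefix requirement noted above; the add-folder command form and the placeholder arguments are assumptions - only the --meta-prefix option and the oo- value come from this list:

searchctl archivedfolders add <folder options> --meta-prefix oo-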


End of Life Notifications

End of Life Notifications can be found here.


Deprecation Notifications

Azure default Tier change

In the next release, the default tier for Azure upload will change from cold to hot. Tier-specific upload to Azure will require an advanced license.


New/Fixed in 1.1.4-21074

New

New - T18492 Fast Incremental

For large changelists, a fast incremental mode is now available that takes the owner and group metadata provided in the changelist instead of making a separate API call to PowerScale to retrieve the metadata. This feature is available for OneFS 8.2.1 and higher, as it requires the newer API version available in those releases. This mode requires additional configuration to enable; contact support.superna.net for assistance.

Known Limitations:

  • Mode bit metadata is not available in fast incremental mode
  • Owner and group are stored in numeric UID and GID format in the object header
  • The API is only available for OneFS 8.2.1 and higher

Fixed

T16368 Recall of folder ACLs

Recall of folder ACLs is now available as of 1.1.4-21074. Original folder ACLs are stored in the S3 target and can be reviewed there. On recall, folder ACLs are applied to the folder on PowerScale. Enabling ACL recall requires additional configuration; contact support.superna.net for assistance.

—————————————————–

T17560 Renamed Folders and Files orphaned in S3 target

Due to a bug in the PowerScale changelist API for OneFS 8.2.0 and earlier (internal Dell bug #234779), renamed folders and files are not reported, leaving files and folders in the old location orphaned in the S3 target. This can lead to higher object counts in S3 due to duplicated records.

Resolution: Golden Copy can now be configured to use the PowerScale changelist API for OneFS 8.2.1 and higher, which identifies renamed folders and files. In this configuration, the original folder/file is deleted from the S3 target and the object is archived in its new location.

—————————————————–

T18396 / T18880 Manual incremental job results in 2 running jobs

Starting a manual incremental job results in 2 running jobs. The Isilon Incremental Archive job is a parent job. The actual incremental work is tracked by the second incremental archive - <uuid> job. Jobs view should be run against the incremental archive - <uuid> job to track incremental progress. Jobs view against the parent Incremental Archive job results in an error.


Resolution: Only 1 job is displayed now for manual incremental which is the active job.


—————————————————–

T19129 rerun job does not exit

The rerun job to upload errored files after an archive job does not exit after all of the files identified for the job have been uploaded.


Resolution: Job now terminates once completed without any manual intervention.


—————————————————–


New/Fixed in 1.1.4-21062

New

New - T17155 Recall

Recall of all archived files, or of a sub-directory of archived files, from the S3 target to a staging area on the same PowerScale cluster is now available. Documentation on Recall and its supported options can be found here in the section "How to Recall Data from Object Back to File".

—————————————————–

New - T17490 Google Cloud Storage (GCE) Support

Support for archive to Google Cloud Storage (GCE) is now available. Golden Copy for Google Cloud Storage documentation can be found here in the section "Archiving to Google Cloud Storage".

—————————————————–

New - T18499 Security update to reduce impact of CVE-2020-25684, CVE-2020-25685, and CVE-2020-25686

The dnsmasq cache is disabled to reduce the impact of CVE-2020-25684, CVE-2020-25685, and CVE-2020-25686.

—————————————————–

New - T18077 searchctl archivedfolders archive has new --follow flag

The command to start an archive job, searchctl archivedfolders archive, now has a --follow option which opens the jobs view after starting the job so that the progress of the newly started job can be monitored more easily.
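
For example, to start an archive job and immediately follow its progress (the folder selection argument shown is a placeholder; see the archive command documentation for the exact options):

searchctl archivedfolders archive <folder options> --follow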

—————————————————–

New - T17475 Golden Copy Beta GUI

The Golden Copy and Archived Folder Beta GUI is now available.


Fixed

T17677 Load balancing for ECS nodes not available

The option to set load balancing of Golden Copy requests across multiple ECS nodes was not available prior to this build.

Resolution: Golden Copy can now load balance archiving across multiple ECS nodes using the --endpoint-ips option in this build.
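
An illustrative sketch of the option; where it is supplied and the IP list format are assumptions - only the --endpoint-ips option name comes from this note:

searchctl archivedfolders add <folder options> --endpoint-ips <ECS node IP list>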

—————————————————–

T18164 Empty archive folder blocks all archive jobs

If an empty folder is added to Golden Copy and an archive job is started on that folder, the archive job will hang and block all other archive jobs.

Resolution: An empty folder added as an archive folder no longer causes the job to hang when it is started.

—————————————————–

T18064 Archive rerun job does not upload errored files

The rerun job to upload errored files can be started but no files are uploaded.

Resolution: The rerun job now works and can be used to upload errored files from a completed archive job.

—————————————————–

T15689 Command to manually initiate an incremental upload does not run

The command searchctl archivedfolders archive --incremental encounters an error which prevents it from running.

Resolution: Incremental archive job can now be initiated by using the --incremental option as documented here in the section "How to start a Full or Incremental Archive Job".
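
For example (the folder selection arguments are placeholders):

searchctl archivedfolders archive <folder options> --incremental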

—————————————————–

T18007 Export report hangs

Export report job can be started but never finishes.

Resolution: Export job is now able to run and finish.

—————————————————–

T18275 Job marked as completed while it is still running

Under certain circumstances a job may be incorrectly identified as completed and marked as success when in fact it is still running.

Resolution: Job now correctly reports running and completed status.

—————————————————–

T17200 PowerScale and Archive Folder configuration gone after Golden Copy Power Off/On

After a power Off/On of the Golden Copy VM, previously added clusters and folders are no longer configured when Golden Copy comes back up.

Resolution: After a power Off/On, run the following commands to restore prior configuration:

ecactl cluster down

ecactl cluster up

—————————————————–

T15758 Unable to restore Golden Copy configuration to a new VM

The ecactl cluster restore command runs but does not actually restore any of the Golden Copy configuration to the new VM.

Resolution: Backup & restore operations now restore:

- licensing

- clusters added

- archive folders added & definitions

- eca-env-common.conf custom settings

Job history and reports are not restored.

For a multi-node deployment, the restore is done to node 1 and the remaining nodes are deployed as new OVFs.

—————————————————–

T18113 Additional steps required to shape bandwidth of archive jobs

Additional configuration beyond the steps documented here was required to shape the bandwidth of archive jobs.

Resolution: Additional steps are no longer required.  Steps as documented can be used to shape bandwidth usage for archive jobs.


Not available in 1.1.4-21062/21074

T16368 Recall of folder ACLs (applies to 1.1.4-21062 only)

Recall of folder ACLs is not available in this release. Original folder ACLs are stored in the S3 target and can be reviewed there.

—————————————————–

T16667 Data Integrity Audit Job

Data Integrity Audit Job (searchctl archivedfolders audit ...) should not be used in this release. It will be removed in a future release as it is not intended for the Golden Copy base license; it will require the advanced Golden Copy license.

—————————————————–

T17181 Archive to AWS Snowball

Archive to AWS Snowball is not supported with this optimized update release. This is planned in a coming update.

—————————————————–

T17195 Upload to Azure, Cohesity, ECS or Ceph via Proxy

Azure, Cohesity, ECS or Ceph clients using an HTTP proxy are not supported in this update.

—————————————————–

T18090 Azure option to specify tier for data copy

The option to specify the Azure tier that is the target of upload should not be used. It will be removed in a future release as it is not intended for the Golden Copy base license. It will require the advanced Golden Copy license.




New/Fixed in 1.1.4-21002

New

New: T18032 Improved Upload Performance

Build 1.1.4-21002 comes with improved upload performance.

—————————————————–

New: 6 node Golden Copy Deployment

Golden Copy can now be deployed as a 6 node cluster for increased archive performance.

—————————————————–

New: T16242 Authenticated login log (Pending Testing)

A log that provides a record of authenticated logins to Golden Copy is now available.

—————————————————–


Fixed 

T17991 Record reprocessing & parallel jobs

Single or parallel archive jobs may queue files multiple times for upload. Files that appear multiple times in the queue are not re-uploaded but are marked as skipped.

Resolution: Management of the archive queue has been updated to avoid files being added to the queue multiple times. This fix also provides support for parallel archive jobs up to a maximum of 3.

—————————————————–

T17182, T17692 Rate limiting for multi node deployments

Rate limiting cannot be applied to Golden Copy VMs in a multi-node Golden Copy deployment as it will cause issues such as re-archiving of files.

Resolution: Golden Copy now has a traffic shaping capability for bandwidth management, coordinating file uploads so that over time the bandwidth averages out to the desired rate. As a result, at any point in time the bandwidth usage may exceed the setting for short bursts, depending on infrastructure bandwidth capabilities. Additional information for this feature is available here. This solution applies to single and multi-VM deployments.

—————————————————–

T15810 Archive job affected by external issues

Under certain conditions external to Golden Copy, such as persistent sustained networking issues, permission issues or DNS resolution issues, an upload archive job will continue to process files for upload, resulting in errored uploads for those files. The job automatically resumes once the external condition has been resolved. In this case, stats may not properly reflect failed files.

Resolution: External issue handling has been improved.

—————————————————–


Not available in 1.1.4-21002

T17155 Recall of files from S3 target using Golden Copy

This is planned in a coming update. Files can still be recalled using native S3 tools.

—————————————————–

T17181 Archive to AWS Snowball

Archive to AWS Snowball is not supported with this optimized update release. This is planned in a coming update.

—————————————————–

T17195 Upload to Azure, Cohesity, ECS or Ceph via Proxy

Azure, Cohesity, ECS or Ceph clients using http proxy not supported in this update.

—————————————————–

The Golden Copy and Archived Folder GUI view is not available.


Technical Advisories

Technical Advisories for all products are available here.



Known Issues

Known Issues Archiving

T14014 Incremental upload requires 2 cycles before picking up changes

For incremental upload, changes are detected by comparing 2 snapshots. After enabling incremental, or for a newly added folder, 2 incremental upload cycles must run and create 2 different point-in-time snapshots before changes will be detected.

Workaround: none required

—————————————————–

T15312 Archive job incorrectly presented as completed

For the case where all files have been uploaded but a larger file is still being uploaded in parts and a part is still in progress, the searchctl jobs running command will not show the job as running even though parts are still uploading.

Workaround: None required. The progress can be viewed in the logs. The final summary.html file, once the job has completed, is correct.

—————————————————–

T16427 Incremental archive does not run with multiple PowerScale clusters added

When multiple PowerScale clusters are added to Golden Copy, incremental archive is blocked and does not run.

Workaround: None available.

—————————————————–

T16425 Archive incremental upload does not handle changelist in waiting state and incremental fails

A changelist on the PowerScale that is in a waiting state is not handled by Golden Copy incremental archiving; the incremental job fails instead of waiting and applying a timeout.

Workaround: None required - the following incremental interval will pick up the changes.

—————————————————–

T16629 Azure upload error where name contains %

Upload to Azure of a file or folder whose name contains the % character is not handled and will fail.

Workaround: None available.

—————————————————–

T17449 Folder missing meta data information for Azure container with legal hold or retention policy

For an Azure container configured with a legal hold or retention policy, uploaded folder objects will be missing the associated metadata for owner, group, mode bits and date stamps, but ACLs are stored and protected. Golden Copy marks this upload as an error but the object is in fact created.

Workaround: None required.

—————————————————–

T17493 Upload of files and folders fail where owner or group meta-data contains language specific characters

For all S3 targets except ECS, if the folder or file metadata for owner or group contains non-ASCII, language-specific (Unicode) characters, the file or folder upload will fail.

Workaround: None available. The issue only affects files and folders with the above configuration. Other files and folders in the upload job continue to be processed.

—————————————————–

T18012 Folders with language specific characters not uploaded

Folders with language-specific characters are not uploaded, but the files within the folder are uploaded.

Workaround: None available.

—————————————————–

T18107 Incremental archive job may miss files on restart, jobid lost

A cluster down/up while an incremental archive job is running will not recover any files that have not already been added to the queue for upload. Those files will be missed and will also not be identified for upload on the next incremental cycle. The job id associated with the incremental job is also lost and is not available in jobs history.

Workaround: Do not cluster down/up while an incremental archive job is running.

—————————————————–

T18252 Empty folder uploaded as file on Google Cloud Storage

A folder on the file system that has no sub-folders or files will be uploaded to Google Cloud Storage (GCE) as a file instead of a folder. This does not impact the upload of the overall archive job. On recall, the empty folder is incorrectly downloaded as a file.

Workaround: None available.

—————————————————–

T18241 Cannot add 2 PowerScale clusters to Golden Copy with same archivedfolder configuration


If you add 2 clusters to Golden Copy, you cannot add the same archivedfolder for both as it results in a duplicated folder id.


Workaround: Select a unique path for the archivedfolder for each cluster.


—————————————————–

T18979 Incremental Archive issues for files with Zone.Identifier suffix

Under some conditions PowerScale will store files with a Zone.Identifier suffix. These files may be archived without metadata, or may error on archive and not be archived at all.


Workaround: These files can be excluded from archive by adding --excludes=*.Zone.Identifier to the archivedfolder definition (see the sketch below).
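
An illustrative sketch, assuming the exclude is supplied when the archived folder is defined; the add-folder command form and the other arguments are assumptions:

searchctl archivedfolders add <folder options> --excludes=*.Zone.Identifier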


—————————————————–

T19130 Cannot rerun for errors from incremental job

The rerun job does not identify errors from an incremental job and cannot be used to reupload those files.


Workaround: A full archive job can be run. Any files that are up to date are skipped. Any files that are out of date will be uploaded.


—————————————————–

T19218 Setting to enable delete for Incremental archive not working

The system setting export ARCHIVE_INCREMENTAL_IGNORE_DELETES=false to enable deletes during incremental archive is not working. Deleted files on PowerScale are not deleted from S3.

Workaround: None available.


—————————————————–

T19305 Queued jobs are not managed

Golden Copy executes up to 10 jobs in parallel. If more than 10 jobs are submitted, the remaining jobs are queued, waiting to fill a job slot once one becomes available. Queued jobs are not visible through any commands, such as the command for running jobs, and they do not survive a restart.


Workaround: On restart, any jobs that were queued will need to be restarted. Tracking is available for the 10 jobs that are running, and jobs history is available for jobs that are completed.
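
For example, the currently running jobs and the completed jobs can be checked with:

searchctl jobs running

searchctl jobs history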


—————————————————–

T19387 Incremental sync does not store folder ACL

On an incremental sync where a new folder is archived, the associated folder ACLs are not stored with the object properties in the S3 target. Note that this issue does not affect full archive.


Workaround: None available. Manual process required to track folder ACLs.


—————————————————–

T19388 Fast Incremental incorrectly stores UID and GID properties

When Fast Incremental is enabled, the UID and GID are crossed and stored against the wrong attribute: the UID is incorrectly stored against the group attribute instead of the owner attribute, and the GID is incorrectly stored against the owner attribute instead of the group attribute.


Workaround: When evaluating owner and group, use the owner attribute to determine the group and the group attribute to determine the owner.


—————————————————–

T19441 Move/Delete operation in a single incremental sync orphans deleted data in S3 target

Under certain circumstances, where the same incremental update contains both a move or rename of a folder and a delete of a sub-folder, the folder move is properly updated on the S3 target but the deleted sub-folder is not deleted from the S3 target.


Workaround: The orphaned folder can be manually removed from the S3 target using native S3 tools.


—————————————————–






Known Issues Reporting

General Reporting Issues

T16450 searchctl jobs view incorrect

Under some circumstances, jobs view will show incorrect stats. For example, jobs view may show more than 100% for files attempted and archived.

Workaround: Use folder stats to see counts related to the archive job.

—————————————————–

T17932 searchctl jobs view or folder stats may be missing reporting on a small percentage of files uploaded

The searchctl jobs view command or folder stats command may not properly report all files uploaded to the S3 target.

Workaround: Verify file count directly on S3 target.

—————————————————–

T18587 isilongateway restart may remove jobs history and running jobs information

An isilongateway container restart may result in information on running jobs and jobs history being lost.

Without the running job id, a job cannot be cancelled or rerun.


This issue does not affect archiving of files.  Any job in progress will continue to archive files.


Workaround: To monitor job progress, use the searchctl archivedfolders stats command which relies on folder id as opposed to job id.
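
For example (the way the folder id is passed is an assumption; only the stats command itself comes from these notes):

searchctl archivedfolders stats <folder id>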


—————————————————–

T18876 Jobs history deleted after cluster up if s3 stats job run

A subsequent cluster down/up after the searchctl archivedfolders s3stats command was run deletes all entries in the job history.

The impact is that, without the job id from the history, the job cannot be re-run or cancelled, and job statistics cannot be viewed.


Workaround: Folder stats are available for summary view of archive statistics for a folder.

—————————————————–

T19136 Jobs View / Export Report do not correctly calculate job run time if job is interrupted


For an archive job that is interrupted - for example by a cluster down/up while the archive job is running - the jobs view and export report show a run time that is shorter than the true duration of the job.


Workaround: None available


—————————————————–

T19137 Export report does not report failed and skipped files

For an archive job where there are failed and skipped files, the export report shows 100% success.


Workaround: The jobs view command for the archive job does correctly report on the errored and skipped files.


—————————————————–

T19466 Statistics may show more than 100% archived/attempted after a cluster down/up

If there is an archive job in progress when a cluster down/up is done, the job continues on cluster up but the jobs view and folder stats may show more than 100% for Archived and Attempted files.


Workaround: The archive job can be run again to ensure all files are uploaded. Any files that are already present on the object storage will show as a skipped statistic.


—————————————————– 



Recall Reporting Issues

T16960 Rerun recall job overwrites export report

Rerun of a recall job followed by exporting a report will overwrite any previous export report for that folder.


Workaround: An export report from a previous recall can be recreated by running the searchctl archivedfolders export command for the appropriate job id.
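
For example (the job id flag form is an assumption based on similar commands in these notes):

searchctl archivedfolders export --jobid <job id>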


—————————————————–

T17746 Recall reporting issues for metadata only recall

Recalling metadata only for a previous recall job using the command: searchctl archivedfolders metadata --jobid has the following issues:

- The resulting job cannot be monitored using the jobs view --follow command. Running the command results in an error if run against a metadata only recall job. 

- The jobs history view does not list the metadata only recall jobs.

- The export report has doubled counts and errors are not reported accurately


Workaround:

Run the jobs view command multiple times to see progress.

Keep a manual record of the metadata only recall job id.


—————————————————–

T17893 searchctl archivedfolders history incorrectly shows recall job as job type FULL

The output from the searchctl archivedfolders history command will incorrectly show a recall job as job type FULL.

Workaround: searchctl jobs history correctly shows the job type as GoldenCopy Recall.

—————————————————–

T18535 Recall reporting issue for accepted file count / interrupted recall

There is no stat for the recall accepted file count. Also, if a recall is interrupted during the walk that builds the list of files to recall, the job reports as success even though not all files were recalled.


Workaround: None available


—————————————————–

T18875 Recall stats may incorrectly show errored count

Under some circumstances a recall job may show stats for Errors when in fact all files were successfully recalled.


Workaround: Use the searchctl archivedfolders errors command to check for errors. A manual count of files on the PowerScale may also be used to verify the recall.


—————————————————–

T19357 Export Report not generated for a recall job

Running an export report for a recall job shows a job status of SUCCESS but the export summary report is not generated.


Workaround: Use the jobs view command for details of the recall job.


—————————————————–

T19415 Recall stats incorrectly show errors for folder object which store ACLs

Recall stats and the errors command incorrectly show errors related to metadata recall for the folder objects created to store folder ACLs.


Workaround: None required. These are not errors associated with the actual folder ACLs. They can be identified in the errors command output, as the Metadata apply failed error will be listed against the folder name, where the folder has been prefixed with the PowerScale cluster name.



Known Issues Recall

T16129 Recall from Cohesity may fail where folder or file contain special characters

Recall of files or folders from Cohesity which contain special characters may fail. The job starts successfully but no files are recalled.

Workaround: None available


—————————————————–

T19436, T19480 File/Folder mode bit meta data not recalled correctly

On recall of files and folders from object storage to PowerScale, the file and folder mode bits are not restored. Files/folders have no read/write/execute permissions.


Workaround: Metadata in the S3 target can be used to confirm the original mode bit settings.

Folders: Configure Golden Copy to recall ACLs and folder mode bits will be restored.

Files: Manual steps on operating system can be used to modify owner, group or mode bits as required.


—————————————————–

T18338 Recall Rate Limit

Golden Copy does not have the ability to rate limit a recall.


Workaround: None available within Golden Copy.


—————————————————–

T18428 Recall for target with S3-prefix results in extra S3 API call per folder

For S3 targets that require a prefix for storing folders, an extra S3 API call is made per folder on recall. This API call results in an error but does not affect the overall recall of files and folders.

Workaround: None required


—————————————————–

T18450 Folder object in S3 that contains folder ACL information incorrectly recalled as a directory when ARCHIVE_S3_PREFIX set


If Golden Copy is configured to apply ARCHIVE_S3_PREFIX on folder objects, on recall the folder object is incorrectly recalled as a directory to the PowerScale filesystem.


Workaround: None required


—————————————————–

T18600 Recall Job where recall path is not mounted does not indicate error


No error is displayed if the recall path is not mounted. In this case, files may be downloaded to the Golden Copy filesystem, which is not the requested end location and could also result in disk space issues on the Golden Copy VM.


Workaround: Ensure that the mount for the recall path exists prior to starting the recall job. See information here on the mount requirements.

—————————————————–

T19012 Recall of files from Azure fails for files uploaded with Golden Copy earlier than 1.1.4-21050

Files that were uploaded to Azure with a Golden Copy build prior to 1.1.4-21050 cannot be recalled back to PowerScale using Golden Copy.

Workaround: Native S3 tools can be used to recall files from Azure.

—————————————————–

T19435 Recall of empty folder does not have owner/group meta data applied

Empty folders that are recalled do not have the owner and group metadata applied. This may result in the PowerScale root user mapping settings on the NFS export used to mount the recall directory being applied.

Workaround: Folder objects in the S3 target do have the original ACLs with owner and group. The ACLs can be read from S3 and then applied manually.

—————————————————–

T19438 Files may not be recalled

Under some circumstances files may not be recalled without any error indicated in Golden Copy.

Workaround: Files can be manually retrieved using S3 native tools.

—————————————————–

T19649 Meta-data not recalled where AD user/group cannot be resolved

For the case where files were uploaded and the owner or group was returned by the PowerScale API as Unknown User or Unknown Group because that owner or group no longer exists, the Unknown User/Group cannot be resolved on recall, and this blocks any other metadata from being applied.

Workaround: Metadata in the S3 target can be used to confirm the original metadata settings, and manual steps on the operating system can be used to apply them.


Known Issues General & Administration

T14025 Changing PowerScale user requires a cluster down/up

If the user that was used when adding PowerScale to Golden Copy is changed, sessions are still established with PowerScale using the previous user.

Workaround: A cluster down/up is required to refresh the user being used to connect to PowerScale. Contact support.superna.net for assistance.
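
The down/up sequence referenced above:

ecactl cluster down

ecactl cluster up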

—————————————————–

T16640 searchctl schedules uses UTC time

When configuring a schedule using the searchctl schedules command, the time must be entered as UTC time.

Workaround: None required

—————————————————–

T16855 Archived folders for a PowerScale cluster added with the --goldencopy-recall-only option do not appear in the archivedfolders list command

The searchctl archivedfolders list command does not list folders for PowerScale clusters that were added to Golden Copy using the --goldencopy-recall-only option.

Workaround: Keep a record of the folder id after adding the folder; it can then be referenced in other commands such as searchctl archivedfolders remove.

—————————————————–

T17987 Alarm for cancelled job shows job failed

The description for an alarm for a cancelled job says "Job failed to run" instead of indicating that the job was cancelled.

Workaround: Check the jobs history for the details of the job.

—————————————————–

T18092 Files have wrong permission after initial cluster up for multi-node Golden Copy deployment

After the first cluster up, some configuration files have the wrong permissions, which causes archiving to only be done from node 1.

Workaround: The cluster must be added to Golden Copy in order for all files to be in place, after which bring the cluster down and fix permissions as per below:

ecactl cluster down
ecactl cluster exec mkdir -p /opt/superna/eca/data/common
sudo chown ecaadmin:ecaadmin /opt/superna/eca/data/common/jobs_history.json
ecactl cluster exec "sudo chown ecaadmin:ecaadmin /opt/superna/eca/conf/common/keystore.jceks"

—————————————————–



Known Limitations

T15251 Upload from snapshot requires snapshot to be in Golden Copy Inventory

Golden Copy upload from a PowerScale snapshot requires the snapshot to be in the Golden Copy inventory. The inventory task is run once a day. If you attempt to start an archive without the snapshot in inventory, you will get the error message "Incorrect snapshot provided".

Workaround: Wait for the scheduled inventory to run, or run the inventory manually using the command: searchctl PowerScales runinventory

—————————————————–

T15752 Cancel Job does not clear cached files for processing

Any files that were already cached for archive will still be archived once a job has been cancelled.

Workaround: None required. Once cached files are processed there is no further processing.

—————————————————–

T16429 Golden Copy Archiving rate fluctuates

Golden Copy archiving rates may fluctuate over the course of an upload or recall job.

Workaround: None required.

—————————————————–

T16628 Upgrade to 1.1.3 may result in second copy of files uploaded for Azure

In the Golden Copy 1.1.3 release, upload to Azure replaced any special characters in the cluster, file or folder names with "_". In the 1.1.4 release the special characters are handled, so a subsequent upload in 1.1.4 will re-upload any files/folders because the names in S3 are not identical to what was uploaded in 1.1.3. If the cluster name contained a special character - for example Isilon-1 - then all files will be re-uploaded.

Workaround: None

—————————————————–

HTML report cannot be exported twice for the same job

The HTML report cannot be run again after having been previously executed.

Workaround: None required. Use the previously exported report.

—————————————————–

T16250 AWS accelerated mode is not supported

Golden Copy does not support adding AWS with accelerated mode as an S3 target.

—————————————————–

T16646 Golden Copy Job status

When viewing the status of a Golden Copy job, it is possible that a job with a status of SUCCESS contains errors in processing files. The job status indicates whether the job was able to run successfully. The searchctl jobs view, searchctl stats view, or HTML report should be used to determine the details related to the execution of the job, including errors and successes.

—————————————————–

T17173 Debug logging disk space management

If debug logging is enabled, the additional disk space consumed must be managed manually.

—————————————————–

T18640  searchctl archivedfolders errors supported output limit

The searchctl archivedfolders errors command supports an output limit of 1000 entries.


Workaround: For a longer list, use the --tail --count 1000 or --head --count 1000 options to limit the display.
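
For example:

searchctl archivedfolders errors --tail --count 1000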


—————————————————–

Fast Incremental Known Limitations

  • Mode bit metadata is not available in fast incremental mode
  • Owner and group are stored in numeric UID and GID format in the object header
  • The PowerScale API is only available for OneFS 8.2.1 and higher
  • Owner and group metadata are not recalled where objects were uploaded with fast incremental; due to a bug in the PowerScale API they cannot be recalled. Recall should be done without metadata. There may still be metadata errors on a recall without metadata, which can be ignored.

—————————————————–

Move/Rename identification and management in object storage known limitations

  • The PowerScale API is only available for OneFS 8.2.1 and higher for updating the S3 target with the new location of folders and files and removing folders and files from the old location
  • For OneFS versions lower than 8.2.1, moved/renamed objects cannot be identified due to the PowerScale API issue and these will be orphaned in the S3 target.
© Superna LLC