[Amazon FSx for NetApp ONTAP] Checking the behavior when the SnapMirror destination volume is too small
Curious what happens when the SnapMirror destination volume runs out of free space
Hello, this is のんピ (@non____97).
Have you ever wondered what happens when a SnapMirror destination volume runs out of free space? I have.
When using SnapMirror to migrate to Amazon FSx for NetApp ONTAP (FSxN below), you will probably want to keep SSD usage down to reduce costs.
In such a case, I wondered whether an approach like the following could be used (a rough command-level sketch follows the list).
- Provision the SSD tier on the small side
- Transfer as much as possible with SnapMirror
- If the transfer is interrupted partway through, run Storage Efficiency to free up space
- Repeat steps 2 and 3
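At the command level, the loop in steps 2 through 4 would look roughly like the following. This is only a conceptual sketch; the SVM and volume names match the verification environment used later in this post, and, as the rest of the post shows, the loop cannot actually be carried out.

# 2. Transfer as much as the destination volume can hold
snapmirror initialize -destination-path svm2:vol1_dst3
# 3. If the transfer stops with "No space left on device", try to reclaim space
#    by deduplicating/compressing the data already written
volume efficiency on -vserver svm2 -volume vol1_dst3
volume efficiency start -vserver svm2 -volume vol1_dst3 -scan-old-data true
# 4. Re-run the transfer and repeat 2 and 3 until everything has been transferred
snapmirror initialize -destination-path svm2:vol1_dst3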
You might think "why not just let the data flow to capacity pool storage?", but Storage Efficiency features such as deduplication and compression can only be applied on the SSD tier.
I verified whether the approach described above is actually feasible.
Summary up front
- The approach described at the beginning of this post does not work
- If the SnapMirror destination volume is too small, the transfer is aborted
- If a SnapMirror initialize ends partway through, Storage Efficiency cannot be enabled on the destination volume
- You have to resolve the problem (for example by expanding the volume) and re-run the initialize before you can operate on it
- In other words, Storage Efficiency cannot be run unless the destination volume has enough free space
- If an incremental SnapMirror transfer ends partway through, Storage Efficiency cannot be enabled on the destination volume
- You have to resolve the problem (for example by expanding the volume) and delete the checkpoint before you can operate on it
- Make the SnapMirror destination volume at least as large as the source volume
- Enabling volume autogrow is recommended (a command sketch follows this list)
- If you want deduplication, compression, and other Storage Efficiency features to take full effect with a small amount of SSD, the approach is to use cascade SnapMirror
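For reference, enabling autogrow on an existing volume is a one-liner. A minimal sketch using the destination volume from this post; the grow_shrink mode and 100TB maximum simply mirror the settings seen on the other DP volumes in this environment.

volume autosize -vserver svm2 -volume vol1_dst3 -mode grow_shrink -maximum-size 100TB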
Let's try it
Checking the verification environment
The verification environment is the one prepared in the following article.
Let's check the source volume, Storage Efficiency, the aggregate, and Snapshots once more.
::> set diag Warning: These diagnostic commands are for use by NetApp personnel only. Do you want to continue? {y|n}: y ::*> volume efficiency show -volume vol1* -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression vserver volume state policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume ------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- ----------------------------------------- svm vol1 Enabled auto false true efficient false true true true false svm2 vol1_dst Enabled - false true efficient false true true true false svm2 vol1_dst2 Enabled - false true efficient false true true true false svm3 vol1_dst2_dst Enabled auto false true efficient true true true true false svm3 vol1_dst_dst Enabled auto false true efficient true true true true false 5 entries were displayed. ::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 6.85GB 16GB 15.20GB 8.35GB 54% 808.8MB 9% 808.8MB 9.14GB 60% - 9.14GB - - svm2 vol1_dst 7.54GB 1.10GB 7.54GB 7.16GB 6.06GB 84% 4.98GB 45% 3.00GB 11.01GB 154% - 8.05GB 0B 0% svm2 vol1_dst2 9.84GB 1.45GB 9.84GB 9.35GB 7.90GB 84% 4.53GB 36% 4.10GB 12.40GB 133% - 9.06GB 0B 0% svm3 vol1_dst2_dst 5.40GB 876.9MB 5.40GB 5.40GB 4.55GB 84% 4.59GB 50% 2.05GB 9.13GB 169% - 9.04GB 0B 0% svm3 vol1_dst_dst 4.90GB 896.3MB 4.90GB 4.90GB 4.03GB 82% 5.11GB 56% 2.21GB 9.14GB 186% - 9.07GB 0B 0% 5 entries were displayed. 
::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 8.38GB 1% Footprint in Performance Tier 245.6MB 3% Footprint in FSxFabricpoolObjectStore 8.24GB 97% Volume Guarantee 0B 0% Flexible Volume Metadata 92.66MB 0% Deduplication Metadata 12.04MB 0% Deduplication 12.04MB 0% Delayed Frees 111.0MB 0% File Operation Metadata 4KB 0% Total Footprint 8.58GB 1% Footprint Data Reduction in capacity tier 3.87GB - Effective Total Footprint 4.71GB 1% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 196.4GB Total Physical Used: 19.87GB Total Storage Efficiency Ratio: 9.88:1 Total Data Reduction Logical Used Without Snapshots: 43.87GB Total Data Reduction Physical Used Without Snapshots: 14.80GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.96:1 Total Data Reduction Logical Used without snapshots and flexclones: 43.87GB Total Data Reduction Physical Used without snapshots and flexclones: 14.80GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.96:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 141.9GB Total Physical Used in FabricPool Performance Tier: 16.29GB Total FabricPool Performance Tier Storage Efficiency Ratio: 8.71:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 35.57GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 11.23GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.17:1 Logical Space Used for All Volumes: 43.87GB Physical Space Used for All Volumes: 23.87GB Space Saved by Volume Deduplication: 20.00GB Space Saved by Volume Deduplication and pattern detection: 20.00GB Volume Deduplication Savings ratio: 1.84:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.84:1 Logical Space Used by the Aggregate: 28.82GB Physical Space Used by the Aggregate: 19.87GB Space Saved by Aggregate Data Reduction: 8.94GB Aggregate Data Reduction SE Ratio: 1.45:1 Logical Size Used by Snapshot Copies: 152.6GB Physical Size Used by Snapshot Copies: 7.36GB Snapshot Volume Data Reduction Ratio: 20.73:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 20.73:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 3 Number of SIS Change Log Disabled Volumes: 0 ::*> snapshot show -volume vol1 ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol1 test.2023-12-22_0533 160KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 24.45MB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 312KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 292KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 144KB 0% 0% 6 entries were displayed.
Creating the destination volume
Create the destination volume.
The source volume uses 8.35GB. The destination volume will be 2GB with autogrow disabled.
volume create -vserver svm2 -volume vol1_dst3 -aggregate aggr1 -state online -type DP -size 2GB -tiering-policy none ::*> volume create -vserver svm2 -volume vol1_dst3 -aggregate aggr1 -state online -type DP -size 2GB -tiering-policy none -autosize-mode off [Job 149] Job succeeded: Successful ::*> volume show -volume vol1* -fields type, autosize-mode, max-autosize vserver volume max-autosize autosize-mode type ------- ------ ------------ ------------- ---- svm vol1 19.20GB off RW svm2 vol1_dst 100TB grow_shrink DP svm2 vol1_dst2 100TB grow_shrink DP svm2 vol1_dst3 100TB off DP svm3 vol1_dst2_dst 100TB grow_shrink RW svm3 vol1_dst_dst 100TB grow_shrink RW 6 entries were displayed.
SnapMirror Initialize
Run the SnapMirror initialize.
As preparation, create the SnapMirror relationship.
::*> snapmirror create -source-path svm:vol1 -destination-path svm2:vol1_dst3 -policy MirrorAllSnapshots Operation succeeded: snapmirror create for the relationship with destination "svm2:vol1_dst3". ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm2:vol1_dst2 Snapmirrored Idle - true - svm2:vol1_dst3 Uninitialized Idle - true - svm3:vol1_dst_dst Broken-off Idle - true - svm2:vol1_dst2 XDP svm3:vol1_dst2_dst Broken-off Idle - true - 5 entries were displayed. ::*> volume show -volume vol1, vol1_dst3 -fields type, autosize-mode, max-autosize vserver volume max-autosize autosize-mode type ------- ------ ------------ ------------- ---- svm vol1 19.20GB off RW svm2 vol1_dst3 100TB off DP 2 entries were displayed.
Now run the initialize.
::*> snapmirror initialize -destination-path svm2:vol1_dst3 Operation is queued: snapmirror initialize of destination "svm2:vol1_dst3". ::*> snapmirror show -destination-path svm2:vol1_dst3 Source Path: svm:vol1 Source Cluster: - Source Vserver: svm Source Volume: vol1 Destination Path: svm2:vol1_dst3 Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol1_dst3 Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): unlimited Mirror State: Uninitialized Relationship Status: Transferring File Restore File Count: - File Restore File List: - Transfer Snapshot: test.2023-12-22_0533 Snapshot Progress: 351.6MB Total Progress: 351.6MB Network Compression Ratio: 1:1 Snapshot Checkpoint: 594.6KB Newest Snapshot: - Newest Snapshot Timestamp: - Exported Snapshot: - Exported Snapshot Timestamp: - Healthy: true Relationship ID: e94d0e31-a526-11ee-981e-bdd56ead09c8 Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8 Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8 Current Operation ID: 1b60dcad-a527-11ee-981e-bdd56ead09c8 Transfer Type: initialize Transfer Error: - Last Transfer Type: - Last Transfer Error: - Last Transfer Error Codes: - Last Transfer Size: - Last Transfer Network Compression Ratio: - Last Transfer Duration: - Last Transfer From: - Last Transfer End Timestamp: - Unhealthy Reason: - Progress Last Updated: 12/28 02:16:31 Relationship Capability: 8.2 and above Lag Time: - Current Transfer Priority: normal SMTape Operation: - Destination Volume Node Name: FsxId0ab6f9b00824a187c-01 Identity Preserve Vserver DR: - Number of Successful Updates: 0 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 0 Total Transfer Time in Seconds: 0 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: - ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm2:vol1_dst2 Snapmirrored Idle - true - svm2:vol1_dst3 Uninitialized Idle - false - svm3:vol1_dst_dst Broken-off Idle - true - svm2:vol1_dst2 XDP svm3:vol1_dst2_dst Broken-off Idle - true - 5 entries were displayed. 
::*> snapmirror show -destination-path svm2:vol1_dst3 Source Path: svm:vol1 Source Cluster: - Source Vserver: svm Source Volume: vol1 Destination Path: svm2:vol1_dst3 Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol1_dst3 Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Uninitialized Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: 594.6KB Newest Snapshot: - Newest Snapshot Timestamp: - Exported Snapshot: - Exported Snapshot Timestamp: - Healthy: false Relationship ID: e94d0e31-a526-11ee-981e-bdd56ead09c8 Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8 Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: initialize Last Transfer Error: Transfer failed. (Volume access error (No space left on device)) Last Transfer Error Codes: 6620144, 5898547, 6684700 Last Transfer Size: - Last Transfer Network Compression Ratio: - Last Transfer Duration: - Last Transfer From: svm:vol1 Last Transfer End Timestamp: 12/28 02:24:11 Unhealthy Reason: Transfer failed. Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: - Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0ab6f9b00824a187c-01 Identity Preserve Vserver DR: - Number of Successful Updates: 0 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 0 Total Transfer Time in Seconds: 0 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
After waiting a while, the transfer had failed due to insufficient free space on the volume.
The SnapMirror transfer failure was also recorded in EMS.
::*> event log show
Time                Node             Severity      Event
------------------- ---------------- ------------- ---------------------------
12/28/2023 02:22:12 FsxId0ab6f9b00824a187c-01 ERROR monitor.volume.nearlyFull: Volume vol1_dst3@vserver:5af907bb-a065-11ee-981e-bdd56ead09c8 is nearly full(using or reserving 97% of space and 0% of inodes).
12/28/2023 02:20:09 FsxId0ab6f9b00824a187c-01 ERROR monitor.volume.nearlyFull: Volume vol1_dst3@vserver:5af907bb-a065-11ee-981e-bdd56ead09c8 is nearly full(using or reserving 97% of space and 0% of inodes).
12/28/2023 02:17:52 FsxId0ab6f9b00824a187c-01 ERROR monitor.volume.nearlyFull: Volume vol1_dst3@vserver:5af907bb-a065-11ee-981e-bdd56ead09c8 is nearly full(using or reserving 97% of space and 0% of inodes).
12/28/2023 02:12:44 FsxId0ab6f9b00824a187c-01 NOTICE arw.volume.state: Anti-ransomware state was changed to "disabled" on volume "vol1_dst3" (UUID: "95d186d3-a526-11ee-981e-bdd56ead09c8") in Vserver "svm2" (UUID: "5af907bb-a065-11ee-981e-bdd56ead09c8").
4 entries were displayed.
Check Storage Efficiency, the volume, the aggregate, and Snapshots.
::*> volume efficiency show -volume vol1, vol1_dst3 -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression vserver volume state policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume ------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- ----------------------------------------- svm vol1 Enabled auto false true efficient false true true true false svm2 vol1_dst3 Disabled - false true efficient false true true true false 2 entries were displayed. ::*> volume show -volume vol1, vol1_dst3 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 6.85GB 16GB 15.20GB 8.35GB 54% 808.8MB 9% 808.8MB 9.14GB 60% - 9.14GB - - svm2 vol1_dst3 2GB 40.79MB 2GB 2GB 1.96GB 98% 0B 0% 0B 1.96GB 98% - 1.96GB 0B 0% 2 entries were displayed. 
::*> volume show-footprint -volume vol1_dst3 Vserver : svm2 Volume : vol1_dst3 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 1.96GB 0% Footprint in Performance Tier 1.98GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 20.98MB 0% Delayed Frees 16.74MB 0% File Operation Metadata 4KB 0% Total Footprint 2.00GB 0% Effective Total Footprint 2.00GB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 207.5GB Total Physical Used: 21.83GB Total Storage Efficiency Ratio: 9.50:1 Total Data Reduction Logical Used Without Snapshots: 45.82GB Total Data Reduction Physical Used Without Snapshots: 16.61GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.76:1 Total Data Reduction Logical Used without snapshots and flexclones: 45.82GB Total Data Reduction Physical Used without snapshots and flexclones: 16.61GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.76:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 144.1GB Total Physical Used in FabricPool Performance Tier: 18.27GB Total FabricPool Performance Tier Storage Efficiency Ratio: 7.89:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 37.53GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 13.06GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.87:1 Logical Space Used for All Volumes: 45.82GB Physical Space Used for All Volumes: 25.82GB Space Saved by Volume Deduplication: 20.00GB Space Saved by Volume Deduplication and pattern detection: 20.00GB Volume Deduplication Savings ratio: 1.77:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.77:1 Logical Space Used by the Aggregate: 30.78GB Physical Space Used by the Aggregate: 21.83GB Space Saved by Aggregate Data Reduction: 8.94GB Aggregate Data Reduction SE Ratio: 1.41:1 Logical Size Used by Snapshot Copies: 161.7GB Physical Size Used by Snapshot Copies: 7.36GB Snapshot Volume Data Reduction Ratio: 21.97:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 21.97:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 3 Number of SIS Change Log Disabled Volumes: 1 ::*> snapshot show -volume vol1, vol1_dst3 ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol1 test.2023-12-22_0533 160KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 24.45MB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 312KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 292KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 136KB 0% 0% 7 entries were displayed.
1.96GB has been written. Also, no Snapshot has been created on the destination volume.
Mounting the destination volume from an NFS client
Let's mount the destination volume from an NFS client and see what it looks like.
Mount the destination volume on the SVM's junction path.
::*> volume mount -vserver svm2 -volume vol1_dst3 -junction-path /vol1_dst3

Queued private job: 66

Error: command failed: Volume vol1_dst3 in Vserver "svm2" is not mountable until a "snapmirror initialize" has been completed.
So it seems a volume whose SnapMirror initialize has not completed cannot be mounted.
Running Storage Efficiency on the destination volume
Let's run Storage Efficiency on the destination volume and check whether it frees up space.
First, enable Storage Efficiency.
::*> volume efficiency on -vserver svm2 -volume vol1_dst
vol1_dst     vol1_dst2    vol1_dst3
::*> volume efficiency on -vserver svm2 -volume vol1_dst3

Error: command failed: Failed to enable efficiency on volume "vol1_dst3" of Vserver "svm2": A SnapMirror transfer is running or paused. Use the "snapmirror show" command to view the status of the transfer. Retry this command when the transfer is complete, or run "snapmirror abort -hard true" to abort the SnapMirror transfer and clear any checkpoint data.
It seems Storage Efficiency cannot be enabled while a SnapMirror transfer is running or paused.
Following the error message, let's try snapmirror abort.
::*> snapmirror abort -destination-path svm2:vol1_dst3

Error: command failed: No transfer to abort.

::*> snapmirror abort -destination-path svm2:vol1_dst3 -hard true

Error: command failed: Deleting checkpoint for uninitialized relationship is not supported.
Both attempts failed. It seems the SnapMirror checkpoint cannot be deleted either, because the SnapMirror initialize has not completed.
So the hypothesis from the beginning of this post turns out not to work.
If you want deduplication, compression, and other Storage Efficiency features to take full effect with a small amount of SSD, using cascade SnapMirror as described in the following article is the way to go.
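As a reminder of what that topology looks like, here is a minimal sketch of how the two legs of a cascade are created, reusing relationship paths that already exist in this verification environment. The -policy values are only illustrative; see the linked article for the full procedure and for which leg handles tiering and which handles Storage Efficiency.

# First leg: source volume to the intermediate destination
snapmirror create -source-path svm:vol1 -destination-path svm2:vol1_dst -policy MirrorAllSnapshots
snapmirror initialize -destination-path svm2:vol1_dst
# Second leg: intermediate destination cascaded to the final destination
snapmirror create -source-path svm2:vol1_dst -destination-path svm3:vol1_dst_dst -policy MirrorAllSnapshots
snapmirror initialize -destination-path svm3:vol1_dst_dst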
When the volume runs out of free space during a SnapMirror update
Expanding the volume
Things did not work during the SnapMirror initialize, but what about when the volume runs out of free space during a SnapMirror update?
First, expand the volume.
::*> volume modify -vserver svm2 -volume vol1_dst3 -size 9GB Volume modify successful on volume vol1_dst3 of Vserver svm2. ::*> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - svm2:vol1_dst2 Snapmirrored Idle - true - svm2:vol1_dst3 Uninitialized Idle - false - svm3:vol1_dst_dst Broken-off Idle - true - svm2:vol1_dst2 XDP svm3:vol1_dst2_dst Broken-off Idle - true - 5 entries were displayed.
SnapMirror Initialize
Run the SnapMirror initialize again.
::*> snapmirror initialize -destination-path svm2:vol1_dst3Operation is queued: snapmirror initialize of destination "svm2:vol1_dst3". ::*> snapmirror show -destination-path svm2:vol1_dst3 -fields state, status, total-progress, progress-last-updated source-path destination-path state status total-progress progress-last-updated ----------- ---------------- ------------ ------ -------------- --------------------- svm:vol1 svm2:vol1_dst3 Snapmirrored Idle - - ::*> snapmirror show -destination-path svm2:vol1_dst3 Source Path: svm:vol1 Source Cluster: - Source Vserver: svm Source Volume: vol1 Destination Path: svm2:vol1_dst3 Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol1_dst3 Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 Newest Snapshot Timestamp: 12/28 02:16:28 Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 Exported Snapshot Timestamp: 12/28 02:16:28 Healthy: true Relationship ID: e94d0e31-a526-11ee-981e-bdd56ead09c8 Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8 Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: - Last Transfer Error Codes: - Last Transfer Size: 3.07GB Last Transfer Network Compression Ratio: 1:1 Last Transfer Duration: 0:1:1 Last Transfer From: svm:vol1 Last Transfer End Timestamp: 12/28 03:04:32 Unhealthy Reason: - Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 0:49:31 Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0ab6f9b00824a187c-01 Identity Preserve Vserver DR: - Number of Successful Updates: 1 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 9060359240 Total Transfer Time in Seconds: 112 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
The transfer completed.
Check Storage Efficiency, the volume, the aggregate, and Snapshots.
::*> volume efficiency show -volume vol1, vol1_dst3 -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression vserver volume state policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume ------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- ----------------------------------------- svm vol1 Enabled auto false true efficient false true true true false svm2 vol1_dst3 Disabled - false true efficient false true true true false 2 entries were displayed. ::*> volume show -volume vol1, vol1_dst3 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 6.85GB 16GB 15.20GB 8.35GB 54% 808.8MB 9% 808.8MB 9.14GB 60% - 9.14GB - - svm2 vol1_dst3 9GB 693.7MB 9GB 9GB 8.32GB 92% 774.5MB 8% 8.24GB 9.05GB 101% - 9.04GB 0B 0% 2 entries were displayed. 
::*> volume show-footprint -volume vol1_dst3 Vserver : svm2 Volume : vol1_dst3 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 8.32GB 1% Footprint in Performance Tier 8.41GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 56.82MB 0% Delayed Frees 91.26MB 0% File Operation Metadata 4KB 0% Total Footprint 8.47GB 1% Effective Total Footprint 8.47GB 1% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 270.9GB Total Physical Used: 28.19GB Total Storage Efficiency Ratio: 9.61:1 Total Data Reduction Logical Used Without Snapshots: 52.93GB Total Data Reduction Physical Used Without Snapshots: 22.60GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.34:1 Total Data Reduction Logical Used without snapshots and flexclones: 52.93GB Total Data Reduction Physical Used without snapshots and flexclones: 22.60GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.34:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 207.5GB Total Physical Used in FabricPool Performance Tier: 24.65GB Total FabricPool Performance Tier Storage Efficiency Ratio: 8.42:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 44.65GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 19.08GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.34:1 Logical Space Used for All Volumes: 52.93GB Physical Space Used for All Volumes: 32.18GB Space Saved by Volume Deduplication: 20.76GB Space Saved by Volume Deduplication and pattern detection: 20.76GB Volume Deduplication Savings ratio: 1.65:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.65:1 Logical Space Used by the Aggregate: 37.13GB Physical Space Used by the Aggregate: 28.19GB Space Saved by Aggregate Data Reduction: 8.94GB Aggregate Data Reduction SE Ratio: 1.32:1 Logical Size Used by Snapshot Copies: 218.0GB Physical Size Used by Snapshot Copies: 7.36GB Snapshot Volume Data Reduction Ratio: 29.60:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 29.60:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 3 Number of SIS Change Log Disabled Volumes: 1 ::*> snapshot show -volume vol1, vol1_dst3 ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol1 test.2023-12-22_0533 160KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 24.45MB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 312KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 292KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 136KB 0% 0% svm2 vol1_dst3 test.2023-12-22_0533 1.29MB 0% 0% 
snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 384KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 388KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 284KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 232KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 220KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 144KB 0% 0% 14 entries were displayed.
8.32GB has been written.
Adding test files
Add a test file to the source volume by copying a file that already exists on the volume.
$ sudo mount -t nfs svm-0058ae83d258ab2e3.fs-0ab6f9b00824a187c.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1 $ ls -l /mnt/fsxn/vol1 total 9474408 -rw-r--r--. 1 root root 1073741824 Dec 22 01:46 1_padding_file -rw-r--r--. 1 root root 1073741824 Dec 22 07:53 ABCDE_padding_file -rw-r--r--. 1 root root 1073741824 Dec 22 05:28 a_padding_file -rw-r--r--. 1 root root 1073741824 Dec 22 06:55 abcde_padding_file -rw-r--r--. 1 root root 1073741824 Dec 22 01:47 urandom_block_file -rw-r--r--. 1 root root 1073741824 Dec 22 05:02 urandom_block_file2 -rw-r--r--. 1 root root 1073741824 Dec 22 05:02 urandom_block_file2_copy -rw-r--r--. 1 root root 1073741824 Dec 22 01:47 urandom_block_file_copy -rw-r--r--. 1 root root 1073741824 Dec 22 06:41 urandom_block_file_copy2 $ sudo cp /mnt/fsxn/vol1/urandom_block_file /mnt/fsxn/vol1/urandom_block_file_copy3
Check the volume and the aggregate.
::*> volume show -volume vol1, vol1_dst3 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 5.83GB 16GB 15.20GB 9.37GB 61% 808.8MB 8% 808.8MB 10.16GB 67% - 10.16GB - - svm2 vol1_dst3 9GB 693.7MB 9GB 9GB 8.32GB 92% 774.5MB 8% 8.24GB 9.05GB 101% - 9.04GB 0B 0% 2 entries were displayed. ::*> volume show-footprint -volume vol1_dst3 Vserver : svm2 Volume : vol1_dst3 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 8.32GB 1% Footprint in Performance Tier 8.41GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 56.82MB 0% Delayed Frees 91.40MB 0% File Operation Metadata 4KB 0% Total Footprint 8.47GB 1% Effective Total Footprint 8.47GB 1% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 271.9GB Total Physical Used: 29.34GB Total Storage Efficiency Ratio: 9.27:1 Total Data Reduction Logical Used Without Snapshots: 53.94GB Total Data Reduction Physical Used Without Snapshots: 23.70GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.28:1 Total Data Reduction Logical Used without snapshots and flexclones: 53.94GB Total Data Reduction Physical Used without snapshots and flexclones: 23.70GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.28:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 207.5GB Total Physical Used in FabricPool Performance Tier: 24.81GB Total FabricPool Performance Tier Storage Efficiency Ratio: 8.36:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 44.67GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 19.19GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.33:1 Logical Space Used for All Volumes: 53.94GB Physical Space Used for All Volumes: 33.18GB Space Saved by Volume Deduplication: 20.76GB Space Saved by Volume Deduplication and pattern detection: 20.76GB Volume Deduplication Savings ratio: 1.63:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.63:1 Logical Space Used by the Aggregate: 38.29GB Physical Space Used by the Aggregate: 29.34GB Space Saved by Aggregate Data Reduction: 8.94GB Aggregate Data Reduction SE Ratio: 1.30:1 Logical Size Used by Snapshot Copies: 218.0GB Physical Size Used by Snapshot Copies: 7.36GB Snapshot Volume Data Reduction Ratio: 29.60:1 Logical 
Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 29.60:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 3 Number of SIS Change Log Disabled Volumes: 1
SnapMirror Update
Run an incremental SnapMirror transfer.
::*> snapmirror update -destination-path svm2:vol1_dst3 Operation is queued: snapmirror update of destination "svm2:vol1_dst3". ::*> snapmirror show -destination-path svm2:vol1_dst3 -fields state, status, total-progress, progress-last-updated source-path destination-path state status total-progress progress-last-updated ----------- ---------------- ------------ ------------ -------------- --------------------- svm:vol1 svm2:vol1_dst3 Snapmirrored Transferring 18.73MB 12/28 04:17:21 ::*> snapmirror show -destination-path svm2:vol1_dst3 -fields state, status, total-progress, progress-last-updated source-path destination-path state status total-progress progress-last-updated ----------- ---------------- ------------ ------------ -------------- --------------------- svm:vol1 svm2:vol1_dst3 Snapmirrored Transferring 343.0MB 12/28 04:17:37 ::*> snapmirror show -destination-path svm2:vol1_dst3 -fields state, status, total-progress, progress-last-updated source-path destination-path state status total-progress progress-last-updated ----------- ---------------- ------------ ------------ -------------- --------------------- svm:vol1 svm2:vol1_dst3 Snapmirrored Transferring 651.2MB 12/28 04:18:08 ::*> snapmirror show -destination-path svm2:vol1_dst3 -fields state, status, total-progress, progress-last-updated source-path destination-path state status total-progress progress-last-updated ----------- ---------------- ------------ ------ -------------- --------------------- svm:vol1 svm2:vol1_dst3 Snapmirrored Idle - - ::*> snapmirror show -destination-path svm2:vol1_dst3 Source Path: svm:vol1 Source Cluster: - Source Vserver: svm Source Volume: vol1 Destination Path: svm2:vol1_dst3 Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol1_dst3 Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: 265.8KB Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 Newest Snapshot Timestamp: 12/28 02:16:28 Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 Exported Snapshot Timestamp: 12/28 02:16:28 Healthy: false Relationship ID: e94d0e31-a526-11ee-981e-bdd56ead09c8 Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8 Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: Transfer failed. (Volume access error (No space left on device)) Last Transfer Error Codes: 6620144, 5898547, 6684700 Last Transfer Size: - Last Transfer Network Compression Ratio: - Last Transfer Duration: - Last Transfer From: svm:vol1 Last Transfer End Timestamp: 12/28 04:26:04 Unhealthy Reason: Transfer failed. 
Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 2:34:18 Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0ab6f9b00824a187c-01 Identity Preserve Vserver DR: - Number of Successful Updates: 1 Number of Failed Updates: 1 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 9060359240 Total Transfer Time in Seconds: 112 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
The transfer failed due to insufficient free space on the volume.
Check Storage Efficiency, the volume, the aggregate, and Snapshots.
::*> volume efficiency show -volume vol1, vol1_dst3 -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression vserver volume state policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume ------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- ----------------------------------------- svm vol1 Enabled auto false true efficient false true true true false svm2 vol1_dst3 Disabled - false true efficient false true true true false 2 entries were displayed. ::*> volume show -volume vol1, vol1_dst3 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 5.83GB 16GB 15.20GB 9.37GB 61% 825.3MB 8% 825.3MB 10.17GB 67% - 10.17GB - - svm2 vol1_dst3 9GB 16KB 9GB 9GB 9.00GB 99% 92.31MB 1% 8.24GB 9.05GB 101% - 9.05GB 0B 0% 2 entries were displayed. 
::*> volume show-footprint -volume vol1_dst3 Vserver : svm2 Volume : vol1_dst3 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 9.00GB 1% Footprint in Performance Tier 9.09GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 0B 0% Delayed Frees 89.38MB 0% File Operation Metadata 4KB 0% Total Footprint 9.09GB 1% Effective Total Footprint 9.09GB 1% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 282.1GB Total Physical Used: 29.97GB Total Storage Efficiency Ratio: 9.41:1 Total Data Reduction Logical Used Without Snapshots: 53.97GB Total Data Reduction Physical Used Without Snapshots: 24.30GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.22:1 Total Data Reduction Logical Used without snapshots and flexclones: 53.97GB Total Data Reduction Physical Used without snapshots and flexclones: 24.30GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.22:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 207.8GB Total Physical Used in FabricPool Performance Tier: 25.45GB Total FabricPool Performance Tier Storage Efficiency Ratio: 8.17:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 44.69GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 19.80GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.26:1 Logical Space Used for All Volumes: 53.97GB Physical Space Used for All Volumes: 33.86GB Space Saved by Volume Deduplication: 20.11GB Space Saved by Volume Deduplication and pattern detection: 20.11GB Volume Deduplication Savings ratio: 1.59:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.59:1 Logical Space Used by the Aggregate: 38.92GB Physical Space Used by the Aggregate: 29.97GB Space Saved by Aggregate Data Reduction: 8.94GB Aggregate Data Reduction SE Ratio: 1.30:1 Logical Size Used by Snapshot Copies: 228.1GB Physical Size Used by Snapshot Copies: 7.36GB Snapshot Volume Data Reduction Ratio: 30.98:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 30.98:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 3 Number of SIS Change Log Disabled Volumes: 1 ::*> snapshot show -volume vol1, vol1_dst3 ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol1 test.2023-12-22_0533 160KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 24.45MB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 312KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 292KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 212KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_041719 
300KB 0% 0% svm2 vol1_dst3 test.2023-12-22_0533 1.29MB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 384KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 388KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 284KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 232KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 220KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 428KB 0% 0% 15 entries were displayed.
No additional Snapshot had been created.
Enabling Storage Efficiency on the destination volume
Enable Storage Efficiency on the destination volume.
::*> volume efficiency on -vserver svm2 -volume vol1_dst3

Error: command failed: Failed to enable efficiency on volume "vol1_dst3" of Vserver "svm2": A SnapMirror transfer is running or paused. Use the "snapmirror show" command to view the status of the transfer. Retry this command when the transfer is complete, or run "snapmirror abort -hard true" to abort the SnapMirror transfer and clear any checkpoint data.
This is the same error as when we tried to enable Storage Efficiency earlier, so let's run snapmirror abort again as instructed.
::*> snapmirror abort -destination-path svm2:vol1_dst3

Error: command failed: No transfer to abort.

::*> snapmirror abort -destination-path svm2:vol1_dst3 -hard true

Operation is queued: snapmirror abort for the relationship with destination "svm2:vol1_dst3".
Check the SnapMirror relationship.
::*> snapmirror show -destination-path svm2:vol1_dst3 Source Path: svm:vol1 Source Cluster: - Source Vserver: svm Source Volume: vol1 Destination Path: svm2:vol1_dst3 Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol1_dst3 Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 Newest Snapshot Timestamp: 12/28 02:16:28 Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 Exported Snapshot Timestamp: 12/28 02:16:28 Healthy: false Relationship ID: e94d0e31-a526-11ee-981e-bdd56ead09c8 Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8 Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: Transfer failed. (Volume access error (No space left on device)) Last Transfer Error Codes: 6620144, 5898547, 6684700 Last Transfer Size: - Last Transfer Network Compression Ratio: - Last Transfer Duration: - Last Transfer From: svm:vol1 Last Transfer End Timestamp: 12/28 04:26:04 Unhealthy Reason: Transfer failed. Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 2:38:8 Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0ab6f9b00824a187c-01 Identity Preserve Vserver DR: - Number of Successful Updates: 1 Number of Failed Updates: 1 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 9060359240 Total Transfer Time in Seconds: 112 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
The Snapshot Checkpoint has changed from 265.8KB to -.
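If you only want to watch this one value, narrowing the output with -fields should also work. A quick sketch, assuming snapshot-checkpoint is the field name behind the Snapshot Checkpoint column:

snapmirror show -destination-path svm2:vol1_dst3 -fields snapshot-checkpoint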
Enable Storage Efficiency.
::*> volume efficiency on -vserver svm2 -volume vol1_dst3 Efficiency for volume "vol1_dst3" of Vserver "svm2" is enabled. ::*> ::*> volume efficiency show -volume vol1, vol1_dst3 -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compressionvserver volume state policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume ------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- ----------------------------------------- svm vol1 Enabled auto false true efficient false true true true false svm2 vol1_dst3 Enabled - false true efficient false true true true false 2 entries were displayed. ::*> volume show -volume vol1, vol1_dst3 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 5.83GB 16GB 15.20GB 9.37GB 61% 825.3MB 8% 825.3MB 10.17GB 67% - 10.17GB - - svm2 vol1_dst3 9GB 693.5MB 9GB 9GB 8.32GB 92% 774.5MB 8% 8.24GB 9.05GB 101% - 9.04GB 0B 0% 2 entries were displayed. 
::*> volume show-footprint -volume vol1_dst3 Vserver : svm2 Volume : vol1_dst3 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 8.32GB 1% Footprint in Performance Tier 8.57GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 56.82MB 0% Delayed Frees 249.0MB 0% File Operation Metadata 4KB 0% Total Footprint 8.62GB 1% Effective Total Footprint 8.62GB 1% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 282.1GB Total Physical Used: 29.39GB Total Storage Efficiency Ratio: 9.60:1 Total Data Reduction Logical Used Without Snapshots: 53.95GB Total Data Reduction Physical Used Without Snapshots: 23.74GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.27:1 Total Data Reduction Logical Used without snapshots and flexclones: 53.95GB Total Data Reduction Physical Used without snapshots and flexclones: 23.74GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.27:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 207.8GB Total Physical Used in FabricPool Performance Tier: 24.87GB Total FabricPool Performance Tier Storage Efficiency Ratio: 8.35:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 44.68GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 19.25GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.32:1 Logical Space Used for All Volumes: 53.95GB Physical Space Used for All Volumes: 33.17GB Space Saved by Volume Deduplication: 20.77GB Space Saved by Volume Deduplication and pattern detection: 20.77GB Volume Deduplication Savings ratio: 1.63:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.63:1 Logical Space Used by the Aggregate: 38.33GB Physical Space Used by the Aggregate: 29.39GB Space Saved by Aggregate Data Reduction: 8.94GB Aggregate Data Reduction SE Ratio: 1.30:1 Logical Size Used by Snapshot Copies: 228.1GB Physical Size Used by Snapshot Copies: 7.36GB Snapshot Volume Data Reduction Ratio: 30.98:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 30.98:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 3 Number of SIS Change Log Disabled Volumes: 0 ::*> snapshot show -volume vol1, vol1_dst3 ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol1 test.2023-12-22_0533 160KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 24.45MB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 312KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 292KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 212KB 0% 0% 
snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_041719 300KB 0% 0% svm2 vol1_dst3 test.2023-12-22_0533 1.29MB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 384KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 388KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 284KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 232KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 220KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 152KB 0% 0% 15 entries were displayed.
The volume usage has dropped from 9.00GB back to 8.32GB. Deleting the checkpoint discarded the data that had been transferred partway.
SnapMirror update
Now let's run the SnapMirror update again.
::*> snapmirror update -destination-path svm2:vol1_dst3Operation is queued: snapmirror update of destination "svm2:vol1_dst3". ::*> snapmirror show -destination-path svm2:vol1_dst3 -fields state, status, total-progress, progress-last-updatedsource-path destination-path state status total-progress progress-last-updated----------- ---------------- ------------ ------------ -------------- --------------------- svm:vol1 svm2:vol1_dst3 Snapmirrored Transferring 34.94MB 12/28 04:57:48 ::*> snapmirror show -destination-path svm2:vol1_dst3 -fields state, status, total-progress, progress-last-updated source-path destination-path state status total-progress progress-last-updated ----------- ---------------- ------------ ------------ -------------- --------------------- svm:vol1 svm2:vol1_dst3 Snapmirrored Transferring 682.8MB 12/28 04:58:35 ::*> snapmirror show -destination-path svm2:vol1_dst3 -fields state, status, total-progress, progress-last-updated source-path destination-path state status total-progress progress-last-updated ----------- ---------------- ------------ ------ -------------- --------------------- svm:vol1 svm2:vol1_dst3 Snapmirrored Idle - - ::*> snapmirror show -destination-path svm2:vol1_dst3 Source Path: svm:vol1 Source Cluster: - Source Vserver: svm Source Volume: vol1 Destination Path: svm2:vol1_dst3 Destination Cluster: - Destination Vserver: svm2 Destination Volume: vol1_dst3 Relationship Type: XDP Relationship Group Type: none Managing Vserver: svm2 SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Consistency Group Item Mappings: - Current Transfer Throttle (KB/sec): - Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Network Compression Ratio: - Snapshot Checkpoint: 262.5KB Newest Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 Newest Snapshot Timestamp: 12/28 02:16:28 Exported Snapshot: snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 Exported Snapshot Timestamp: 12/28 02:16:28 Healthy: false Relationship ID: e94d0e31-a526-11ee-981e-bdd56ead09c8 Source Vserver UUID: 04ee1778-a058-11ee-981e-bdd56ead09c8 Destination Vserver UUID: 5af907bb-a065-11ee-981e-bdd56ead09c8 Current Operation ID: - Transfer Type: - Transfer Error: - Last Transfer Type: update Last Transfer Error: Transfer failed. (Volume access error (No space left on device)) Last Transfer Error Codes: 6620144, 5898547, 6684700 Last Transfer Size: - Last Transfer Network Compression Ratio: - Last Transfer Duration: - Last Transfer From: svm:vol1 Last Transfer End Timestamp: 12/28 05:06:22 Unhealthy Reason: Transfer failed. Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 2:55:12 Current Transfer Priority: - SMTape Operation: - Destination Volume Node Name: FsxId0ab6f9b00824a187c-01 Identity Preserve Vserver DR: - Number of Successful Updates: 1 Number of Failed Updates: 2 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 9060359240 Total Transfer Time in Seconds: 112 Source Volume MSIDs Preserved: - OpMask: ffffffffffffffff Is Auto Expand Enabled: - Percent Complete for Current Status: -
The transfer was aborted because the destination volume ran out of free space.
Let's check the Storage Efficiency operation status.
::*> volume efficiency show -volume vol1_dst3 Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol1_dst3 Enabled Idle Idle for 00:03:10 -
No Storage Efficiency operation is running.
Let's check Storage Efficiency, the volumes, the aggregate, and the Snapshots.
::*> volume efficiency show -volume vol1, vol1_dst3 -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression vserver volume state policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume ------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- ----------------------------------------- svm vol1 Enabled auto false true efficient false true true true false svm2 vol1_dst3 Enabled - false true efficient false true true true false 2 entries were displayed. ::*> volume show -volume vol1, vol1_dst3 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 5.83GB 16GB 15.20GB 9.37GB 61% 825.3MB 8% 825.3MB 10.17GB 67% - 10.17GB - - svm2 vol1_dst3 9GB 72KB 9GB 9GB 9.00GB 99% 95.56MB 1% 8.24GB 9.06GB 101% - 9.05GB 0B 0% 2 entries were displayed. 
::*> volume show-footprint -volume vol1_dst3 Vserver : svm2 Volume : vol1_dst3 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 9.00GB 1% Footprint in Performance Tier 9.01GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 50.61MB 0% Delayed Frees 6.29MB 0% File Operation Metadata 4KB 0% Total Footprint 9.05GB 1% Effective Total Footprint 9.05GB 1% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0ab6f9b00824a187c-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 282.1GB Total Physical Used: 30.04GB Total Storage Efficiency Ratio: 9.39:1 Total Data Reduction Logical Used Without Snapshots: 53.96GB Total Data Reduction Physical Used Without Snapshots: 24.36GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.21:1 Total Data Reduction Logical Used without snapshots and flexclones: 53.96GB Total Data Reduction Physical Used without snapshots and flexclones: 24.36GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.21:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 207.8GB Total Physical Used in FabricPool Performance Tier: 25.52GB Total FabricPool Performance Tier Storage Efficiency Ratio: 8.14:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 44.69GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 19.87GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.25:1 Logical Space Used for All Volumes: 53.96GB Physical Space Used for All Volumes: 33.85GB Space Saved by Volume Deduplication: 20.11GB Space Saved by Volume Deduplication and pattern detection: 20.11GB Volume Deduplication Savings ratio: 1.59:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.59:1 Logical Space Used by the Aggregate: 38.98GB Physical Space Used by the Aggregate: 30.04GB Space Saved by Aggregate Data Reduction: 8.94GB Aggregate Data Reduction SE Ratio: 1.30:1 Logical Size Used by Snapshot Copies: 228.1GB Physical Size Used by Snapshot Copies: 7.36GB Snapshot Volume Data Reduction Ratio: 30.98:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 30.98:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 3 Number of SIS Change Log Disabled Volumes: 0 ::*> snapshot show -volume vol1, vol1_dst3 ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol1 test.2023-12-22_0533 160KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 24.45MB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 312KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 292KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 148KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 292KB 0% 0% 
snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_045744 156KB 0% 0% svm2 vol1_dst3 test.2023-12-22_0533 1.29MB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_053528 384KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779289.2023-12-22_065900 388KB 0% 0% snapmirror.674509fa-a065-11ee-981e-bdd56ead09c8_2148779290.2023-12-22_075507 284KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_050406 232KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779291.2023-12-25_060027 220KB 0% 0% snapmirror.5af907bb-a065-11ee-981e-bdd56ead09c8_2148779294.2023-12-28_021628 432KB 0% 0% 15 entries were displayed.
I had hoped that Storage Efficiency would reduce the data during the SnapMirror transfer and let the whole transfer complete, but it could not.
Prepare a destination volume at least as large as the source volume
We have verified the behavior when the SnapMirror destination volume is too small.
We confirmed that the approach introduced at the beginning is not feasible. A better option is a cascade SnapMirror configuration, in which data that has already been reduced by Storage Efficiency is transferred to the final destination FSxN file system.
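As a rough sketch of that cascade flow, reusing the SVM and volume names from the verification environment (svm:vol1 → svm2:vol1_dst → svm3:vol1_dst_dst), the relationships would look something like the following. These are illustrative commands, not the exact ones used in this verification, and they assume each destination volume has already been created as a type DP volume.
::*> snapmirror create -source-path svm:vol1 -destination-path svm2:vol1_dst -policy MirrorAllSnapshots
::*> snapmirror initialize -destination-path svm2:vol1_dst
::*> snapmirror create -source-path svm2:vol1_dst -destination-path svm3:vol1_dst_dst -policy MirrorAllSnapshots
::*> snapmirror initialize -destination-path svm3:vol1_dst_dst
With this layout, Storage Efficiency is applied on the intermediate SSD-backed volume vol1_dst, and the second leg carries the already-reduced data to the final destination.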
Make sure the destination volume is at least as large as the source volume. Note that volumes of type DP have volume autosize enabled by default.
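For reference, a hedged example of checking and adjusting autosize on the destination volume follows; the grow mode and the 20GB maximum are illustrative values, not settings taken from this verification.
::*> volume show -volume vol1_dst3 -fields autosize-mode, max-autosize
::*> volume autosize -vserver svm2 -volume vol1_dst3 -mode grow -maximum-size 20GB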
We also confirmed that, thanks to the restart checkpoint, a transfer can continue from the data that has already been sent.
If a SnapMirror baseline transfer fails during a migration, there can be various causes, such as a dropped network connection, an aborted transfer, or a controller failover. After fixing the cause of the failure, you can resume the SnapMirror transfer if a restart checkpoint exists.
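A minimal sketch of that recovery flow with the volumes in this article (the 16GB new size is only an illustrative value): confirm that a restart checkpoint exists, expand the destination volume, then rerun the transfer so it resumes from the checkpoint. Rerun snapmirror initialize if the failed transfer was a baseline, or snapmirror update for an incremental transfer as in this section.
::*> snapmirror show -destination-path svm2:vol1_dst3 -fields snapshot-checkpoint
::*> volume size -vserver svm2 -volume vol1_dst3 -new-size 16GB
::*> snapmirror update -destination-path svm2:vol1_dst3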
I hope this article helps someone.
That's all from NonPi (@non____97), Consulting Department, AWS Business Division!