[Amazon FSx for NetApp ONTAP] Checking whether aggregate-layer data reduction savings survive after data is tiered to capacity pool storage and then written back to SSD
Do aggregate-layer data reduction savings survive when capacity pool storage data is written back to SSD?
Hello, this is のんピ (@non____97).
Have you ever wondered whether, in Amazon FSx for NetApp ONTAP (FSxN from here on), the aggregate-layer data reduction savings are preserved when data on capacity pool storage is written back to SSD? I have.
In the article below, I confirmed that aggregate-layer data reduction savings such as TSSE compression and compaction are not preserved when data on capacity pool storage is transferred with SnapMirror.
That made me wonder whether aggregate-layer data reduction savings survive when capacity pool storage data is written back to SSD.
If the savings from compression and compaction were lost, then at the moment the data is written back to SSD, more data than its physical size on capacity pool storage would land on the SSD tier, putting pressure on SSD capacity.
So I tried it out.
Summary up front
- ~~Aggregate-layer data reduction savings are preserved even after data is moved to capacity pool storage and written back to SSD~~ Re-testing revealed that writing capacity pool storage data back to SSD loses the aggregate-layer data reduction savings
- Compaction cannot be restricted at the volume level
- As observed before, aggregate-layer data reduction savings are not preserved when the SnapMirror source data is on capacity pool storage
- In other words, aggregate-layer data reduction savings are lost at the moment the data is tiered to capacity pool storage
- If the Tiering Policy's cooling days and Inactive data compression's threshold days are close together, the benefit of Inactive data compression is hard to notice
- With the Auto tiering policy, the benefit is easier to see if the threshold days and cooling days are at least one week apart
- With the None or Snapshot Only tiering policies, the compression savings are easier to notice
Trying it out
Test environment
The test environment is as follows.
The FSxN file system runs ONTAP 9.13.1P6.
::*> version
NetApp Release 9.13.1P6: Tue Dec 05 16:06:25 UTC 2023
The Storage Efficiency, volume, and aggregate information in the pristine state is as follows.
::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:08:44  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ----- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol1 16GB 15.20GB 16GB 15.20GB 312KB 0% 0B 0% 0B 312KB 0% - 312KB 0B 0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                Used       Used%
      --------------------------------       ---------- -----
      Volume Data Footprint                  312KB      0%
      Footprint in Performance Tier          2.08MB     100%
      Footprint in FSxFabricpoolObjectStore  0B         0%
      Volume Guarantee                       0B         0%
      Flexible Volume Metadata               92.66MB    0%
      Delayed Frees                          1.78MB     0%
      File Operation Metadata                4KB        0%
      Total Footprint                        94.75MB    0%
      Effective Total Footprint              94.75MB    0%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 132KB
Total Physical Used: 280KB
Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 132KB
Total Data Reduction Physical Used Without Snapshots: 280KB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 132KB
Total Data Reduction Physical Used without snapshots and flexclones: 280KB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 612KB
Total Physical Used in FabricPool Performance Tier: 3.99MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 612KB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.99MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Logical Space Used for All Volumes: 132KB
Physical Space Used for All Volumes: 132KB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 280KB
Physical Space Used by the Aggregate: 280KB
Space Saved by Aggregate Data Reduction: 0B
Aggregate Data Reduction SE Ratio: 1.00:1
Logical Size Used by Snapshot Copies: 0B
Physical Size Used by Snapshot Copies: 0B
Snapshot Volume Data Reduction Ratio: 1.00:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1 860.7GB 861.8GB 1.11GB 38.39MB 0% 0B 0% 0B 0B 0B 0% 0B -

::*> aggr show -instance

Aggregate: aggr1
Storage Type: ssd
Checksum Style: advanced_zoned
Number Of Disks: 8
Is Mirrored: true
Disks for First Plex: NET-2.4, NET-2.6, NET-2.3, NET-2.5
Disks for Mirrored Plex: NET-1.3, NET-1.6, NET-1.4, NET-1.5
Partitions for First Plex: -
Partitions for Mirrored Plex: -
Node: FsxId01f6f549cf0cabbb9-01
Free Space Reallocation: off
HA Policy: sfo
Ignore Inconsistent: off
Space Reserved for Snapshot Copies: 5%
Aggregate Nearly Full Threshold Percent: 93%
Aggregate Full Threshold Percent: 96%
Checksum Verification: on
RAID Lost Write: off
Enable Thorough Scrub: -
Hybrid Enabled: false
Available Size: 860.7GB
Checksum Enabled: true
Checksum Status: active
Cluster: FsxId01f6f549cf0cabbb9
Home Cluster ID: 63b0c2ff-aaa8-11ee-b370-e151fd8b9252
DR Home ID: -
DR Home Name: -
Inofile Version: 4
Has Mroot Volume: false
Has Partner Node Mroot Volume: false
Home ID: 3323133971
Home Name: FsxId01f6f549cf0cabbb9-01
Total Hybrid Cache Size: 0B
Hybrid: false
Inconsistent: false
Is Aggregate Home: true
Max RAID Size: 4
Flash Pool SSD Tier Maximum RAID Group Size: -
Owner ID: 3323133971
Owner Name: FsxId01f6f549cf0cabbb9-01
Used Percentage: 0%
Plexes: /aggr1/plex0, /aggr1/plex1
RAID Groups: /aggr1/plex0/rg0 (advanced_zoned) /aggr1/plex1/rg0 (advanced_zoned)
RAID Lost Write State: off
RAID Status: raid0, mirrored, normal
RAID Type: raid0
SyncMirror Resync Snapshot Frequency in Minutes: 5
Is Root: false
Space Used by Metadata for Volume Efficiency: 0B
Size: 861.8GB
State: online
Maximum Write Alloc Blocks: 0
Used Size: 1.11GB
Uses Shared Disks: false
UUID String: dd5f737e-aaa8-11ee-b370-e151fd8b9252
Number Of Volumes: 2
Is Flash Pool Caching: -
Is Eligible for Auto Balance Aggregate: false
State of the aggregate being balanced: ineligible
Total Physical Used Size: 39.47MB
Physical Used Percentage: 0%
State Change Counter for Auto Balancer: 0
SnapLock Type: non-snaplock
Is NVE Capable: false
Is in the precommit phase of Copy-Free Transition: false
Is a 7-Mode transitioning aggregate that is not yet committed in clustered Data ONTAP and is currently out of space: false
Threshold When Aggregate Is Considered Unbalanced (%): 70
Threshold When Aggregate Is Considered Balanced (%): 40
Resynchronization Priority: low
Space Saved by Data Compaction: 0B
Percentage Saved by Data Compaction: 0%
Amount of compacted data: 0B
Timestamp of Aggregate Creation: 1/4/2024 02:27:59
Enable SIDL: off
Composite: true
Is FabricPool Mirrored: false
Capacity Tier Used Size: 0B
Space Saved by Storage Efficiency: 0B
Percentage of Space Saved by Storage Efficiency: 0%
Amount of Shared bytes count by Storage Efficiency: 0B
Inactive Data Reporting Enabled: -
Timestamp when Inactive Data Reporting was Enabled: -
Enable Aggregate level Encryption: false
Aggregate uses data protected SEDs: false
azcs read optimization: on
Metadata Reserve Space Required For Revert: 0B
Writing test files
Mount the volume from an NFS client and write a test file.
$ sudo mount -t nfs svm-030dabe42a4b95e03.fs-01f6f549cf0cabbb9.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-030dabe42a4b95e03.fs-01f6f549cf0cabbb9.fsx.us-east-1.amazonaws.com:/vol1 nfs4   16G  320K   16G   1% /mnt/fsxn/vol1

$ yes abcde | tr -d '\n' | sudo dd of=/mnt/fsxn/vol1/abcde_padding_file bs=4K count=256K
262144+0 records in
262144+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.31027 s, 170 MB/s
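As an aside, the same `dd` pattern can be tried locally without an FSxN mount to see what it produces. A minimal sketch, assuming any writable directory; the `/tmp` path and the smaller 1 MiB size are illustrative, not values from this test:

```shell
#!/bin/sh
# Generate highly repetitive (and therefore highly compressible) test
# data the same way as above, but smaller: 256 blocks of 4 KiB.
# /tmp/abcde_sample is an illustrative path.
yes abcde | tr -d '\n' | dd of=/tmp/abcde_sample bs=4K count=256 iflag=fullblock 2>/dev/null

# The file should be exactly 256 * 4096 = 1048576 bytes (1 MiB)
wc -c < /tmp/abcde_sample
```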
The Storage Efficiency, volume, and aggregate information is as follows.
::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:16:16  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol1 16GB 14.20GB 16GB 15.20GB 1.00GB 6% 0B 0% 0B 1.00GB 7% - 1.00GB 0B 0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                Used       Used%
      --------------------------------       ---------- -----
      Volume Data Footprint                  1.00GB     0%
      Footprint in Performance Tier          1.01GB     100%
      Footprint in FSxFabricpoolObjectStore  0B         0%
      Volume Guarantee                       0B         0%
      Flexible Volume Metadata               92.66MB    0%
      Delayed Frees                          2.74MB     0%
      File Operation Metadata                4KB        0%
      Total Footprint                        1.10GB     0%
      Effective Total Footprint              1.10GB     0%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 1.00GB
Total Physical Used: 66.16MB
Total Storage Efficiency Ratio: 15.54:1
Total Data Reduction Logical Used Without Snapshots: 1.00GB
Total Data Reduction Physical Used Without Snapshots: 66.16MB
Total Data Reduction Efficiency Ratio Without Snapshots: 15.54:1
Total Data Reduction Logical Used without snapshots and flexclones: 1.00GB
Total Data Reduction Physical Used without snapshots and flexclones: 66.16MB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 15.54:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 1.00GB
Total Physical Used in FabricPool Performance Tier: 73.82MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 13.94:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.00GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 73.82MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 13.94:1
Logical Space Used for All Volumes: 1.00GB
Physical Space Used for All Volumes: 1.00GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 1.05GB
Physical Space Used by the Aggregate: 66.16MB
Space Saved by Aggregate Data Reduction: 1012MB
Aggregate Data Reduction SE Ratio: 16.30:1
Logical Size Used by Snapshot Copies: 0B
Physical Size Used by Snapshot Copies: 0B
Snapshot Volume Data Reduction Ratio: 1.00:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1 860.6GB 861.8GB 1.17GB 78.04MB 0% 1012MB 46% 8.09MB 0B 1012MB 46% 8.09MB -
Compaction saved about 1,012 MB. As a result, even though 1 GiB of data was written, the aggregate's physical usage is only 66.16 MB.
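These numbers are self-consistent: the physical usage plus the compaction savings should come out close to the "Logical Space Used by the Aggregate" value of 1.05 GB. A quick sanity check, using the values above rounded to whole MB:

```shell
#!/bin/sh
# Values copied (rounded to MB) from the aggr show-efficiency output above.
physical_mb=66    # Physical Space Used by the Aggregate: 66.16MB
saved_mb=1012     # Space Saved by Aggregate Data Reduction: 1012MB

# Should land near the reported "Logical Space Used by the Aggregate:
# 1.05GB" (~1075 MB); the small drift comes from rounding.
echo "$((physical_mb + saved_mb)) MB"
```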
Incidentally, checking the compaction state via Storage Efficiency showed it as disabled. It seems compaction cannot be restricted at the volume level.
::*> volume efficiency show -fields data-compaction
vserver volume data-compaction
------- ------ ---------------
svm     vol1   false
Compaction is a feature that reduces free space inside data blocks by consolidating pieces of data smaller than 4 KiB into a single 4 KiB data block.
The input block size for the earlier test file was 4 KiB. As an experiment, let's add a test file written with a 4 MiB input block size and see how well compaction works.
$ yes ABCDE | tr -d '\n' | sudo dd of=/mnt/fsxn/vol1/ABCDE_padding_file_4MiB_512 bs=4M count=512 iflag=fullblock
512+0 records in
512+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 13.7884 s, 156 MB/s

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-030dabe42a4b95e03.fs-01f6f549cf0cabbb9.fsx.us-east-1.amazonaws.com:/vol1 nfs4   16G  3.1G   13G  20% /mnt/fsxn/vol1
The Storage Efficiency, volume, and aggregate information after adding the test file is as follows.
::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:26:57  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol1 16GB 12.19GB 16GB 15.20GB 3.01GB 19% 0B 0% 0B 3.01GB 20% - 3.01GB 0B 0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                Used       Used%
      --------------------------------       ---------- -----
      Volume Data Footprint                  3.01GB     0%
      Footprint in Performance Tier          3.01GB     100%
      Footprint in FSxFabricpoolObjectStore  0B         0%
      Volume Guarantee                       0B         0%
      Flexible Volume Metadata               92.66MB    0%
      Delayed Frees                          2.48MB     0%
      File Operation Metadata                4KB        0%
      Total Footprint                        3.11GB     0%
      Effective Total Footprint              3.11GB     0%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3.01GB
Total Physical Used: 126.3MB
Total Storage Efficiency Ratio: 24.43:1
Total Data Reduction Logical Used Without Snapshots: 3.01GB
Total Data Reduction Physical Used Without Snapshots: 126.3MB
Total Data Reduction Efficiency Ratio Without Snapshots: 24.43:1
Total Data Reduction Logical Used without snapshots and flexclones: 3.01GB
Total Data Reduction Physical Used without snapshots and flexclones: 126.3MB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 24.43:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 3.01GB
Total Physical Used in FabricPool Performance Tier: 140.5MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 21.97:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.01GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 140.5MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 21.97:1
Logical Space Used for All Volumes: 3.01GB
Physical Space Used for All Volumes: 3.01GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 3.09GB
Physical Space Used by the Aggregate: 126.3MB
Space Saved by Aggregate Data Reduction: 2.96GB
Aggregate Data Reduction SE Ratio: 25.05:1
Logical Size Used by Snapshot Copies: 0B
Physical Size Used by Snapshot Copies: 0B
Snapshot Volume Data Reduction Ratio: 1.00:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 1.00:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1 860.5GB 861.8GB 1.23GB 144.5MB 0% 2.96GB 71% 24.24MB 0B 2.96GB 71% 24.24MB -
Data compaction saved 2.96 GB. Changing the input block size does not appear to change how well compaction works.
Personally, that feels like more reduction than compaction alone should deliver.
As noted earlier, compaction reduces free space within data blocks. It does not shrink the data that was written; rather, it is a mechanism for avoiding wasted space when storing that data.
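As a rough model of that packing behavior: suppose each 4 KiB block held only 1 KiB of actual data, so compaction could pack four such blocks into one. A sketch with made-up numbers; the 1 KiB payload is an assumption for illustration, not a measured value:

```shell
#!/bin/sh
# Hypothetical compaction arithmetic: 1 GiB written as 4 KiB blocks,
# each assumed to hold only 1 KiB of useful data.
block=4096        # WAFL block size in bytes
payload=1024      # assumed useful bytes per block (illustrative)
blocks=262144     # 1 GiB / 4 KiB

before=$((blocks * block))      # physical bytes without compaction
after=$((blocks * payload))     # packed tightly: 4 partial blocks -> 1
echo "before=${before} after=${after} saved=$((before - after))"
```

Under that assumption, compaction alone saves 75% of the physical space, which is why block packing can produce such large savings without touching the data itself.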
The test files are filled end to end with the repeating string ABCDE, so I am very curious how most of that data was actually handled.
Since the goal this time is to see how writing data back from capacity pool storage to SSD affects aggregate-layer data reduction savings, I will stop digging into this here.
Running Inactive data compression
Now let's run Inactive data compression.
First, enable Inactive data compression.
::*> volume efficiency on -vserver svm -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is enabled.

::*> volume efficiency modify -vserver svm -volume vol1 -compression true

::*> volume efficiency inactive-data-compression show
Vserver    Volume Is-Enabled Scan Mode Progress Status  Compression-Algorithm
---------- ------ ---------- --------- -------- ------- ---------------------
svm        vol1   false      -         IDLE     SUCCESS lzopro

::*> volume efficiency inactive-data-compression modify -vserver svm -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression show
Vserver    Volume Is-Enabled Scan Mode Progress Status  Compression-Algorithm
---------- ------ ---------- --------- -------- ------- ---------------------
svm        vol1   true       -         IDLE     SUCCESS lzopro
Compress all the data with Inactive data compression.
::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -instance

Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 7032
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 3800
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 4
Time since Last Inactive Data Compression Scan ended(sec): 4
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 4
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%
The Storage Efficiency, volume, and aggregate information is as follows.
::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 00:32:21  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol1 16GB 12.19GB 16GB 15.20GB 3.01GB 19% 0B 0% 0B 3.01GB 20% - 3.01GB 0B 0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                Used       Used%
      --------------------------------       ---------- -----
      Volume Data Footprint                  3.01GB     0%
      Footprint in Performance Tier          3.02GB     100%
      Footprint in FSxFabricpoolObjectStore  0B         0%
      Volume Guarantee                       0B         0%
      Flexible Volume Metadata               92.66MB    0%
      Delayed Frees                          4.69MB     0%
      File Operation Metadata                4KB        0%
      Total Footprint                        3.11GB     0%
      Footprint Data Reduction               2.98GB     0%
           Auto Adaptive Compression         2.98GB     0%
      Effective Total Footprint              127.2MB    0%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3GB
Total Physical Used: 131.9MB
Total Storage Efficiency Ratio: 23.29:1
Total Data Reduction Logical Used Without Snapshots: 3.00GB
Total Data Reduction Physical Used Without Snapshots: 131.9MB
Total Data Reduction Efficiency Ratio Without Snapshots: 23.29:1
Total Data Reduction Logical Used without snapshots and flexclones: 3.00GB
Total Data Reduction Physical Used without snapshots and flexclones: 131.9MB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 23.29:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 3.01GB
Total Physical Used in FabricPool Performance Tier: 159.8MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 19.31:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.01GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 159.8MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 19.31:1
Logical Space Used for All Volumes: 3.00GB
Physical Space Used for All Volumes: 3.00GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 3.11GB
Physical Space Used by the Aggregate: 131.9MB
Space Saved by Aggregate Data Reduction: 2.98GB
Aggregate Data Reduction SE Ratio: 24.13:1
Logical Size Used by Snapshot Copies: 324KB
Physical Size Used by Snapshot Copies: 132KB
Snapshot Volume Data Reduction Ratio: 2.45:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.45:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1 860.5GB 861.8GB 1.26GB 166.5MB 0% 2.98GB 70% 24.40MB 0B 2.98GB 70% 24.40MB -
volume show-footprint now reports Auto Adaptive Compression as 2.98 GB. Meanwhile, Space Saved by Aggregate Data Reduction and data-compaction-space-saved both also show 2.98 GB.
In addition, even after running Inactive data compression, the number of compressed data blocks is only 3,800. Even if those compressed blocks shrank to almost nothing, the savings would be capped at 3,800 blocks × 4 KiB = 15,200 KiB.
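That ceiling can be checked with trivial arithmetic:

```shell
#!/bin/sh
# Upper bound on savings from Inactive data compression alone: even if
# all 3,800 compressed blocks shrank to zero bytes, at 4 KiB per block
# the savings cannot exceed this many KiB.
blocks=3800
max_saved_kib=$((blocks * 4))
echo "${max_saved_kib} KiB"   # ~14.8 MiB, nowhere near the 2.98 GB reported
```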
Therefore, the Auto Adaptive Compression value reported by volume show-footprint appears to include the savings from data compaction as well. It feels quite odd for compaction savings to be lumped under Auto Adaptive Compression, but that seems to be the only way to read it.
Changing the Tiering Policy to All
Change the volume's Tiering Policy to All so that the data is tiered to capacity pool storage.
::*> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   none

::*> volume modify -vserver svm -volume vol1 -tiering-policy all
Volume modify successful on volume vol1 of Vserver svm.

::*> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   all

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                    Used       Used%
      --------------------------------           ---------- -----
      Volume Data Footprint                      3.01GB     0%
      Footprint in Performance Tier              3.02GB     100%
      Footprint in FSxFabricpoolObjectStore      0B         0%
      Volume Guarantee                           0B         0%
      Flexible Volume Metadata                   92.66MB    0%
      Delayed Frees                              5.01MB     0%
      File Operation Metadata                    4KB        0%
      Total Footprint                            3.11GB     0%
      Footprint Data Reduction                   2.98GB     0%
           Auto Adaptive Compression             2.98GB     0%
      Effective Total Footprint                  127.2MB    0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                    Used       Used%
      --------------------------------           ---------- -----
      Volume Data Footprint                      3.01GB     0%
      Footprint in Performance Tier              1.23GB     41%
      Footprint in FSxFabricpoolObjectStore      1.78GB     59%
      Volume Guarantee                           0B         0%
      Flexible Volume Metadata                   92.66MB    0%
      Delayed Frees                              6.08MB     0%
      File Operation Metadata                    4KB        0%
      Total Footprint                            3.11GB     0%
      Footprint Data Reduction                   1.22GB     0%
           Auto Adaptive Compression             1.22GB     0%
      Footprint Data Reduction in capacity tier  1.57GB     -
      Effective Total Footprint                  326.0MB    0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                    Used       Used%
      --------------------------------           ---------- -----
      Volume Data Footprint                      3.01GB     0%
      Footprint in Performance Tier              568.4MB    18%
      Footprint in FSxFabricpoolObjectStore      2.46GB     82%
      Volume Guarantee                           0B         0%
      Flexible Volume Metadata                   92.66MB    0%
      Delayed Frees                              6.38MB     0%
      File Operation Metadata                    4KB        0%
      Total Footprint                            3.11GB     0%
      Footprint Data Reduction                   562.0MB    0%
           Auto Adaptive Compression             562.0MB    0%
      Footprint Data Reduction in capacity tier  2.14GB     -
      Effective Total Footprint                  427.1MB    0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                    Used       Used%
      --------------------------------           ---------- -----
      Volume Data Footprint                      3.01GB     0%
      Footprint in Performance Tier              20.36MB    1%
      Footprint in FSxFabricpoolObjectStore      3GB        99%
      Volume Guarantee                           0B         0%
      Flexible Volume Metadata                   92.66MB    0%
      Delayed Frees                              6.69MB     0%
      File Operation Metadata                    4KB        0%
      Total Footprint                            3.11GB     0%
      Footprint Data Reduction                   20.13MB    0%
           Auto Adaptive Compression             20.13MB    0%
      Footprint Data Reduction in capacity tier  2.64GB     -
      Effective Total Footprint                  461.5MB    0%
99% of the data has been tiered to capacity pool storage.
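When polling repeatedly like this, pulling just the capacity-pool share out of the footprint output saves some squinting. A minimal sketch: the here-doc below is a trimmed sample of the output shown above, and in practice the same awk could be fed from `ssh fsxadmin@<management-endpoint> "volume show-footprint -volume vol1"` (that ssh invocation is an assumption; adjust to your environment):

```shell
#!/bin/sh
# Extract the FSxFabricpoolObjectStore percentage (the last field on
# its line) from `volume show-footprint`-style output.
awk '/FSxFabricpoolObjectStore/ {print $NF}' <<'EOF'
Volume Data Footprint                  3.01GB   0%
Footprint in Performance Tier          20.36MB  1%
Footprint in FSxFabricpoolObjectStore  3GB      99%
EOF
```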
The Storage Efficiency, volume, and aggregate information is as follows.
::*> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 00:37:52  auto

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol1 16GB 12.19GB 16GB 15.20GB 3.01GB 19% 0B 0% 0B 3.01GB 20% - 3.01GB - -

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                    Used       Used%
      --------------------------------           ---------- -----
      Volume Data Footprint                      3.01GB     0%
      Footprint in Performance Tier              32.69MB    1%
      Footprint in FSxFabricpoolObjectStore      3GB        99%
      Volume Guarantee                           0B         0%
      Flexible Volume Metadata                   92.66MB    0%
      Delayed Frees                              18.96MB    0%
      File Operation Metadata                    4KB        0%
      Total Footprint                            3.12GB     0%
      Footprint Data Reduction                   32.32MB    0%
           Auto Adaptive Compression             32.32MB    0%
      Footprint Data Reduction in capacity tier  2.64GB     -
      Effective Total Footprint                  461.7MB    0%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3GB
Total Physical Used: 305.3MB
Total Storage Efficiency Ratio: 10.06:1
Total Data Reduction Logical Used Without Snapshots: 3.00GB
Total Data Reduction Physical Used Without Snapshots: 305.3MB
Total Data Reduction Efficiency Ratio Without Snapshots: 10.06:1
Total Data Reduction Logical Used without snapshots and flexclones: 3.00GB
Total Data Reduction Physical Used without snapshots and flexclones: 305.3MB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 10.06:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 33.14MB
Total Physical Used in FabricPool Performance Tier: 231.9MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.83MB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 231.9MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Logical Space Used for All Volumes: 3.00GB
Physical Space Used for All Volumes: 3.00GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 3.28GB
Physical Space Used by the Aggregate: 305.3MB
Space Saved by Aggregate Data Reduction: 2.98GB
Aggregate Data Reduction SE Ratio: 10.99:1
Logical Size Used by Snapshot Copies: 324KB
Physical Size Used by Snapshot Copies: 132KB
Snapshot Volume Data Reduction Ratio: 2.45:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.45:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1 860.6GB 861.8GB 1.16GB 253.1MB 0% 2.98GB 72% 24.40MB 353.8MB 2.98GB 72% 24.40MB -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier

      Feature                           Used       Used%
      --------------------------------  ---------- ------
      Volume Footprints                 1.13GB     0%
      Aggregate Metadata                3.01GB     0%
      Snapshot Reserve                  45.36GB    5%
      Total Used                        46.52GB    5%

      Total Physical Used               255.2MB    0%
      Total Provisioned Space           17GB       2%

      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore

      Feature                           Used       Used%
      --------------------------------  ---------- ------
      Logical Used                      3.03GB     -
      Logical Referenced Capacity       3.01GB     -
      Logical Unreferenced Capacity     14.98MB    -
      Space Saved by Storage Efficiency 2.68GB     -
      Total Physical Used               353.8MB    -

2 entries were displayed.
The values of Space Saved by Aggregate Data Reduction and data-compaction-space-saved remained unchanged at 2.98GB.
On the other hand, Space Saved by Storage Efficiency for the capacity pool storage is 2.68GB, so the data reduction savings do not appear to be fully preserved.
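As a rough cross-check (my own arithmetic, not an ONTAP-documented formula), the capacity tier's Total Physical Used should be approximately its Logical Used minus Space Saved by Storage Efficiency, using the rounded values from the `aggr show-space` output above:

```python
def capacity_tier_physical_gb(logical_used_gb: float, se_saved_gb: float) -> float:
    """Estimate capacity-tier physical usage as logical used minus
    storage-efficiency savings (values as reported by `aggr show-space`)."""
    return logical_used_gb - se_saved_gb

# Reported: Logical Used 3.03GB, Space Saved by Storage Efficiency 2.68GB
estimate = capacity_tier_physical_gb(3.03, 2.68)
print(f"{estimate:.2f} GB")  # ~0.35GB, close to the reported Total Physical Used of 353.8MB
```

The small gap from the reported 353.8MB is consistent with the GB values being rounded to two decimal places.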
Writing back to SSD
Write the capacity pool storage data back to the SSD tier.
::*> volume modify -vserver svm -volume vol1 -tiering-policy none -cloud-retrieval-policy promote Warning: The "promote" cloud retrieve policy retrieves all of the cloud data for the specified volume. If the tiering policy is "snapshot-only" then only AFS data is retrieved. If the tiering policy is "none" then all data is retrieved. Volume "vol1" in Vserver "svm" is on a FabricPool, and there are approximately 3221225472 bytes tiered to the cloud that will be retrieved. Cloud retrieval may take a significant amount of time, and may degrade performance during that time. The cloud retrieve operation may also result in data charges by your object store provider. Do you want to continue? {y|n}: y Volume modify successful on volume vol1 of Vserver svm. ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.01GB 0% Footprint in Performance Tier 33MB 1% Footprint in FSxFabricpoolObjectStore 3GB 99% Volume Guarantee 0B 0% Flexible Volume Metadata 92.66MB 0% Delayed Frees 19.27MB 0% File Operation Metadata 4KB 0% Total Footprint 3.12GB 0% Footprint Data Reduction 32.63MB 0% Auto Adaptive Compression 32.63MB 0% Footprint Data Reduction in capacity tier 2.64GB - Effective Total Footprint 461.7MB 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.01GB 0% Footprint in Performance Tier 1.49GB 49% Footprint in FSxFabricpoolObjectStore 1.54GB 51% Volume Guarantee 0B 0% Flexible Volume Metadata 92.66MB 0% Delayed Frees 20.01MB 0% File Operation Metadata 4KB 0% Total Footprint 3.12GB 0% Footprint Data Reduction 1.48GB 0% Auto Adaptive Compression 1.48GB 0% Footprint Data Reduction in capacity tier 1.35GB - Effective Total Footprint 298.9MB 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- 
---------- ----- Volume Data Footprint 3.01GB 0% Footprint in Performance Tier 2.84GB 94% Footprint in FSxFabricpoolObjectStore 200.0MB 6% Volume Guarantee 0B 0% Flexible Volume Metadata 92.66MB 0% Delayed Frees 20.60MB 0% File Operation Metadata 4KB 0% Total Footprint 3.12GB 0% Footprint Data Reduction 2.81GB 0% Auto Adaptive Compression 2.81GB 0% Footprint Data Reduction in capacity tier 176.0MB - Effective Total Footprint 149.1MB 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.01GB 0% Footprint in Performance Tier 3.03GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 92.66MB 0% Delayed Frees 20.75MB 0% File Operation Metadata 4KB 0% Total Footprint 3.12GB 0% Footprint Data Reduction 3.00GB 0% Auto Adaptive Compression 3.00GB 0% Effective Total Footprint 127.4MB 0%
The write-back is complete.
Auto Adaptive Compression, which was 32.32MB before the write-back, is now 3.00GB.
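For reference, Effective Total Footprint in `volume show-footprint` appears to be the total footprint minus the Footprint Data Reduction entries (performance tier and capacity tier). A minimal sketch under that assumption, using the rounded values from the final output above:

```python
def effective_footprint_gb(total_gb: float,
                           perf_reduction_gb: float,
                           capacity_reduction_gb: float = 0.0) -> float:
    """Effective footprint = total footprint minus the data-reduction entries
    (assumed relationship, consistent with the outputs in this post)."""
    return total_gb - perf_reduction_gb - capacity_reduction_gb

# Final output: Total Footprint 3.12GB, Auto Adaptive Compression 3.00GB
print(round(effective_footprint_gb(3.12, 3.00), 2))  # 0.12 (GB), ~= reported 127.4MB
```

Again, the residual difference from the reported 127.4MB is rounding in the displayed GB values.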
The Storage Efficiency, volume, and aggregate information is as follows.
::*> volume efficiency show Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm vol1 Enabled Idle Idle for 00:44:02 auto ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 12.19GB 16GB 15.20GB 3.01GB 19% 0B 0% 0B 3.01GB 20% - 3.01GB 0B 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.01GB 0% Footprint in Performance Tier 3.04GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 92.66MB 0% Delayed Frees 33.02MB 0% File Operation Metadata 4KB 0% Total Footprint 3.14GB 0% Footprint Data Reduction 3.01GB 0% Auto Adaptive Compression 3.01GB 0% Effective Total Footprint 127.5MB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3GB Total Physical Used: 228.4MB Total Storage Efficiency Ratio: 13.45:1 Total Data Reduction Logical Used Without 
Snapshots: 3GB Total Data Reduction Physical Used Without Snapshots: 228.4MB Total Data Reduction Efficiency Ratio Without Snapshots: 13.45:1 Total Data Reduction Logical Used without snapshots and flexclones: 3GB Total Data Reduction Physical Used without snapshots and flexclones: 228.4MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 13.45:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 3.01GB Total Physical Used in FabricPool Performance Tier: 129.0MB Total FabricPool Performance Tier Storage Efficiency Ratio: 23.92:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.01GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 129.0MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 23.92:1 Logical Space Used for All Volumes: 3GB Physical Space Used for All Volumes: 3GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 6.17GB Physical Space Used by the Aggregate: 228.4MB Space Saved by Aggregate Data Reduction: 5.95GB Aggregate Data Reduction SE Ratio: 27.67:1 Logical Size Used by Snapshot Copies: 324KB Physical Size Used by Snapshot Copies: 132KB Snapshot Volume Data Reduction Ratio: 2.45:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.45:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> aggr show -fields availsize, usedsize, size, 
physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 860.3GB 861.8GB 1.43GB 340.3MB 0% 5.95GB 81% 56.18MB 353.8MB 5.95GB 81% 56.18MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 4.15GB 0% Aggregate Metadata 3.23GB 0% Snapshot Reserve 45.36GB 5% Total Used 46.78GB 5% Total Physical Used 340.3MB 0% Total Provisioned Space 17GB 2% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 3.03GB - Logical Referenced Capacity 948.7MB - Logical Unreferenced Capacity 2.10GB - Space Saved by Storage Efficiency 2.68GB - Total Physical Used 353.8MB - 2 entries were displayed.
After the write-back, Auto Adaptive Compression was 3.01GB, back to the same level as before tiering.
Also, data-compaction-space-saved and Space Saved by Aggregate Data Reduction are 5.95GB, larger than the amount of data written to the volume. Logical Space Used by the Aggregate also increased from 3.28GB to 6.17GB across the write-back. Perhaps writing data back to SSD inflates the aggregate's logical size.
In any case, the most important metric, Total Physical Used, went from 305.3MB before the write-back to 228.4MB after. I expected it to increase, but it actually decreased.
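The Total Storage Efficiency Ratio of 13.45:1 reported after the write-back is consistent with simply dividing logical usage by physical usage. A quick check (my arithmetic, with the values above):

```python
def se_ratio(logical_mb: float, physical_mb: float) -> float:
    """Storage efficiency ratio as logical size over physical size."""
    return logical_mb / physical_mb

# After write-back: logical 3GB (3072MB), Total Physical Used 228.4MB
print(f"{se_ratio(3 * 1024, 228.4):.2f}:1")  # 13.45:1, matching aggr show-efficiency
```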
From the above, we can see that aggregate-layer data reduction savings are maintained even after moving data to capacity pool storage and then writing it back to SSD.
Incidentally, the capacity pool storage values in aggr show-space, such as Logical Used, gradually decreased over time. Rather than being freed immediately, the release seems to be reflected slowly.
::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3GB Total Physical Used: 194.7MB Total Storage Efficiency Ratio: 15.78:1 Total Data Reduction Logical Used Without Snapshots: 3GB Total Data Reduction Physical Used Without Snapshots: 194.7MB Total Data Reduction Efficiency Ratio Without Snapshots: 15.78:1 Total Data Reduction Logical Used without snapshots and flexclones: 3GB Total Data Reduction Physical Used without snapshots and flexclones: 194.7MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 15.78:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 3.01GB Total Physical Used in FabricPool Performance Tier: 188.3MB Total FabricPool Performance Tier Storage Efficiency Ratio: 16.39:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.01GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 188.3MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 16.39:1 Logical Space Used for All Volumes: 3GB Physical Space Used for All Volumes: 3GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 6.14GB Physical Space Used by the Aggregate: 194.7MB Space Saved by Aggregate Data Reduction: 5.95GB Aggregate Data Reduction SE Ratio: 32.29:1 Logical Size Used by Snapshot Copies: 696KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.56:1 Logical Size Used by FlexClone Volumes: 0B Physical 
Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.56:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 860.2GB 861.8GB 1.56GB 479.3MB 0% 5.95GB 79% 56.18MB 29.46MB 5.95GB 79% 56.18MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 4.15GB 0% Aggregate Metadata 3.36GB 0% Snapshot Reserve 45.36GB 5% Total Used 46.92GB 5% Total Physical Used 479.3MB 0% Total Provisioned Space 17GB 2% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 242.1MB - Logical Referenced Capacity 227.9MB - Logical Unreferenced Capacity 14.22MB - Space Saved by Storage Efficiency 212.6MB - Total Physical Used 29.46MB - 2 entries were displayed.
::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3.00GB Total Physical Used: 226.0MB Total Storage Efficiency Ratio: 13.60:1 Total Data Reduction Logical Used Without Snapshots: 3GB Total Data Reduction Physical Used Without Snapshots: 226.0MB Total Data Reduction Efficiency Ratio Without Snapshots: 13.60:1 Total Data Reduction Logical Used without snapshots and flexclones: 3GB Total Data Reduction Physical Used without snapshots and flexclones: 226.0MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 13.60:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 3.01GB Total Physical Used in FabricPool Performance Tier: 262.1MB Total FabricPool Performance Tier Storage Efficiency Ratio: 11.78:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.01GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 262.1MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 11.78:1 Logical Space Used for All Volumes: 3GB Physical Space Used for All Volumes: 3GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 6.17GB Physical Space Used by the Aggregate: 226.0MB Space Saved by Aggregate Data Reduction: 5.95GB Aggregate Data Reduction SE Ratio: 27.95:1 Logical Size Used by Snapshot Copies: 1.09MB Physical Size Used by Snapshot Copies: 412KB Snapshot Volume Data Reduction Ratio: 2.70:1 Logical Size Used by FlexClone Volumes: 0B 
Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.70:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 860.0GB 861.8GB 1.75GB 669.3MB 0% 5.95GB 77% 56.18MB 0B 5.95GB 77% 56.18MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 4.15GB 0% Aggregate Metadata 3.55GB 0% Snapshot Reserve 45.36GB 5% Total Used 47.10GB 5% Total Physical Used 669.3MB 0% Total Provisioned Space 17GB 2% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
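If you want to watch the capacity tier drain to 0B without rerunning `aggr show-space` by hand, a simple polling loop works. This is a hypothetical sketch: `fetch_logical_used_mb` stands in for however you collect the value (for example, over the ONTAP REST API or SSH) and is not a real ONTAP client call.

```python
import time
from typing import Callable

def wait_for_drain(fetch_logical_used_mb: Callable[[], float],
                   interval_s: float = 60.0,
                   sleep=time.sleep) -> list:
    """Poll a metric until it reaches 0, returning the observed samples.

    fetch_logical_used_mb is a user-supplied callable (hypothetical here)
    returning the capacity tier's "Logical Used" in MB.
    """
    samples = []
    while True:
        value = fetch_logical_used_mb()
        samples.append(value)
        if value <= 0:
            return samples
        sleep(interval_s)

# Example with canned values mimicking the gradual release seen above
readings = iter([3102.7, 242.1, 0.0])
print(wait_for_drain(lambda: next(readings), sleep=lambda _: None))
# [3102.7, 242.1, 0.0]
```

Injecting the `sleep` function keeps the loop testable; in real use you would leave the default `time.sleep`.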
Verifying again whether aggregate-layer data reduction really cannot be maintained when the SnapMirror source data is on capacity pool storage
Creating the cluster peering
I started to worry whether it is really true that aggregate-layer data reduction cannot be maintained when the SnapMirror source data is on capacity pool storage, so I will verify it again.
I will prepare another FSxN file system and run SnapMirror to it.
First, set up cluster peering.
Before that, check the IP addresses of the FSxN file systems' intercluster LIFs.
::*> cluster identity show

          Cluster UUID: 63b0c2ff-aaa8-11ee-b370-e151fd8b9252
          Cluster Name: FsxId01f6f549cf0cabbb9
 Cluster Serial Number: 1-80-000011
      Cluster Location:
       Cluster Contact:
              RDB UUID: 63b16178-aaa8-11ee-b370-e151fd8b9252

::*> network interface show -service-policy default-intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
FsxId01f6f549cf0cabbb9
            inter_1      up/up    10.0.8.112/24      FsxId01f6f549cf0cabbb9-01
                                                                   e0e     true
            inter_2      up/up    10.0.8.135/24      FsxId01f6f549cf0cabbb9-02
                                                                   e0e     true
2 entries were displayed.
::> cluster identity show

          Cluster UUID: 0e387346-aabc-11ee-98f1-c1176fdd0775
          Cluster Name: FsxId0eb334892d2718fd1
 Cluster Serial Number: 1-80-000011
      Cluster Location:
       Cluster Contact:

::> network interface show -service-policy default-intercluster
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
FsxId0eb334892d2718fd1
            inter_1      up/up    10.0.8.212/24      FsxId0eb334892d2718fd1-01
                                                                   e0e     true
            inter_2      up/up    10.0.8.48/24       FsxId0eb334892d2718fd1-02
                                                                   e0e     true
2 entries were displayed.
Create the cluster peering from FSxN 2.
::> cluster peer create -peer-addrs 10.0.8.112 10.0.8.135

Notice: Use a generated passphrase or choose a passphrase of 8 or more
        characters. To ensure the authenticity of the peering relationship, use
        a phrase or sequence of characters that would be hard to guess.

Enter the passphrase:
Confirm the passphrase:

Notice: Now use the same passphrase in the "cluster peer create" command in the
        other cluster.
Create the cluster peering from FSxN 1 as well.
::*> cluster peer create -peer-addrs 10.0.8.212 10.0.8.48

Notice: Use a generated passphrase or choose a passphrase of 8 or more
        characters. To ensure the authenticity of the peering relationship, use
        a phrase or sequence of characters that would be hard to guess.

Enter the passphrase:
Confirm the passphrase:

::*> cluster peer show
Peer Cluster Name         Cluster Serial Number Availability   Authentication
------------------------- --------------------- -------------- --------------
FsxId0eb334892d2718fd1
The cluster peering is established.
Creating the SVM peering
Next, set up SVM peering.
Create the SVM peering from FSxN 1.
::*> vserver peer create -vserver svm -peer-vserver svm2 -applications snapmirror -peer-cluster FsxId0eb334892d2718fd1

Info: [Job 45] 'vserver peer create' job queued
Accept the SVM peering on the FSxN 2 side.
::> vserver peer show-all
            Peer        Peering                        Remote
Vserver     Vserver     State        Peer Cluster      Applications   Vserver
----------- ----------- ------------ ----------------- -------------- ---------
svm2        svm         pending      FsxId01f6f549cf0cabbb9
                                                       snapmirror     svm

::> vserver peer accept -vserver svm2 -peer-vserver svm

Info: [Job 45] 'vserver peer accept' job queued

::> vserver peer show-all
            Peer        Peering                        Remote
Vserver     Vserver     State        Peer Cluster      Applications   Vserver
----------- ----------- ------------ ----------------- -------------- ---------
svm2        svm         peered       FsxId01f6f549cf0cabbb9
                                                       snapmirror     svm
SnapMirror Initialize
Initialize the SnapMirror relationship.
::> snapmirror protect -path-list svm:vol1 -destination-vserver svm2 -policy MirrorAllSnapshots -auto-initialize true -support-tiering true -tiering-policy none [Job 46] Job is queued: snapmirror protect for list of source endpoints beginning with "svm:vol1". ::> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Uninitialized Finalizing 99.21MB true 01/04 05:45:37 ::> snapmirror show Progress Source Destination Mirror Relationship Total Last Path Type Path State Status Progress Healthy Updated ----------- ---- ------------ ------- -------------- --------- ------- -------- svm:vol1 XDP svm2:vol1_dst Snapmirrored Idle - true - ::> snapmirror show -instance Source Path: svm:vol1 Destination Path: svm2:vol1_dst Relationship Type: XDP Relationship Group Type: none SnapMirror Schedule: - SnapMirror Policy Type: async-mirror SnapMirror Policy: MirrorAllSnapshots Tries Limit: - Throttle (KB/sec): unlimited Mirror State: Snapmirrored Relationship Status: Idle File Restore File Count: - File Restore File List: - Transfer Snapshot: - Snapshot Progress: - Total Progress: - Percent Complete for Current Status: - Network Compression Ratio: - Snapshot Checkpoint: - Newest Snapshot: snapmirror.5a7fee4d-aabd-11ee-98f1-c1176fdd0775_2157648525.2024-01-04_054533 Newest Snapshot Timestamp: 01/04 05:45:33 Exported Snapshot: snapmirror.5a7fee4d-aabd-11ee-98f1-c1176fdd0775_2157648525.2024-01-04_054533 Exported Snapshot Timestamp: 01/04 05:45:33 Healthy: true Unhealthy Reason: - Destination Volume Node: FsxId0eb334892d2718fd1-01 Relationship ID: 7a0ae921-aac4-11ee-98f1-c1176fdd0775 Current Operation ID: - Transfer Type: - Transfer Error: - Current Throttle: - Current Transfer Priority: - Last Transfer Type: update Last Transfer Error: - Last Transfer Size: 0B Last Transfer Network Compression Ratio: 1:1 Last 
Transfer Duration: 0:0:0 Last Transfer From: svm:vol1 Last Transfer End Timestamp: 01/04 05:45:41 Progress Last Updated: - Relationship Capability: 8.2 and above Lag Time: 0:0:38 Identity Preserve Vserver DR: - Volume MSIDs Preserved: - Is Auto Expand Enabled: - Number of Successful Updates: 1 Number of Failed Updates: 0 Number of Successful Resyncs: 0 Number of Failed Resyncs: 0 Number of Successful Breaks: 0 Number of Failed Breaks: 0 Total Transfer Bytes: 115402891 Total Transfer Time in Seconds: 8 FabricLink Source Role: - FabricLink Source Bucket: - FabricLink Peer Role: - FabricLink Peer Bucket: - FabricLink Topology: - FabricLink Pull Byte Count: - FabricLink Push Byte Count: - FabricLink Pending Work Count: - FabricLink Status: -
The transfer completed in no time. At 99.21MB the status shows Finalizing.
Finalizing is the post-transfer processing phase of SnapMirror.
Only for relationships with "Relationship Capability" of "8.2 and above".
Relationship Status: Status of the SnapMirror relationship. Can be one of the following:
- Idle: No transfer operation is in progress and future transfers are not disabled.
- Queued: A transfer operation has been accepted and queued in the system, and future transfers are not disabled.
- Transferring: A transfer operation is in progress and future transfers are not disabled.
- Preparing: Pre-transfer phase for Vault incremental transfers. For Vault relationships only.
- Finalizing: Post-transfer phase for Vault incremental transfers. Network traffic will be low as processing is primarily on the destination volume. For Vault relationships only.
- Aborting: A transfer abort operation that might include the removal of the checkpoint is underway. Future transfers are not disabled. Only for relationships with "Relationship Capability" of "8.2 and above".
- Quiesced: No transfer operation is in progress and future transfers are disabled.
- Quiescing: A transfer operation is in progress and future transfers are disabled.
- Checking: Destination volume is undergoing a diagnostic check, no transfer is in progress, and future transfers are not disabled. Only for relationships with "Relationship Capability" of "Pre 8.2".
- Breaking: The SnapMirror relationship is being broken off and no transfer is in progress.
Excerpt: snapmirror show
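As an aside, the transfer counters in `snapmirror show -instance` let you estimate average throughput: Total Transfer Bytes divided by Total Transfer Time in Seconds. With the values above (my arithmetic):

```python
def snapmirror_throughput_mib_s(total_bytes: int, seconds: int) -> float:
    """Average SnapMirror transfer throughput in MiB/s."""
    return total_bytes / seconds / 1024 ** 2

# Total Transfer Bytes: 115402891, Total Transfer Time in Seconds: 8
print(f"{snapmirror_throughput_mib_s(115402891, 8):.1f} MiB/s")  # 13.8 MiB/s
```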
Check the Storage Efficiency, volume, and aggregate information on the destination FSxN file system.
::*> volume efficiency show Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol Disabled Idle Idle for 00:51:25 auto svm2 vol1_dst Disabled Idle Idle for 00:00:00 - 2 entries were displayed. ::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------ --------- --------------- ------ ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol1_dst 3.80GB 596.3MB 3.80GB 3.61GB 3.03GB 83% 0B 0% 3GB 3.03GB 84% - 3.03GB 0B 0% ::*> volume show-footprint -volume vol1* Vserver : svm2 Volume : vol1_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.03GB 0% Footprint in Performance Tier 3.05GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 30.20MB 0% Delayed Frees 28.50MB 0% File Operation Metadata 4KB 0% Total Footprint 3.08GB 0% Effective Total Footprint 3.08GB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0eb334892d2718fd1-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 6.05GB Total Physical Used: 246.2MB Total Storage Efficiency Ratio: 25.17:1 Total Data Reduction 
Logical Used Without Snapshots: 3.02GB Total Data Reduction Physical Used Without Snapshots: 246.1MB Total Data Reduction Efficiency Ratio Without Snapshots: 12.58:1 Total Data Reduction Logical Used without snapshots and flexclones: 3.02GB Total Data Reduction Physical Used without snapshots and flexclones: 246.1MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 12.58:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 6.05GB Total Physical Used in FabricPool Performance Tier: 267.0MB Total FabricPool Performance Tier Storage Efficiency Ratio: 23.22:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.03GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 267.0MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 11.61:1 Logical Space Used for All Volumes: 3.02GB Physical Space Used for All Volumes: 3.02GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 3.16GB Physical Space Used by the Aggregate: 246.2MB Space Saved by Aggregate Data Reduction: 2.92GB Aggregate Data Reduction SE Ratio: 13.17:1 Logical Size Used by Snapshot Copies: 3.03GB Physical Size Used by Snapshot Copies: 308KB Snapshot Volume Data Reduction Ratio: 10304.71:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 10304.71:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 2 ::*> aggr 
show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 860.4GB 861.8GB 1.32GB 281.9MB 0% 2.92GB 69% 42.29MB 0B 2.92GB 69% 42.29MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 4.11GB 0% Aggregate Metadata 138.5MB 0% Snapshot Reserve 45.36GB 5% Total Used 46.67GB 5% Total Physical Used 281.9MB 0% Total Provisioned Space 5.80GB 1% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
Since Space Saved by Aggregate Data Reduction is 2.92GB, the aggregate-layer data reduction savings are preserved.
Tiering Policy All
Change the tiering policy of the SnapMirror source volume to All.
::*> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   none

::*> volume modify -vserver svm -volume vol1 -tiering-policy all

Error: command failed: Unable to set volume attribute "tiering-policy" for volume "vol1" on Vserver "svm". Reason: Invalid tiering policy vol1 .

::*> volume modify -vserver svm -volume vol1 -tiering-policy auto

Error: command failed: Unable to set volume attribute "tiering-policy" for volume "vol1" on Vserver "svm". Reason: Invalid tiering policy vol1 .

::*> volume modify -vserver svm -volume vol1 -tiering-policy snapshot-only
Volume modify successful on volume vol1 of Vserver svm.

::*> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   snapshot-only

::*> volume modify -vserver svm -volume vol1 -tiering-policy all
Volume modify successful on volume vol1 of Vserver svm.

::*> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   all

::*> volume show -volume vol1 -fields cloud-retrieval-policy
vserver volume cloud-retrieval-policy
------- ------ ----------------------
svm     vol1   default

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------           ----------     -----
      Volume Data Footprint                          3.01GB        0%
           Footprint in Performance Tier            49.90MB        2%
           Footprint in FSxFabricpoolObjectStore        3GB       98%
      Volume Guarantee                                   0B        0%
      Flexible Volume Metadata                      92.66MB        0%
      Delayed Frees                                 35.65MB        0%
      File Operation Metadata                           4KB        0%

      Total Footprint                                3.14GB        0%
      Footprint Data Reduction                      49.34MB        0%
           Auto Adaptive Compression                49.34MB        0%
      Footprint Data Reduction in capacity tier      2.94GB         -
      Effective Total Footprint                     154.7MB        0%
Changing it to Snapshot Only first allowed the change to All to succeed. I suspect the problem was trying to change the tiering policy to All while the cloud retrieval policy was still promote.
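If you hit the same error, the sketch below shows the order of operations I would try first (this exact sequence is an assumption based on the behavior above; what I actually confirmed is that going through Snapshot Only works):

```
::*> volume show -volume vol1 -fields tiering-policy, cloud-retrieval-policy
::*> volume modify -vserver svm -volume vol1 -cloud-retrieval-policy default
::*> volume modify -vserver svm -volume vol1 -tiering-policy all
```

That is, check both policies first, reset the cloud retrieval policy to default, and only then switch the tiering policy.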
Check the Storage Efficiency, volume, and aggregate information on the source FSxN file system.
::*> volume efficiency show Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm vol1 Enabled Idle Idle for 03:21:46 auto ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 12.19GB 16GB 15.20GB 3.01GB 19% 0B 0% 0B 3.01GB 20% - 3.01GB - - ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.03GB 0% Footprint in Performance Tier 62.17MB 2% Footprint in FSxFabricpoolObjectStore 3GB 98% Volume Guarantee 0B 0% Flexible Volume Metadata 92.66MB 0% Delayed Frees 35.78MB 0% File Operation Metadata 4KB 0% Total Footprint 3.15GB 0% Footprint Data Reduction 61.48MB 0% Auto Adaptive Compression 61.48MB 0% Footprint Data Reduction in capacity tier 2.94GB - Effective Total Footprint 154.8MB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 6.01GB Total Physical Used: 773.9MB Total Storage Efficiency Ratio: 
7.96:1 Total Data Reduction Logical Used Without Snapshots: 3GB Total Data Reduction Physical Used Without Snapshots: 772.5MB Total Data Reduction Efficiency Ratio Without Snapshots: 3.98:1 Total Data Reduction Logical Used without snapshots and flexclones: 3GB Total Data Reduction Physical Used without snapshots and flexclones: 772.5MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 3.98:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 124.0MB Total Physical Used in FabricPool Performance Tier: 751.1MB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 61.65MB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 751.1MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 3GB Physical Space Used for All Volumes: 3GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 6.70GB Physical Space Used by the Aggregate: 773.9MB Space Saved by Aggregate Data Reduction: 5.95GB Aggregate Data Reduction SE Ratio: 8.87:1 Logical Size Used by Snapshot Copies: 3.01GB Physical Size Used by Snapshot Copies: 12.66MB Snapshot Volume Data Reduction Ratio: 243.91:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 243.91:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 
::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 860.5GB 861.8GB 1.24GB 790.4MB 0% 5.95GB 83% 56.18MB 42MB 5.95GB 83% 56.18MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 1.16GB 0% Aggregate Metadata 6.02GB 1% Snapshot Reserve 45.36GB 5% Total Used 46.59GB 5% Total Physical Used 790.4MB 0% Total Provisioned Space 17GB 2% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 3.03GB - Logical Referenced Capacity 3.01GB - Logical Unreferenced Capacity 14.98MB - Space Saved by Storage Efficiency 2.98GB - Total Physical Used 42MB - 2 entries were displayed.
SnapMirror Initialize to a new destination volume
Run SnapMirror Initialize to a new destination volume.
::*> snapmirror protect -path-list svm:vol1 -destination-vserver svm2 -policy MirrorAllSnapshots -auto-initialize true -support-tiering true -tiering-policy none -destination-volume-suffix _dst2
[Job 49] Job is queued: snapmirror protect for list of source endpoints beginning with "svm:vol1".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   0B        true    01/04 05:57:34
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   91.88MB   true    01/04 05:57:42
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   240.5MB   true    01/04 05:57:57
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   377.2MB   true    01/04 05:58:13
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   662.5MB   true    01/04 05:58:44
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   947.7MB   true    01/04 05:59:15
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   1.19GB    true    01/04 05:59:46
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   1.67GB    true    01/04 06:00:48
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Uninitialized
                                      Transferring   2.34GB    true    01/04 06:02:21
2 entries were displayed.

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.
The transfer took about six minutes to complete. Total Progress also reached 2.34GB, which is considerably more data than the earlier SnapMirror Initialize transferred.
Check the Storage Efficiency, volume, and aggregate information on the destination FSxN file system.
::*> volume efficiency show Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol Disabled Idle Idle for 01:10:04 auto svm2 vol1_dst Disabled Idle Idle for 00:00:00 - svm2 vol1_dst2 Disabled Idle Idle for 00:00:00 - 3 entries were displayed. ::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------ --------- --------------- ------ ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol1_dst 3.80GB 596.3MB 3.80GB 3.61GB 3.03GB 83% 0B 0% 3GB 3.03GB 84% - 3.03GB 0B 0% svm2 vol1_dst2 3.82GB 613.8MB 3.82GB 3.63GB 3.03GB 83% 0B 0% 3GB 3.03GB 83% - 3.03GB 0B 0% 2 entries were displayed. 
::*> volume show-footprint -volume vol1* Vserver : svm2 Volume : vol1_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.03GB 0% Footprint in Performance Tier 3.05GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 30.20MB 0% Delayed Frees 28.50MB 0% File Operation Metadata 4KB 0% Total Footprint 3.08GB 0% Effective Total Footprint 3.08GB 0% Vserver : svm2 Volume : vol1_dst2 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.03GB 0% Footprint in Performance Tier 3.06GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 30.29MB 0% Delayed Frees 32.85MB 0% File Operation Metadata 4KB 0% Total Footprint 3.09GB 0% Effective Total Footprint 3.09GB 0% 2 entries were displayed. ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0eb334892d2718fd1-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 15.13GB Total Physical Used: 3.22GB Total Storage Efficiency Ratio: 4.70:1 Total Data Reduction Logical Used Without Snapshots: 6.05GB Total Data Reduction Physical Used Without Snapshots: 3.22GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.88:1 Total Data Reduction Logical Used without snapshots and flexclones: 6.05GB Total Data Reduction Physical Used without snapshots and flexclones: 3.22GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.88:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 15.13GB Total Physical Used in FabricPool Performance Tier: 3.26GB Total FabricPool Performance Tier Storage Efficiency Ratio: 4.65:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.05GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.25GB 
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.86:1 Logical Space Used for All Volumes: 6.05GB Physical Space Used for All Volumes: 6.05GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 6.15GB Physical Space Used by the Aggregate: 3.22GB Space Saved by Aggregate Data Reduction: 2.92GB Aggregate Data Reduction SE Ratio: 1.91:1 Logical Size Used by Snapshot Copies: 9.08GB Physical Size Used by Snapshot Copies: 1.13MB Snapshot Volume Data Reduction Ratio: 8208.30:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 8208.30:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 3 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- 
--------------------------------------- aggr1 857.3GB 861.8GB 4.46GB 3.39GB 0% 2.92GB 40% 42.29MB 0B 2.92GB 40% 42.29MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 7.20GB 1% Aggregate Metadata 190.0MB 0% Snapshot Reserve 45.36GB 5% Total Used 49.81GB 5% Total Physical Used 3.39GB 0% Total Provisioned Space 9.62GB 1% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
Both data-compaction-space-saved and Space Saved by Aggregate Data Reduction remain at 2.92GB. Meanwhile, Total Physical Used increased by roughly 3GB, from 246.2MB to 3.22GB. This again confirms that when data on capacity pool storage is transferred with SnapMirror, the aggregate-layer data reduction savings are lost.
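The "roughly 3GB" above is quick arithmetic on the two Total Physical Used figures reported by aggr show-efficiency (the MB-to-GB conversion assumes binary units):

```python
# Total Physical Used of the destination aggregate, from the outputs above
before_gb = 246.2 / 1024  # 246.2MB before the second SnapMirror Initialize
after_gb = 3.22           # 3.22GB after it

delta_gb = after_gb - before_gb
print(f"physical growth: {delta_gb:.2f} GB")  # → physical growth: 2.98 GB
```

That growth is almost exactly the 3GB logical size of vol1: the second copy landed on the destination SSD with effectively no aggregate-layer savings applied.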
SnapMirror Update with the source volume at Tiering Policy All
Next, we check the behavior when the source volume's tiering policy was None at SnapMirror Initialize and is All at SnapMirror Update. I was curious whether the data blocks tiered to capacity pool storage would be detected as a differential.
::*> snapmirror update -destination-path svm2:vol1_dst
Operation is queued: snapmirror update of destination "svm2:vol1_dst".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst
                              Snapmirrored
                                      Idle           -         true    -
                 svm2:vol1_dst2
                              Snapmirrored
                                      Idle           -         true    -
2 entries were displayed.
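The snapmirror show output above does not say how much this update actually transferred. One way to check after the fact is to pull the transfer-size fields out of snapmirror show (I did not capture this in the run above, so the output is not shown here):

```
::*> snapmirror show -destination-path svm2:vol1_dst -fields last-transfer-size, last-transfer-duration
```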
Check the Storage Efficiency, volume, aggregate, and Snapshot information on the destination FSxN file system.
::*> volume efficiency show Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol Disabled Idle Idle for 01:20:08 auto svm2 vol1_dst Disabled Idle Idle for 00:00:00 - svm2 vol1_dst2 Disabled Idle Idle for 00:00:00 - 3 entries were displayed. ::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------ --------- --------------- ------ ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol1_dst 3.80GB 596.4MB 3.80GB 3.61GB 3.03GB 83% 0B 0% 3GB 3.03GB 84% - 3.03GB 0B 0% svm2 vol1_dst2 3.82GB 613.8MB 3.82GB 3.63GB 3.03GB 83% 0B 0% 3GB 3.03GB 83% - 3.03GB 0B 0% 2 entries were displayed. 
::*> volume show-footprint -volume vol1* Vserver : svm2 Volume : vol1_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.03GB 0% Footprint in Performance Tier 3.06GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 30.20MB 0% Delayed Frees 29.65MB 0% File Operation Metadata 4KB 0% Total Footprint 3.08GB 0% Effective Total Footprint 3.08GB 0% Vserver : svm2 Volume : vol1_dst2 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.03GB 0% Footprint in Performance Tier 3.06GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 30.29MB 0% Delayed Frees 32.85MB 0% File Operation Metadata 4KB 0% Total Footprint 3.09GB 0% Effective Total Footprint 3.09GB 0% 2 entries were displayed. ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0eb334892d2718fd1-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 21.18GB Total Physical Used: 3.23GB Total Storage Efficiency Ratio: 6.55:1 Total Data Reduction Logical Used Without Snapshots: 6.05GB Total Data Reduction Physical Used Without Snapshots: 3.23GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.87:1 Total Data Reduction Logical Used without snapshots and flexclones: 6.05GB Total Data Reduction Physical Used without snapshots and flexclones: 3.23GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.87:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 21.19GB Total Physical Used in FabricPool Performance Tier: 3.27GB Total FabricPool Performance Tier Storage Efficiency Ratio: 6.48:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.05GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.27GB 
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.85:1 Logical Space Used for All Volumes: 6.05GB Physical Space Used for All Volumes: 6.05GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 6.16GB Physical Space Used by the Aggregate: 3.23GB Space Saved by Aggregate Data Reduction: 2.92GB Aggregate Data Reduction SE Ratio: 1.90:1 Logical Size Used by Snapshot Copies: 15.13GB Physical Size Used by Snapshot Copies: 1.74MB Snapshot Volume Data Reduction Ratio: 8915.02:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 8915.02:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 3 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- 
--------------------------------------- aggr1 857.3GB 861.8GB 4.48GB 3.42GB 0% 2.92GB 39% 42.29MB 0B 2.92GB 39% 42.29MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 7.20GB 1% Aggregate Metadata 215.2MB 0% Snapshot Reserve 45.36GB 5% Total Used 49.84GB 5% Total Physical Used 3.42GB 0% Total Provisioned Space 9.62GB 1% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed. ::*> snapshot show -volume vol1* ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm2 vol1_dst snapmirror.5a7fee4d-aabd-11ee-98f1-c1176fdd0775_2157648525.2024-01-04_054533 372KB 0% 0% snapmirror.5a7fee4d-aabd-11ee-98f1-c1176fdd0775_2157648526.2024-01-04_055734 264KB 0% 0% snapmirror.5a7fee4d-aabd-11ee-98f1-c1176fdd0775_2157648525.2024-01-04_061512 136KB 0% 0% vol1_dst2 snapmirror.5a7fee4d-aabd-11ee-98f1-c1176fdd0775_2157648525.2024-01-04_054533 576KB 0% 0% snapmirror.5a7fee4d-aabd-11ee-98f1-c1176fdd0775_2157648526.2024-01-04_055734 136KB 0% 0% 5 entries were displayed.
Nothing changed in particular. Even if the tiering policy is set to All while there is no differential data, already-transferred data blocks appear to be unaffected.
SnapMirror Update after writing the data back to SSD
Now we check the behavior when the source volume's tiering policy was All at SnapMirror Initialize and the data has been written back to SSD before the SnapMirror Update. First, write the source volume's data blocks back to SSD.
::*> volume modify -vserver svm -volume vol1 -tiering-policy none -cloud-retrieval-policy promote

Warning: The "promote" cloud retrieve policy retrieves all of the cloud data for the specified volume. If the tiering policy is "snapshot-only" then only AFS data is retrieved. If the tiering policy is "none" then all data is retrieved. Volume "vol1" in Vserver "svm" is on a FabricPool, and there are approximately 3221225472 bytes tiered to the cloud that will be retrieved. Cloud retrieval may take a significant amount of time, and may degrade performance during that time. The cloud retrieve operation may also result in data charges by your object store provider.
Do you want to continue? {y|n}: y

Volume modify successful on volume vol1 of Vserver svm.

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                         Used       Used%
      --------------------------------           ----------     -----
      Volume Data Footprint                          3.03GB        0%
           Footprint in Performance Tier             3.08GB      100%
           Footprint in FSxFabricpoolObjectStore         0B        0%
      Volume Guarantee                                   0B        0%
      Flexible Volume Metadata                      92.66MB        0%
      Delayed Frees                                 50.82MB        0%
      File Operation Metadata                           4KB        0%

      Total Footprint                                3.17GB        0%
      Footprint Data Reduction                       3.04GB        0%
           Auto Adaptive Compression                 3.04GB        0%
      Effective Total Footprint                     127.9MB        0%
Check the Storage Efficiency, volume, and aggregate information on the source FSxN file system.
::*> volume efficiency show Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm vol1 Enabled Idle Idle for 03:46:20 auto ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 16GB 12.19GB 16GB 15.20GB 3.01GB 19% 0B 0% 0B 3.01GB 20% - 3.01GB 0B 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.03GB 0% Footprint in Performance Tier 3.08GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 92.66MB 0% Delayed Frees 50.82MB 0% File Operation Metadata 4KB 0% Total Footprint 3.17GB 0% Footprint Data Reduction 3.04GB 0% Auto Adaptive Compression 3.04GB 0% Effective Total Footprint 127.9MB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId01f6f549cf0cabbb9-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 9.03GB Total Physical Used: 186.5MB Total Storage Efficiency Ratio: 49.57:1 Total Data Reduction Logical Used Without 
Snapshots: 3GB Total Data Reduction Physical Used Without Snapshots: 186.3MB Total Data Reduction Efficiency Ratio Without Snapshots: 16.49:1 Total Data Reduction Logical Used without snapshots and flexclones: 3GB Total Data Reduction Physical Used without snapshots and flexclones: 186.3MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 16.49:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 9.04GB Total Physical Used in FabricPool Performance Tier: 201.0MB Total FabricPool Performance Tier Storage Efficiency Ratio: 46.08:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.01GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 200.7MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 15.38:1 Logical Space Used for All Volumes: 3GB Physical Space Used for All Volumes: 3GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 9.10GB Physical Space Used by the Aggregate: 186.5MB Space Saved by Aggregate Data Reduction: 8.92GB Aggregate Data Reduction SE Ratio: 49.95:1 Logical Size Used by Snapshot Copies: 6.03GB Physical Size Used by Snapshot Copies: 12.95MB Snapshot Volume Data Reduction Ratio: 476.92:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 476.92:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> aggr show -fields availsize, usedsize, 
size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 859.8GB 861.8GB 1.99GB 924.5MB 0% 8.92GB 82% 87.97MB 11.59MB 8.92GB 82% 87.97MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 4.18GB 0% Aggregate Metadata 6.74GB 1% Snapshot Reserve 45.36GB 5% Total Used 47.35GB 5% Total Physical Used 924.5MB 0% Total Provisioned Space 17GB 2% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 855.5MB - Logical Referenced Capacity 832.0MB - Logical Unreferenced Capacity 23.48MB - Space Saved by Storage Efficiency 843.9MB - Total Physical Used 11.59MB - 2 entries were displayed.
The aggregate-layer data reduction savings are maintained even after writing the data back to the SSD tier.
Let's run a SnapMirror update.
::*> snapmirror update -destination-path svm2:vol1_dst2
Operation is queued: snapmirror update of destination "svm2:vol1_dst2".

::*> snapmirror show
                                                                       Progress
Source            Destination Mirror  Relationship   Total             Last
Path        Type  Path        State   Status         Progress  Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol1    XDP  svm2:vol1_dst  Snapmirrored  Idle   -         true    -
                 svm2:vol1_dst2 Snapmirrored  Idle   -         true    -
2 entries were displayed.
It completed almost instantly.
Check the Storage Efficiency, volume, and aggregate information on the destination FSxN file system.
::*> volume efficiency show Vserver Volume State Status Progress Policy ---------- ---------------- --------- ----------- ------------------ ---------- svm2 vol Disabled Idle Idle for 01:25:17 auto svm2 vol1_dst Disabled Idle Idle for 00:00:00 - svm2 vol1_dst2 Disabled Idle Idle for 00:00:00 - 3 entries were displayed. ::*> volume show -volume vol1* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- -------- ------ --------- --------------- ------ ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm2 vol1_dst 3.80GB 596.4MB 3.80GB 3.61GB 3.03GB 83% 0B 0% 3GB 3.03GB 84% - 3.03GB 0B 0% svm2 vol1_dst2 3.82GB 613.9MB 3.82GB 3.63GB 3.03GB 83% 0B 0% 3GB 3.03GB 83% - 3.03GB 0B 0% 2 entries were displayed. 
::*> volume show-footprint -volume vol1* Vserver : svm2 Volume : vol1_dst Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.03GB 0% Footprint in Performance Tier 3.06GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 30.20MB 0% Delayed Frees 29.65MB 0% File Operation Metadata 4KB 0% Total Footprint 3.08GB 0% Effective Total Footprint 3.08GB 0% Vserver : svm2 Volume : vol1_dst2 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 3.03GB 0% Footprint in Performance Tier 3.06GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 30.29MB 0% Delayed Frees 34.80MB 0% File Operation Metadata 4KB 0% Total Footprint 3.09GB 0% Effective Total Footprint 3.09GB 0% 2 entries were displayed. ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0eb334892d2718fd1-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 24.21GB Total Physical Used: 3.24GB Total Storage Efficiency Ratio: 7.48:1 Total Data Reduction Logical Used Without Snapshots: 6.05GB Total Data Reduction Physical Used Without Snapshots: 3.24GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.87:1 Total Data Reduction Logical Used without snapshots and flexclones: 6.05GB Total Data Reduction Physical Used without snapshots and flexclones: 3.24GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.87:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 24.21GB Total Physical Used in FabricPool Performance Tier: 3.27GB Total FabricPool Performance Tier Storage Efficiency Ratio: 7.39:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.05GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 3.27GB 
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.85:1 Logical Space Used for All Volumes: 6.05GB Physical Space Used for All Volumes: 6.05GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 6.16GB Physical Space Used by the Aggregate: 3.24GB Space Saved by Aggregate Data Reduction: 2.92GB Aggregate Data Reduction SE Ratio: 1.90:1 Logical Size Used by Snapshot Copies: 18.16GB Physical Size Used by Snapshot Copies: 1.87MB Snapshot Volume Data Reduction Ratio: 9959.25:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 9959.25:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 3 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- 
--------------------------------------- aggr1 857.3GB 861.8GB 4.50GB 3.43GB 0% 2.92GB 39% 42.29MB 0B 2.92GB 39% 42.29MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 7.20GB 1% Aggregate Metadata 230.1MB 0% Snapshot Reserve 45.36GB 5% Total Used 49.86GB 5% Total Physical Used 3.43GB 0% Total Provisioned Space 9.62GB 1% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
Nothing in particular changed.
Even when the data is written back to the SSD tier with no pending differences, the already-transferred data blocks appear to be unaffected.
Update 2024/1/26: Re-checking whether aggregate-layer data reduction savings survive a write-back from capacity pool storage to SSD
Writing the test file
In other testing, I found that compaction still kicks in for short, simple strings such as 0 or ABCDE. To keep inline compaction from taking effect this time, I prepare the test file by Base64-encoding binary data generated from /dev/urandom into a 1KB string and repeating that string until the file reaches the desired number of bytes.
I prepared a fresh FSxN file system. The Storage Efficiency, volume, and aggregate information before creating the file is as follows.
::> set diag Warning: These diagnostic commands are for use by NetApp personnel only. Do you want to continue? {y|n}: y ::*> volume efficiency show -volume vol1 -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression vserver volume state policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume ------- ------ -------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- ----------------------------------------- svm vol1 Disabled auto false false efficient false false true false false ::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol1 Disabled Idle for 01:33:03 Thu Jan 25 07:43:59 2024 Thu Jan 25 07:43:59 2024 0B 0% 0B 364KB ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent,size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used 
logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ----- --------- --------------- ------- ----- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 128TB 860.6GB 128TB 121.6TB 364KB 0% 0B 0% 0B 364KB 0% 364KB 0% - 364KB 0B 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 364KB 0% Footprint in Performance Tier 3.30MB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 107.5MB 0% Delayed Frees 2.95MB 0% File Operation Metadata 4KB 0% Total Footprint 110.8MB 0% Effective Total Footprint 110.8MB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId08a5d78da0813ad18-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 920KB Total Physical Used: 628KB Total Storage Efficiency Ratio: 1.46:1 Total Data Reduction Logical Used Without Snapshots: 208KB Total Data Reduction Physical Used Without Snapshots: 356KB Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones: 208KB Total Data Reduction Physical Used without snapshots and flexclones: 356KB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 1.44MB Total Physical Used in FabricPool Performance Tier: 19.37MB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 
764KB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 19.10MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 208KB Physical Space Used for All Volumes: 208KB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 628KB Physical Space Used by the Aggregate: 628KB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 712KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.62:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.62:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 1 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- 
---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 860.6GB 861.8GB 1.14GB 228.0MB 0% 0B 0% 0B 0B 0B 0% 0B - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 1.12GB 0% Aggregate Metadata 18.23MB 0% Snapshot Reserve 45.36GB 5% Total Used 46.49GB 5% Total Physical Used 228.0MB 0% Total Provisioned Space 128.0TB 14449% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
Write a 16GiB file.
$ sudo mount -t nfs svm-06b1b2536d4947461.fs-08a5d78da0813ad18.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1

$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-06b1b2536d4947461.fs-08a5d78da0813ad18.fsx.us-east-1.amazonaws.com:/vol1  nfs4  122T  121T  861G 100% /mnt/fsxn/vol1

$ yes \
    $(base64 /dev/urandom -w 0 \
      | head -c 1K ) \
  | tr -d '\n' \
  | sudo dd of=/mnt/fsxn/vol1/1KB_random_pattern_text_block_16GiB bs=4M count=4096 iflag=fullblock
4096+0 records in
4096+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 115.114 s, 149 MB/s

$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-06b1b2536d4947461.fs-08a5d78da0813ad18.fsx.us-east-1.amazonaws.com:/vol1  nfs4  122T  121T  845G 100% /mnt/fsxn/vol1
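As a quick local sanity check of this generator (the path and sizes below are illustrative, not the ones used in the actual test), writing a few MiB to a local file confirms the 1KB block really does repeat back-to-back with no newlines:

```shell
# Build a 1 KiB Base64 block from /dev/urandom and repeat it into a 4 MiB file
block=$(base64 /dev/urandom -w 0 | head -c 1024)
yes "$block" | tr -d '\n' \
  | dd of=/tmp/sample_pattern bs=1M count=4 iflag=fullblock 2>/dev/null

# The first and second 1 KiB chunks should be identical
first=$(dd if=/tmp/sample_pattern bs=1024 count=1 skip=0 2>/dev/null)
second=$(dd if=/tmp/sample_pattern bs=1024 count=1 skip=1 2>/dev/null)
[ "$first" = "$second" ] && echo "pattern repeats"
```

Because the 1 KiB period never lines up with the 4 KiB zero/pattern detection, inline compaction has nothing simple to collapse, while the repetition still leaves the data compressible at the aggregate layer.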
The Storage Efficiency, volume, and aggregate information after creating the file is as follows.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol1 Disabled Idle for 01:38:55 Thu Jan 25 07:43:59 2024 Thu Jan 25 07:43:59 2024 0B 0% 0B 16.07GB ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent,size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 128TB 844.4GB 128TB 121.6TB 16.07GB 0% 0B 0% 0B 16.07GB 0% 16.07GB 0% - 16.07GB 0B 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 16.07GB 2% Footprint in Performance Tier 16.08GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Delayed Frees 11.99MB 0% File Operation 
Metadata 4KB 0% Total Footprint 16.29GB 2% Effective Total Footprint 16.29GB 2% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId08a5d78da0813ad18-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 16.06GB Total Physical Used: 16.06GB Total Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used Without Snapshots: 16.06GB Total Data Reduction Physical Used Without Snapshots: 16.06GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones: 16.06GB Total Data Reduction Physical Used without snapshots and flexclones: 16.06GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 16.07GB Total Physical Used in FabricPool Performance Tier: 16.13GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 16.07GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 16.13GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 16.06GB Physical Space Used for All Volumes: 16.06GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 16.06GB Physical Space Used by the Aggregate: 16.06GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 712KB Physical Size Used by Snapshot Copies: 272KB 
Snapshot Volume Data Reduction Ratio: 2.62:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.62:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 1 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 844.4GB 861.8GB 17.33GB 16.38GB 2% 0B 0% 0B 0B 0B 0% 0B - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 17.30GB 2% Aggregate Metadata 27.91MB 0% Snapshot Reserve 45.36GB 5% Total Used 62.68GB 7% Total Physical Used 16.38GB 2% Total Provisioned Space 128.0TB 14449% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
Usage appears to have increased by the full 16GiB that was written. Space Saved by Aggregate Data Reduction and data-compaction-space-saved are both 0B.
Running Inactive data compression
Run Inactive data compression.
::*> volume efficiency on -vserver svm -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is enabled.

::*> volume efficiency modify -vserver svm -volume vol1 -compression true

::*> volume efficiency inactive-data-compression modify -vserver svm -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -instance
                 Volume: vol1
                Vserver: svm
             Is Enabled: true
              Scan Mode: default
               Progress: RUNNING
                 Status: SUCCESS
  Compression Algorithm: lzopro
         Failure Reason: -
           Total Blocks: -
 Total blocks Processed: -
             Percentage: 0%
   Phase1 L1s Processed: 1637
     Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
    Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 413368
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 409728
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
         Tuning Enabled: true
              Threshold: 14
  Threshold Upper Limit: 21
  Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
                 Volume: vol1
                Vserver: svm
             Is Enabled: true
              Scan Mode: default
               Progress: RUNNING
                 Status: SUCCESS
  Compression Algorithm: lzopro
         Failure Reason: -
           Total Blocks: -
 Total blocks Processed: -
             Percentage: 0%
   Phase1 L1s Processed: 10780
     Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
    Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 2753560
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 2743232
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
         Tuning Enabled: true
              Threshold: 14
  Threshold Upper Limit: 21
  Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance
                 Volume: vol1
                Vserver: svm
             Is Enabled: true
              Scan Mode: -
               Progress: IDLE
                 Status: SUCCESS
  Compression Algorithm: lzopro
         Failure Reason: -
           Total Blocks: -
 Total blocks Processed: -
             Percentage: -
   Phase1 L1s Processed: -
     Phase1 Lns Skipped: -
    Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 4209712
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 4195736
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 47
Time since Last Inactive Data Compression Scan ended(sec): 27
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 27
Average time for Cold Data Compression(sec): 20
         Tuning Enabled: true
              Threshold: 14
  Threshold Upper Limit: 21
  Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%
It appears to have compressed 4,195,736 blocks.
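As a rough cross-check (assuming the 4 KiB WAFL block size), the compressed block count lines up with the 16GiB test file:

```shell
# 4,195,736 compressed blocks x 4 KiB per block (assumed WAFL block size)
echo $((4195736 * 4096))                                      # → 17185734656 bytes
awk 'BEGIN { printf "%.2f GiB\n", 4195736 * 4096 / 1024^3 }'  # → 16.01 GiB
```

That is almost exactly the 17,179,869,184 bytes written, so essentially the entire file was processed by the scan.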
The Storage Efficiency, volume, and aggregate information 10 minutes after Inactive data compression completed is as follows.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol1 Enabled Idle for 01:52:05 Thu Jan 25 07:43:59 2024 Thu Jan 25 07:43:59 2024 0B 0% 0B 16.07GB ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent,size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 128TB 848.0GB 128TB 121.6TB 16.07GB 0% 0B 0% 0B 16.07GB 0% 16.07GB 0% - 16.07GB 0B 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 16.07GB 2% Footprint in Performance Tier 16.14GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Delayed Frees 68.14MB 0% File Operation 
Metadata 4KB 0% Total Footprint 16.35GB 2% Footprint Data Reduction 15.45GB 2% Auto Adaptive Compression 15.45GB 2% Effective Total Footprint 918.8MB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId08a5d78da0813ad18-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 15.96GB Total Physical Used: 6.69GB Total Storage Efficiency Ratio: 2.38:1 Total Data Reduction Logical Used Without Snapshots: 15.96GB Total Data Reduction Physical Used Without Snapshots: 6.69GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.38:1 Total Data Reduction Logical Used without snapshots and flexclones: 15.96GB Total Data Reduction Physical Used without snapshots and flexclones: 6.69GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.38:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 16.07GB Total Physical Used in FabricPool Performance Tier: 6.87GB Total FabricPool Performance Tier Storage Efficiency Ratio: 2.34:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 16.07GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 6.87GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.34:1 Logical Space Used for All Volumes: 15.96GB Physical Space Used for All Volumes: 15.96GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 22.00GB Physical Space Used by the Aggregate: 6.69GB Space Saved by Aggregate Data Reduction: 15.31GB Aggregate Data Reduction SE Ratio: 3.29:1 Logical Size Used by 
Snapshot Copies: 712KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.62:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.62:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 848.0GB 861.8GB 13.76GB 12.63GB 1% 15.31GB 53% 699.2MB 0B 15.31GB 53% 699.2MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 17.36GB 2% Aggregate Metadata 11.71GB 1% Snapshot Reserve 45.36GB 5% Total Used 59.11GB 7% Total Physical Used 12.63GB 1% Total Provisioned Space 128.0TB 14449% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
Total Physical Used in aggr show-efficiency dropped from 16.06GB to 6.69GB, and physical-used in aggr show dropped from 16.38GB to 12.63GB.
Let's check the CloudWatch metrics.
From the metrics, the All SSD StorageUsed value appears to correspond to Total Physical Used, and the StorageUsed value to usedsize in aggr show.
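These values can also be pulled programmatically with aws cloudwatch get-metric-data. The query below is only a sketch: the file system ID is the one from this environment, and the dimension names (FileSystemId, StorageTier, DataType) are my reading of the FSx for ONTAP metric documentation, so verify them against the current docs before relying on this.

```json
[
  {
    "Id": "ssd_used",
    "MetricStat": {
      "Metric": {
        "Namespace": "AWS/FSx",
        "MetricName": "StorageUsed",
        "Dimensions": [
          { "Name": "FileSystemId", "Value": "fs-08a5d78da0813ad18" },
          { "Name": "StorageTier", "Value": "SSD" },
          { "Name": "DataType", "Value": "All" }
        ]
      },
      "Period": 300,
      "Stat": "Average"
    }
  }
]
```

Pass this file to `aws cloudwatch get-metric-data --metric-data-queries file://queries.json --start-time ... --end-time ...` to chart the SSD-tier physical usage over the course of the test.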
Tiering to capacity pool storage
Tier the data to capacity pool storage.
::*> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   none

::*> volume modify -vserver svm -volume vol1 -tiering-policy all
Volume modify successful on volume vol1 of Vserver svm.

::*> volume show -volume vol1 -fields tiering-policy
vserver volume tiering-policy
------- ------ --------------
svm     vol1   all

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                   Used       Used%
      --------------------------------          ---------- -----
      Volume Data Footprint                     16.07GB    2%
      Footprint in Performance Tier             16.14GB    100%
      Footprint in FSxFabricpoolObjectStore     0B         0%
      Volume Guarantee                          0B         0%
      Flexible Volume Metadata                  214.9MB    0%
      Delayed Frees                             68.27MB    0%
      File Operation Metadata                   4KB        0%

      Total Footprint                           16.35GB    2%
      Footprint Data Reduction                  15.45GB    2%
      Auto Adaptive Compression                 15.45GB    2%
      Effective Total Footprint                 918.8MB    0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                   Used       Used%
      --------------------------------          ---------- -----
      Volume Data Footprint                     16.07GB    2%
      Footprint in Performance Tier             211.9MB    1%
      Footprint in FSxFabricpoolObjectStore     16GB       99%
      Volume Guarantee                          0B         0%
      Flexible Volume Metadata                  214.9MB    0%
      Delayed Frees                             140.1MB    0%
      File Operation Metadata                   4KB        0%

      Total Footprint                           16.42GB    2%
      Footprint Data Reduction                  202.9MB    0%
      Auto Adaptive Compression                 202.9MB    0%
      Footprint Data Reduction in capacity tier 14.88GB    -
      Effective Total Footprint                 1.34GB     0%
16GB was tiered.
The Storage Efficiency, volume, and aggregate information after tiering completed is as follows.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol1 Enabled Idle for 02:00:23 Thu Jan 25 07:43:59 2024 Thu Jan 25 07:43:59 2024 0B 0% 0B 16.07GB ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent,size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 128TB 860.2GB 128TB 121.6TB 16.07GB 0% 0B 0% 0B 16.07GB 0% 16.07GB 0% - 16.07GB - - ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 16.07GB 2% Footprint in Performance Tier 211.9MB 1% Footprint in FSxFabricpoolObjectStore 16GB 99% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Delayed Frees 140.1MB 0% File Operation 
Metadata 4KB 0% Total Footprint 16.42GB 2% Footprint Data Reduction 202.9MB 0% Auto Adaptive Compression 202.9MB 0% Footprint Data Reduction in capacity tier 14.88GB - Effective Total Footprint 1.34GB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId08a5d78da0813ad18-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 15.96GB Total Physical Used: 843.7MB Total Storage Efficiency Ratio: 19.37:1 Total Data Reduction Logical Used Without Snapshots: 15.96GB Total Data Reduction Physical Used Without Snapshots: 843.7MB Total Data Reduction Efficiency Ratio Without Snapshots: 19.37:1 Total Data Reduction Logical Used without snapshots and flexclones: 15.96GB Total Data Reduction Physical Used without snapshots and flexclones: 843.7MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 19.37:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 211.2MB Total Physical Used in FabricPool Performance Tier: 1.83GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 210.5MB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.83GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1 Logical Space Used for All Volumes: 15.96GB Physical Space Used for All Volumes: 15.96GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 9.58GB Physical Space Used by the Aggregate: 843.7MB Space Saved by Aggregate Data Reduction: 8.76GB 
Aggregate Data Reduction SE Ratio: 11.63:1 Logical Size Used by Snapshot Copies: 712KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.62:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.62:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 860.2GB 861.8GB 1.57GB 1.94GB 0% 8.76GB 85% 401.2MB 1.12GB 8.76GB 85% 401.2MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 1.43GB 0% Aggregate Metadata 8.90GB 1% Snapshot Reserve 45.36GB 5% Total Used 46.93GB 5% Total Physical Used 1.94GB 0% Total Provisioned Space 128.0TB 14449% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 16.14GB - Logical Referenced Capacity 16.06GB - Logical Unreferenced Capacity 79.88MB - Space Saved by Storage 
Efficiency 15.02GB - Total Physical Used 1.12GB - 2 entries were displayed.
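The object store section of aggr show-space is self-consistent: the 16.14GB Logical Used minus the 15.02GB Space Saved by Storage Efficiency matches the 1.12GB Total Physical Used, which suggests the aggregate-layer savings are still applied while the data sits in the capacity pool. A quick sanity check with the values from the output above:

```python
# Sanity check against the "aggr show-space" object store section above:
# Total Physical Used = Logical Used - Space Saved by Storage Efficiency.
logical_used_gb = 16.14
space_saved_gb = 15.02
physical_used_gb = 1.12

assert abs((logical_used_gb - space_saved_gb) - physical_used_gb) < 0.01
print(f"capacity tier physical used: {logical_used_gb - space_saved_gb:.2f}GB")
# → capacity tier physical used: 1.12GB
```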
Writing back to SSD
Write the data back to SSD.
::*> volume modify -vserver svm -volume vol1 -tiering-policy none -cloud-retrieval-policy promote Warning: The "promote" cloud retrieve policy retrieves all of the cloud data for the specified volume. If the tiering policy is "snapshot-only" then only AFS data is retrieved. If the tiering policy is "none" then all data is retrieved. Volume "vol1" in Vserver "svm" is on a FabricPool, and there are approximately 17179869184 bytes tiered to the cloud that will be retrieved. Cloud retrieval may take a significant amount of time, and may degrade performance during that time. The cloud retrieve operation may also result in data charges by your object store provider. Do you want to continue? {y|n}: y Volume modify successful on volume vol1 of Vserver svm. ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 16.07GB 2% Footprint in Performance Tier 4.51GB 28% Footprint in FSxFabricpoolObjectStore 11.70GB 72% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Delayed Frees 142.5MB 0% File Operation Metadata 4KB 0% Total Footprint 16.42GB 2% Footprint Data Reduction 4.32GB 0% Auto Adaptive Compression 4.32GB 0% Footprint Data Reduction in capacity tier 10.88GB - Effective Total Footprint 1.22GB 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 16.07GB 2% Footprint in Performance Tier 16.31GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Delayed Frees 239.6MB 0% File Operation Metadata 4KB 0% Total Footprint 16.52GB 2% Footprint Data Reduction 15.61GB 2% Auto Adaptive Compression 15.61GB 2% Effective Total Footprint 926.2MB 0%
The data was written back.
The Storage Efficiency, volume, and aggregate information after the write-back to SSD completed is as follows.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol1 Enabled Idle for 07:33:00 Thu Jan 25 07:43:59 2024 Thu Jan 25 07:43:59 2024 0B 0% 0B 16.07GB ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 128TB 844.0GB 128TB 121.6TB 16.07GB 0% 0B 0% 0B 16.07GB 0% 16.07GB 0% - 16.07GB 0B 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 16.07GB 2% Footprint in Performance Tier 16.31GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Delayed Frees 239.6MB 0% File Operation 
Metadata 4KB 0% Total Footprint 16.52GB 2% Footprint Data Reduction 15.61GB 2% Auto Adaptive Compression 15.61GB 2% Effective Total Footprint 926.2MB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId08a5d78da0813ad18-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 15.96GB Total Physical Used: 11.22GB Total Storage Efficiency Ratio: 1.42:1 Total Data Reduction Logical Used Without Snapshots: 15.96GB Total Data Reduction Physical Used Without Snapshots: 11.22GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.42:1 Total Data Reduction Logical Used without snapshots and flexclones: 15.96GB Total Data Reduction Physical Used without snapshots and flexclones: 11.22GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.42:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 16.08GB Total Physical Used in FabricPool Performance Tier: 11.37GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.41:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 16.07GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 11.37GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.41:1 Logical Space Used for All Volumes: 15.96GB Physical Space Used for All Volumes: 15.96GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 17.68GB Physical Space Used by the Aggregate: 11.22GB Space Saved by Aggregate Data Reduction: 6.46GB Aggregate Data Reduction SE Ratio: 1.58:1 Logical Size 
Used by Snapshot Copies: 3.15MB Physical Size Used by Snapshot Copies: 836KB Snapshot Volume Data Reduction Ratio: 3.86:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.86:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 844.0GB 861.8GB 17.72GB 17.96GB 2% 6.46GB 27% 296.6MB 0B 6.46GB 27% 296.6MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 17.53GB 2% Aggregate Metadata 6.66GB 1% Snapshot Reserve 45.36GB 5% Total Used 63.08GB 7% Total Physical Used 17.96GB 2% Total Provisioned Space 128.0TB 14449% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
physical-used is 17.96GB. In addition, volume show-footprint reports 15.61GB for Auto Adaptive Compression. No matter how large that value is, the actual physical usage has not shrunk, so the data reduction effect was not maintained.
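The footprint numbers suggest that Footprint Data Reduction is a logical projection rather than realized savings: Effective Total Footprint appears to equal Total Footprint minus Footprint Data Reduction. Checking with the values above (the small gap is GB/MB rounding):

```python
# Sanity check against "volume show-footprint" above:
# Effective Total Footprint = Total Footprint - Footprint Data Reduction.
total_footprint_gb = 16.52
footprint_data_reduction_gb = 15.61
effective_total_footprint_mb = 926.2

effective_gb = total_footprint_gb - footprint_data_reduction_gb
# within ~10MB of the reported value; the inputs are rounded to 0.01GB
assert abs(effective_gb * 1024 - effective_total_footprint_mb) < 10
print(f"effective footprint: {effective_gb:.2f}GB")
```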
Let's also check the CloudWatch metrics.
Both All SSD StorageUsed and StorageUsed have increased compared to before tiering (9:30-9:35). StorageEfficiencySavings also dropped at the time of the write-back to SSD (9:45).
Adding a file
Let's try it one more time.
First, disable Storage Efficiency so that post-process deduplication does not kick in.
::*> volume efficiency off -volume vol1 Efficiency for volume "vol1" of Vserver "svm" is disabled.
Add a 16GiB file.
$ yes \ $(base64 /dev/urandom -w 0 \ | head -c 1K ) \ | tr -d '\n' \ | sudo dd of=/mnt/fsxn/vol1/1KB_random_pattern_text_block_16GiB_2 bs=4M count=4096 iflag=fullblock 4096+0 records in 4096+0 records out 17179869184 bytes (17 GB, 16 GiB) copied, 114.963 s, 149 MB/s $ df -hT -t nfs4 Filesystem Type Size Used Avail Use% Mounted on svm-06b1b2536d4947461.fs-08a5d78da0813ad18.fsx.us-east-1.amazonaws.com:/vol1 nfs4 122T 121T 828G 100% /mnt/fsxn/vol1
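A side note on the test data: the dd pipeline above writes one random 1KiB base64 text block repeated end to end, so the file is highly compressible even though the block itself is random. A quick illustration with zlib (not ONTAP's lzopro, just to show the idea):

```python
import base64
import os
import zlib

# Mimic the dd pipeline above: one random 1KiB base64 text block, repeated.
block = base64.b64encode(os.urandom(768))[:1024]  # 1KiB of base64 text
data = block * 4096                               # 4MiB of the repeated pattern

compressed = zlib.compress(data, level=6)
ratio = len(data) / len(compressed)
print(f"compressed {len(data)} -> {len(compressed)} bytes (ratio {ratio:.0f}:1)")
assert ratio > 10  # repeated blocks compress extremely well
```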
The Storage Efficiency, volume, and aggregate information after adding the file is as follows.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol1 Disabled Idle for 07:41:32 Thu Jan 25 07:43:59 2024 Thu Jan 25 07:43:59 2024 0B 0% 0B 32.14GB ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 128TB 827.9GB 128TB 121.6TB 32.14GB 0% 0B 0% 0B 32.14GB 0% 32.14GB 0% - 32.14GB 0B 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 32.14GB 4% Footprint in Performance Tier 32.38GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 322.4MB 0% Delayed Frees 246.0MB 0% File Operation 
Metadata 4KB 0% Total Footprint 32.69GB 4% Footprint Data Reduction 31.00GB 3% Auto Adaptive Compression 31.00GB 3% Effective Total Footprint 1.69GB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId08a5d78da0813ad18-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 32.13GB Total Physical Used: 26.47GB Total Storage Efficiency Ratio: 1.21:1 Total Data Reduction Logical Used Without Snapshots: 32.13GB Total Data Reduction Physical Used Without Snapshots: 26.47GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.21:1 Total Data Reduction Logical Used without snapshots and flexclones: 32.13GB Total Data Reduction Physical Used without snapshots and flexclones: 26.47GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.21:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 32.14GB Total Physical Used in FabricPool Performance Tier: 26.56GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.21:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.14GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 26.56GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.21:1 Logical Space Used for All Volumes: 32.13GB Physical Space Used for All Volumes: 32.13GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 32.93GB Physical Space Used by the Aggregate: 26.47GB Space Saved by Aggregate Data Reduction: 6.46GB Aggregate Data Reduction SE Ratio: 1.24:1 Logical Size 
Used by Snapshot Copies: 3.15MB Physical Size Used by Snapshot Copies: 836KB Snapshot Volume Data Reduction Ratio: 3.86:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.86:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 1 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 827.9GB 861.8GB 33.91GB 34.12GB 4% 6.46GB 16% 296.6MB 0B 6.46GB 16% 296.6MB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 33.71GB 4% Aggregate Metadata 6.67GB 1% Snapshot Reserve 45.36GB 5% Total Used 79.26GB 9% Total Physical Used 34.12GB 4% Total Provisioned Space 128.0TB 14449% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
Usage simply increased by 16GiB.
Running Inactive data compression (2nd time)
Run Inactive data compression again.
::*> volume efficiency on -vserver svm -volume vol1 Efficiency for volume "vol1" of Vserver "svm" is enabled. ::*> volume efficiency modify -vserver svm -volume vol1 -compression true ::*> volume efficiency inactive-data-compression show Vserver Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm ---------- ------ ---------- --------- -------- ------ --------------------- svm vol1 true - IDLE SUCCESS lzopro ::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0 Inactive data compression scan started on volume "vol1" in Vserver "svm" ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 0% Phase1 L1s Processed: 2027 Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 0 Phase2 Blocks Processed: 0 Number of Cold Blocks Encountered: 561016 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 553688 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 21101 Time since Last Inactive Data Compression Scan ended(sec): 21081 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 21081 Average time for Cold Data Compression(sec): 20 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 0% Phase1 L1s Processed: 29487 Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 0 Phase2 
Blocks Processed: 0 Number of Cold Blocks Encountered: 7589568 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 7562496 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 21204 Time since Last Inactive Data Compression Scan ended(sec): 21184 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 21184 Average time for Cold Data Compression(sec): 20 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 51% Phase1 L1s Processed: 31226 Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 16506912 Phase2 Blocks Processed: 8486727 Number of Cold Blocks Encountered: 8414072 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 8369856 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 21318 Time since Last Inactive Data Compression Scan ended(sec): 21297 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 21297 Average time for Cold Data Compression(sec): 20 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 
Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 8416760 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 8372560 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 144 Time since Last Inactive Data Compression Scan ended(sec): 23 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 23 Average time for Cold Data Compression(sec): 70 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0%
The number of data blocks scanned is 8,416,760, whereas the earlier run scanned 4,209,712. As verified in the following article, my understanding is that blocks already scanned by Inactive data compression are not scanned again on re-execution. It appears that once data is tiered to capacity pool storage and then written back to SSD, its blocks are treated as new blocks.
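The cold block counts line up with the data sizes if you assume 4KiB WAFL blocks (my assumption): the second run's 8,416,760 blocks is roughly the full 32GiB now on the volume, while the first run's 4,209,712 blocks is roughly the original 16GiB:

```python
# Convert "Number of Cold Blocks Encountered" to GiB, assuming 4KiB WAFL blocks.
WAFL_BLOCK_SIZE = 4096

def blocks_to_gib(blocks):
    return blocks * WAFL_BLOCK_SIZE / 2**30

print(f"2nd run: {blocks_to_gib(8_416_760):.1f}GiB")  # roughly the full 32GiB
print(f"1st run: {blocks_to_gib(4_209_712):.1f}GiB")  # roughly the original 16GiB
```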
The Storage Efficiency, volume, and aggregate information 7 minutes after the Inactive data compression run completed is as follows.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol1 Enabled Idle for 07:55:24 Thu Jan 25 07:43:59 2024 Thu Jan 25 07:43:59 2024 0B 0% 0B 32.14GB ::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol1 128TB 845.7GB 128TB 121.6TB 32.14GB 0% 0B 0% 0B 32.14GB 0% 32.14GB 0% - 32.14GB 0B 0% ::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 32.14GB 4% Footprint in Performance Tier 32.49GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 322.4MB 0% Delayed Frees 358.0MB 0% File Operation 
Metadata 4KB 0% Total Footprint 32.81GB 4% Footprint Data Reduction 31.04GB 3% Auto Adaptive Compression 31.04GB 3% Effective Total Footprint 1.77GB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId08a5d78da0813ad18-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 31.75GB Total Physical Used: 9.65GB Total Storage Efficiency Ratio: 3.29:1 Total Data Reduction Logical Used Without Snapshots: 31.74GB Total Data Reduction Physical Used Without Snapshots: 9.65GB Total Data Reduction Efficiency Ratio Without Snapshots: 3.29:1 Total Data Reduction Logical Used without snapshots and flexclones: 31.74GB Total Data Reduction Physical Used without snapshots and flexclones: 9.65GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 3.29:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 32.15GB Total Physical Used in FabricPool Performance Tier: 10.13GB Total FabricPool Performance Tier Storage Efficiency Ratio: 3.17:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.14GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 10.13GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 3.17:1 Logical Space Used for All Volumes: 31.74GB Physical Space Used for All Volumes: 31.74GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 0B Volume Deduplication Savings ratio: 1.00:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 0B Volume Data Reduction SE Ratio: 1.00:1 Logical Space Used by the Aggregate: 40.20GB Physical Space Used by the Aggregate: 9.65GB Space Saved by Aggregate Data Reduction: 30.55GB Aggregate Data Reduction SE Ratio: 4.17:1 Logical Size Used 
by Snapshot Copies: 3.15MB Physical Size Used by Snapshot Copies: 836KB Snapshot Volume Data Reduction Ratio: 3.86:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.86:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0 ::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp --------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- --------------------------------------- aggr1 845.7GB 861.8GB 16.04GB 14.80GB 2% 30.55GB 66% 1.36GB 0B 30.55GB 66% 1.36GB - ::*> aggr show-space Aggregate : aggr1 Performance Tier Feature Used Used% -------------------------------- ---------- ------ Volume Footprints 33.82GB 4% Aggregate Metadata 12.78GB 1% Snapshot Reserve 45.36GB 5% Total Used 61.40GB 7% Total Physical Used 14.80GB 2% Total Provisioned Space 128.0TB 14449% Aggregate : aggr1 Object Store: FSxFabricpoolObjectStore Feature Used Used% -------------------------------- ---------- ------ Logical Used 0B - Logical Referenced Capacity 0B - Logical Unreferenced Capacity 0B - Total Physical Used 0B - 2 entries were displayed.
physical-used dropped from 34.12GB to 14.80GB, a reduction of roughly 20GB.
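Incidentally, the saved-percent columns in aggr show appear to be computed as saved / (saved + usedsize). Checking with two of the outputs in this article (30.55GB saved with 16.04GB used here, and 8.76GB saved with 1.57GB used right after tiering):

```python
# The "-percent" columns in "aggr show" appear to be saved / (saved + usedsize);
# this is an inference from the outputs above, not a documented formula.
def saved_percent(saved_gb, used_gb):
    return round(saved_gb / (saved_gb + used_gb) * 100)

print(saved_percent(30.55, 16.04))  # matches the reported 66%
print(saved_percent(8.76, 1.57))    # matches the reported 85% after tiering
```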
Tiering to capacity pool storage (2nd time)
Tier the data to capacity pool storage again.
::*> volume show -volume vol1 -fields tiering-policy, cloud-retrieval-policy
vserver volume tiering-policy cloud-retrieval-policy
------- ------ -------------- ----------------------
svm vol1 none promote

::*> volume modify -vserver svm -volume vol1 -tiering-policy all -cloud-retrieval-policy default

Error: command failed: Unable to set volume attribute "tiering-policy" for volume "vol1" on Vserver "svm". Reason: Invalid tiering policy vol1 .

::*> volume modify -vserver svm -volume vol1 -tiering-policy snapshot-only
Volume modify successful on volume vol1 of Vserver svm.

::*> volume show -volume vol1 -fields tiering-policy, cloud-retrieval-policy
vserver volume tiering-policy cloud-retrieval-policy
------- ------ -------------- ----------------------
svm vol1 snapshot-only default

::*> volume modify -vserver svm -volume vol1 -tiering-policy all -cloud-retrieval-policy default
Volume modify successful on volume vol1 of Vserver svm.

::*> volume show -volume vol1 -fields tiering-policy, cloud-retrieval-policy
vserver volume tiering-policy cloud-retrieval-policy
------- ------ -------------- ----------------------
svm vol1 all default
The Storage Efficiency, volume, and aggregate information after tiering completed is as follows.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm vol1 Enabled Idle for 08:05:10 Thu Jan 25 07:43:59 2024 Thu Jan 25 07:43:59 2024 0B 0% 0B 32.14GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent, logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol1 128TB 859.4GB 128TB 121.6TB 32.14GB 0% 0B 0% 0B 32.14GB 0% 32.14GB 0% - 32.14GB - -

::*> volume show-footprint -volume vol1

Vserver : svm
Volume : vol1

Feature Used Used%
-------------------------------- ---------- -----
Volume Data Footprint 32.14GB 4%
Footprint in Performance Tier 648.8MB 2%
Footprint in FSxFabricpoolObjectStore 32GB 98%
Volume Guarantee 0B 0%
Flexible Volume Metadata 322.4MB 0%
Delayed Frees 501.4MB 0%
File Operation Metadata 4KB 0%
Total Footprint 32.95GB 4%
Footprint Data Reduction 619.8MB 0%
Auto Adaptive Compression 619.8MB 0%
Footprint Data Reduction in capacity tier 29.44GB -
Effective Total Footprint 2.90GB 0%

::*> aggr show-efficiency -instance
Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId08a5d78da0813ad18-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 31.75GB
Total Physical Used: 1.65GB
Total Storage Efficiency Ratio: 19.23:1
Total Data Reduction Logical Used Without Snapshots: 31.74GB
Total Data Reduction Physical Used Without Snapshots: 1.65GB
Total Data Reduction Efficiency Ratio Without Snapshots: 19.23:1
Total Data Reduction Logical Used without snapshots and flexclones: 31.74GB
Total Data Reduction Physical Used without snapshots and flexclones: 1.65GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 19.23:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 642.9MB
Total Physical Used in FabricPool Performance Tier: 47.84MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 13.44:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 639.8MB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 47.71MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 13.41:1
Logical Space Used for All Volumes: 31.74GB
Physical Space Used for All Volumes: 31.74GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 10.81GB
Physical Space Used by the Aggregate: 1.65GB
Space Saved by Aggregate Data Reduction: 9.16GB
Aggregate Data Reduction SE Ratio: 6.55:1
Logical Size Used by Snapshot Copies: 3.15MB
Physical Size Used by Snapshot Copies: 836KB
Snapshot Volume Data Reduction Ratio: 3.86:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.86:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1 859.4GB 861.8GB 2.39GB 1.87GB 0% 9.16GB 79% 426.2MB 2.28GB 9.16GB 79% 426.2MB -

::*> aggr show-space

Aggregate : aggr1
Performance Tier

Feature Used Used%
-------------------------------- ---------- ------
Volume Footprints 1.96GB 0%
Aggregate Metadata 9.59GB 1%
Snapshot Reserve 45.36GB 5%
Total Used 47.75GB 5%
Total Physical Used 1.87GB 0%
Total Provisioned Space 128.0TB 14449%

Aggregate : aggr1
Object Store: FSxFabricpoolObjectStore

Feature Used Used%
-------------------------------- ---------- ------
Logical Used 32.28GB -
Logical Referenced Capacity 32.12GB -
Logical Unreferenced Capacity 159.8MB -
Space Saved by Storage Efficiency 30.00GB -
Total Physical Used 2.28GB -
2 entries were displayed.
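The aggregate-level numbers reported above are internally consistent, which is a good way to confirm you are reading them correctly. A small check using the figures copied from the `aggr show-efficiency -instance` output:

```python
# Values from "aggr show-efficiency -instance" after tiering (GB).
logical_aggr = 10.81   # Logical Space Used by the Aggregate
physical_aggr = 1.65   # Physical Space Used by the Aggregate
saved = 9.16           # Space Saved by Aggregate Data Reduction

# logical - physical should equal the reported savings
assert abs(logical_aggr - physical_aggr - saved) < 0.01

print(f"Aggregate Data Reduction SE Ratio: {logical_aggr / physical_aggr:.2f}:1")
# Aggregate Data Reduction SE Ratio: 6.55:1
```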
Writing back to SSD (round 2)
I write the data back to SSD.
::*> volume modify -vserver svm -volume vol1 -tiering-policy none -cloud-retrieval-policy promote

Warning: The "promote" cloud retrieve policy retrieves all of the cloud data for the specified volume. If the tiering policy is "snapshot-only" then only AFS data is retrieved. If the tiering policy is "none" then all data is retrieved. Volume "vol1" in Vserver "svm" is on a FabricPool, and there are approximately 34359738368 bytes tiered to the cloud that will be retrieved. Cloud retrieval may take a significant amount of time, and may degrade performance during that time. The cloud retrieve operation may also result in data charges by your object store provider.
Do you want to continue? {y|n}: y

Volume modify successful on volume vol1 of Vserver svm.
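The warning reports the tiered data in bytes. A quick conversion shows that 34359738368 bytes is exactly 32GiB, matching the roughly 32GB footprint in FSxFabricpoolObjectStore seen earlier:

```python
# Byte count from the cloud retrieval warning above
tiered_bytes = 34_359_738_368

# Convert to GiB (1 GiB = 2**30 bytes)
print(tiered_bytes / 2**30)  # 32.0
```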
The Storage Efficiency, volume, and aggregate information after the write-back to SSD completed is as follows.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm vol1 Enabled Idle for 18:45:59 Thu Jan 25 07:43:59 2024 Thu Jan 25 07:43:59 2024 0B 0% 0B 32.15GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared, logical-used, logical-used-percent, logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ----- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol1 128TB 827.0GB 128TB 121.6TB 32.15GB 0% 0B 0% 0B 32.15GB 0% 32.15GB 0% - 32.15GB 0B 0%

::*> volume show-footprint -volume vol1

Vserver : svm
Volume : vol1

Feature Used Used%
-------------------------------- ---------- -----
Volume Data Footprint 32.15GB 4%
Footprint in Performance Tier 32.86GB 100%
Footprint in FSxFabricpoolObjectStore 0B 0%
Volume Guarantee 0B 0%
Flexible Volume Metadata 322.4MB 0%
Delayed Frees 730.9MB 0%
File Operation Metadata 4KB 0%
Total Footprint 33.18GB 4%
Footprint Data Reduction 31.39GB 3%
Auto Adaptive Compression 31.39GB 3%
Effective Total Footprint 1.78GB 0%

::*> aggr show-efficiency -instance
Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId08a5d78da0813ad18-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 31.76GB
Total Physical Used: 25.36GB
Total Storage Efficiency Ratio: 1.25:1
Total Data Reduction Logical Used Without Snapshots: 31.75GB
Total Data Reduction Physical Used Without Snapshots: 25.36GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.25:1
Total Data Reduction Logical Used without snapshots and flexclones: 31.75GB
Total Data Reduction Physical Used without snapshots and flexclones: 25.36GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.25:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 32.16GB
Total Physical Used in FabricPool Performance Tier: 25.79GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.25:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.15GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 25.79GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.25:1
Logical Space Used for All Volumes: 31.75GB
Physical Space Used for All Volumes: 31.75GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 33.22GB
Physical Space Used by the Aggregate: 25.36GB
Space Saved by Aggregate Data Reduction: 7.86GB
Aggregate Data Reduction SE Ratio: 1.31:1
Logical Size Used by Snapshot Copies: 8.76MB
Physical Size Used by Snapshot Copies: 1.03MB
Snapshot Volume Data Reduction Ratio: 8.50:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 8.50:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1 827.0GB 861.8GB 34.72GB 35.78GB 4% 7.86GB 18% 368.3MB 0B 7.86GB 18% 368.3MB -

::*> aggr show-space

Aggregate : aggr1
Performance Tier

Feature Used Used%
-------------------------------- ---------- ------
Volume Footprints 34.19GB 4%
Aggregate Metadata 8.39GB 1%
Snapshot Reserve 45.36GB 5%
Total Used 80.07GB 9%
Total Physical Used 35.78GB 4%
Total Provisioned Space 128.0TB 14449%

Aggregate : aggr1
Object Store: FSxFabricpoolObjectStore

Feature Used Used%
-------------------------------- ---------- ------
Logical Used 0B -
Logical Referenced Capacity 0B -
Logical Unreferenced Capacity 0B -
Total Physical Used 0B -
2 entries were displayed.
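Comparing the `aggr show` outputs before and after the write-back makes the collapse of the compaction savings explicit. The reported savings percentage follows from saved / (used + saved), with the figures copied from the two outputs:

```python
# (usedsize, data-compaction-space-saved) from "aggr show", in GB.
after_tiering    = (2.39, 9.16)    # reported as 79% savings
after_write_back = (34.72, 7.86)   # reported as 18% savings

for used, saved in (after_tiering, after_write_back):
    # ONTAP reports savings relative to what the used size would have been
    print(f"savings: {saved / (used + saved):.0%}")
# savings: 79%
# savings: 18%
```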
Let's also check the CloudWatch metrics.
The graphs look much like the ones from the earlier test. The StorageUsed metric for All / SSD even reaches its maximum after the write-back to SSD.
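For reference, here is a sketch of the request one could pass to boto3's `cloudwatch.get_metric_statistics(**params)` to pull the same graph. The dimension names follow the FSx for ONTAP CloudWatch metrics documentation, and the file system ID is a placeholder:

```python
from datetime import datetime, timedelta, timezone

now = datetime.now(timezone.utc)

# Request parameters for cloudwatch.get_metric_statistics(**params).
# "fs-XXXXXXXXXXXXXXXXX" is a placeholder; replace with your FSxN file system ID.
params = {
    "Namespace": "AWS/FSx",
    "MetricName": "StorageUsed",
    "Dimensions": [
        {"Name": "FileSystemId", "Value": "fs-XXXXXXXXXXXXXXXXX"},
        {"Name": "StorageTier", "Value": "SSD"},
        {"Name": "DataType", "Value": "All"},
    ],
    "StartTime": now - timedelta(days=1),
    "EndTime": now,
    "Period": 300,
    "Statistics": ["Maximum"],
}

print(params["Namespace"], params["MetricName"])  # AWS/FSx StorageUsed
```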
From this we can conclude that writing data on capacity pool storage back to SSD loses the aggregate-layer data reduction savings.
Writing back to SSD loses the aggregate-layer data reduction savings
I checked whether the aggregate-layer data reduction savings survive when data on capacity pool storage is written back to SSD.
My initial conclusion was that the aggregate-layer data reduction savings are preserved even after writing back to SSD: write-backs are rare, but if one were ever needed, keeping free SSD capacity covering the still-reduced data would be enough.
After re-testing, however, it turned out that writing data on capacity pool storage back to SSD does lose the aggregate-layer data reduction savings.
In addition, this reconfirmed that the aggregate-layer data reduction savings cannot be preserved when the SnapMirror source data resides on capacity pool storage.
In other words, it is best to assume that the aggregate-layer data reduction savings are lost the moment data is tiered to capacity pool storage.
That raises the question of how the Tiering Policy should be combined with Inactive data compression.
If the Tiering Policy's cooling days and Inactive data compression's threshold days are close together, you will barely feel the benefit of Inactive data compression.
For example, with an Inactive data compression threshold of 1 day and a Tiering Policy of Auto with 2 cooling days, the data sits compressed on the SSD for only a single day.
That is better than not configuring it at all, but it is hard to feel that it is really paying off.
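The interplay above reduces to simple arithmetic: a cold block stays compressed on the SSD for roughly cooling days minus threshold days. This is a simplified model of my own that ignores accesses resetting the cooling timer:

```python
def days_compressed_on_ssd(threshold_days: int, cooling_days: int) -> int:
    """Approximate days a cold block stays compressed on SSD before tiering."""
    return max(cooling_days - threshold_days, 0)

print(days_compressed_on_ssd(1, 2))   # 1  (the example from the text)
print(days_compressed_on_ssd(7, 31))  # 24 (a week of margin or more is easier to feel)
```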
With Tiering Policy Auto, the benefit becomes easier to notice if you leave at least a week between the threshold days and the cooling days. It suits a use case like: "Tiering worries me performance-wise, and the data is not cold enough to tier yet, but if it has not been accessed in xxx days I want it compressed to optimize SSD usage."
If the Tiering Policy is None or Snapshot Only, you should get the full compression benefit. I recommend enabling it once you have confirmed it does not affect performance.
I hope this article helps someone.
That's all from のんピ (@non____97), Consulting Department, AWS Business Division!