Trying Out Decompressing Inactive Data Compression on Amazon FSx for NetApp ONTAP
What to do when Inactive data compression turns out to have a large performance impact
Hello, this is non-Pi (@non____97).
Have you ever wondered how to respond when Inactive data compression has a large performance impact? I have.
Inactive data compression is a mechanism that compresses data in 32KB units when the data blocks have not been accessed for more than a threshold number of days. See the following article for details.
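As a point of reference, the scan state and the current inactivity threshold of a volume can be checked from the ONTAP CLI at the diag privilege level with the command used throughout this article (vol1 is the volume used in this verification):

::> set diag
::*> volume efficiency inactive-data-compression show -volume vol1 -instance

The Threshold, Threshold Upper Limit, and Threshold Lower Limit fields in this output correspond to the threshold days mentioned above.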
The Amazon FSx for NetApp ONTAP (hereafter FSxN) FAQ states that enabling Storage Efficiency features such as compression and deduplication does not adversely affect performance for most workloads.
Q: How do data compression and deduplication affect file system performance?
A: For most workloads, enabling compression and deduplication does not adversely affect the performance of your file system.
In fact, for most workloads, compression improves overall performance. In order to provide fast reads and writes from the RAM cache, FSx for ONTAP file servers have a higher level of network bandwidth available on the front-end network interface cards (NICs) than is available between the file server and the storage disks. Because data compression reduces the amount of data sent between the file server and the storage disks for most workloads, using data compression increases the overall throughput capacity of your file system. The throughput capacity increase associated with data compression is limited once your file system's front-end NIC is saturated. For more details on throughput performance when using data compression, see the FSx for ONTAP documentation.
Because it says "most," it is conceivable that compression degrades performance for some workloads.
An ONTAP Technical Report from 2014, dated as it is, states that compression can affect throughput and that you should run performance tests.
Although we have optimized compression to minimize impact on your throughput, there may still be an impact even if you are only using postprocess compression, since we still have to uncompress some data in memory when servicing reads. This impact will continue so long as the data is compressed on disk regardless of whether compression is disabled on the volume at a future point. See the section on uncompression in this document for more details.
Because of these factors, NetApp recommends that performance with compression/deduplication be carefully measured in a test setup and taken into sizing consideration before deploying.
. . (snip) . .
Compression has an impact on I/O performance. File services–type benchmark testing with compression savings of 50% has shown a decrease in throughput of ~5%. The impact on your environment will vary depending on a number of factors, including the amount of savings, the type of storage system, how busy your system is, and other factors laid out at the beginning of this section. NetApp highly recommends testing in a lab environment to fully understand the impact on your environment before implementing in production.
. . (snip) . .
Read Performance of a Compressed Volume
When data is read from a compressed volume, the impact on the read performance varies depending on the access patterns, the amount of compression savings on disk, and how busy the system resources are (CPU and disk). In a sample test with a 50% CPU load on the system, read throughput from a dataset with 50% compressibility showed decreased throughput of 25%. On a typical system the impact could be higher because of the additional load on the system. Typically the most impact is seen on small random reads of highly compressible data, and on a system that is more than 50% CPU busy. Impact on performance will vary and should be tested before implementing in production.
TR-3966 Data Compression and Deduplication DIG, clustered Data ONTAP
So, what should you do if Inactive data compression's negative performance impact surfaces after the system is already in production?
Simply disabling Inactive data compression does not decompress the data blocks; they stay compressed.
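For reference, turning the feature off is just the reverse of the modify command used later in this article. It only stops future scans; blocks that are already compressed stay compressed on disk:

::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled false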
In that case, you handle it by explicitly decompressing the data compressed by Auto adaptive compression.
Summary up front

- You can decompress by adding the -auto-adaptive-compression option to volume efficiency undo (see the sketch right after this list)
  - Storage Efficiency must be disabled beforehand
- Data compressed by Inline compression is decompressed as well
- Decompressing does not also revert the data savings gained by deduplication
- The progress of the decompression can be checked with volume efficiency show
- To enable Inactive data compression, the Storage Efficiency attribute Volume doing auto adaptive compression must be true
  - It becomes true when compression is enabled
- As far as I verified, the decompressed result is reflected on the aggregate only once the scanned data blocks are modified, regardless of whether they were actually compressed
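Putting the summary together, the decompression procedure verified in this article boils down to the following sketch, using the test volume vol1 that appears below (run at the advanced/diag privilege level):

::*> volume efficiency off -volume vol1
::*> volume efficiency undo -volume vol1 -auto-adaptive-compression true
::*> volume efficiency show -volume vol1

While the undo scan runs, the Status column of the last command shows Undoing; it returns to Idle once decompression completes.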
Reading data blocks compressed by Inactive data compression
Preparing the test environment
Let's actually try it.
First, prepare the test environment.
I prepared a volume named vol1 with neither Storage Efficiency nor Inactive data compression enabled.
::> version
NetApp Release 9.13.1P5: Thu Nov 02 20:37:09 UTC 2023

::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:02:11  auto

::*> volume efficiency inactive-data-compression show -volume vol1
Vserver    Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm
---------- ------ ---------- --------- -------- ------ ---------------------
svm        vol1   false      -         IDLE     SUCCESS lzopro
Write 4GiB of zero-filled data to this volume.
$ sudo mount -t nfs svm-039500ac2e5d1c6b6.fs-0a3d8dfad5702bbf8.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1

$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-039500ac2e5d1c6b6.fs-0a3d8dfad5702bbf8.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G  320K   61G   1% /mnt/fsxn/vol1

$ sudo dd if=/dev/zero of=/mnt/fsxn/vol1/zero_block_file bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 27.7344 s, 155 MB/s

$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-039500ac2e5d1c6b6.fs-0a3d8dfad5702bbf8.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G  4.1G   57G   7% /mnt/fsxn/vol1
Since Storage Efficiency is disabled, confirm that no inline deduplication has taken place.
::*> volume show -volume vol1 -fields used, dedupe-space-saved
vserver volume used   dedupe-space-saved
------- ------ ------ ------------------
svm     vol1   4.02GB 0B
Running Inactive data compression
Enable Inactive data compression.
::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression show -volume vol1
Vserver    Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm
---------- ------ ---------- --------- -------- ------ ---------------------
svm        vol1   false      -         IDLE     SUCCESS lzopro
Even though the command was accepted, the feature is not enabled.
Let's try enabling Storage Efficiency first and then enabling Inactive data compression.
First, enable Storage Efficiency.
::*> volume efficiency on -volume vol1 Efficiency for volume "vol1" of Vserver "svm" is enabled. ::*> volume efficiency show -volume vol1 -instance Vserver Name: svm Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:05:39 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: edb9bc3f-9af3-11ee-9a2f-3ba3cfd3db08 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Fri Dec 15 02:46:34 2023 Last Success Operation End: Fri Dec 15 02:46:34 2023 Last Operation Begin: Fri Dec 15 02:46:34 2023 Last Operation End: Fri Dec 15 02:46:34 2023 Last Operation Size: 0B Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 4.02GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: - Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 0 Duplicate Blocks Found: 0 Sorting Begin: - Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: - Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: true Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 0 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: true Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: true auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: 0 Number of indirect blocks skipped by compression phase: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true
Confirm that Inactive data compression has not been enabled automatically along with it.
::*> volume efficiency inactive-data-compression show -volume vol1
Vserver    Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm
---------- ------ ---------- --------- -------- ------ ---------------------
svm        vol1   false      -         IDLE     SUCCESS lzopro
Enable Inactive data compression.
::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression show -volume vol1
Vserver    Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm
---------- ------ ---------- --------- -------- ------ ---------------------
svm        vol1   false      -         IDLE     SUCCESS lzopro
It is still not enabled.
Checking the output of volume efficiency show -volume vol1 -instance, Volume doing auto adaptive compression was false.
In the following article, where Inactive data compression was enabled successfully, Volume doing auto adaptive compression was true.
Checking the options of volume efficiency modify, there was nothing that changes Volume doing auto adaptive compression.
Let's try enabling Inline compression.
::*> volume efficiency modify -volume vol1 -inline-compression true

Error: command failed: Failed to modify efficiency configuration for volume "vol1" of Vserver "svm": Enabling compression and disabling inline compression, or specifying the "-inline-compression" parameter when the "compression" parameter is not specified, are not supported for the volumes with auto adaptive compression.

::*> volume efficiency modify -volume vol1 -inline-compression true -compression true

::*> volume efficiency show -volume vol1 -fields inline-compression, compression, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume
vserver volume compression inline-compression auto-adaptive-compression-savings auto-adaptive-compression-existing-volume
------- ------ ----------- ------------------ --------------------------------- -----------------------------------------
svm     vol1   false       true               true                              false
Volume doing auto adaptive compression is now true.
It also appears that when enabling Inline compression, you must specify Compression together with it. Note that Compression itself still does not become enabled.
Checking the state of Inactive data compression, it is now enabled.
::*> volume efficiency inactive-data-compression show -volume vol1
Vserver    Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm
---------- ------ ---------- --------- -------- ------ ---------------------
svm        vol1   true       -         IDLE     SUCCESS lzopro
Now let's run Inactive data compression. Since I want to target all data blocks, I specify the -inactive-days 0 option.
::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0 Inactive data compression scan started on volume "vol1" in Vserver "svm" ::*> volume efficiency inactive-data-compression show -volume vol1 -instance Volume: vol1 Vserver: svm Is Enabled: true Scan Mode: default Progress: RUNNING Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: 23% Phase1 L1s Processed: 4112 Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Phase2 Total Blocks: 5502304 Phase2 Blocks Processed: 1248235 Number of Cold Blocks Encountered: 6192 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 5232 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 0 Time since Last Inactive Data Compression Scan ended(sec): 0 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 0 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0% ::*> volume efficiency inactive-data-compression show -volume vol1 -instance Volume: vol1 Vserver: svm Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: lzopro Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 9600 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 5248 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 60 Time since Last Inactive Data Compression Scan ended(sec): 30 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 30 Average time for Cold Data Compression(sec): 30 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0%
Check how much was compressed.
::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 4.02GB 0% Footprint in Performance Tier 4.02GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 107.5MB 0% Delayed Frees 6.93MB 0% File Operation Metadata 4KB 0% Total Footprint 4.13GB 0% Footprint Data Reduction 3.98GB 0% Auto Adaptive Compression 3.98GB 0% Effective Total Footprint 153.9MB 0% ::*> volume show-footprint -volume vol1 -instance Vserver: svm Volume Name: vol1 Volume MSID: 2149022693 Volume DSID: 1026 Vserver UUID: ebdf0016-9af3-11ee-9a2f-3ba3cfd3db08 Aggregate Name: aggr1 Aggregate UUID: 57345e7b-9af3-11ee-9a2f-3ba3cfd3db08 Hostname: FsxId0a3d8dfad5702bbf8-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: - Deduplication Footprint Percent: - Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 4.02GB Volume Data Footprint Percent: 0% Flexible Volume Metadata Footprint: 107.5MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 6.93MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 4.13GB Total Footprint Percent: 0% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 4.02GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: - Total Deduplication Footprint Percent: - Footprint Data Reduction by Auto Adaptive Compression: 3.98GB Footprint Data Reduction by Auto Adaptive Compression Percent: 0% Total Footprint Data Reduction: 3.98GB Total Footprint Data Reduction Percent: 0% Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 153.9MB Effective Total after Footprint Data Reduction Percent: 0% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: -
Footprint Data Reduction by Auto Adaptive Compression is 3.98GB, so 3.98GB has been compressed.
Given that the volume holds 4.02GB of data, almost all of it was reduced by compression.
Checking the read speed of data blocks compressed by Inactive data compression
While we are at it, let's check the read speed of the data blocks that Inactive data compression has compressed.
The FSxN file system prepared for this test has a throughput capacity of 128MBps, and a 128MBps file system has 16GB of in-memory cache.
So I create a 16GiB file to overwrite all of the data in the in-memory cache. Just to be safe, I create the 16GiB file twice.
$ sudo dd if=/dev/zero of=/mnt/fsxn/vol1/zero_block_file_2 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 114.308 s, 150 MB/s

$ sudo dd if=/dev/zero of=/mnt/fsxn/vol1/zero_block_file_2 bs=1M count=16384
16384+0 records in
16384+0 records out
17179869184 bytes (17 GB, 16 GiB) copied, 114.134 s, 151 MB/s
Just to be extra safe, I also reboot the NFS client OS.
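As an aside, instead of a full reboot, dropping the Linux page cache should discard the NFS client's cached file data just as well (note this clears only the client side, not the ONTAP file server's cache); an alternative I did not use here:

$ sudo sh -c 'sync; echo 3 > /proc/sys/vm/drop_caches'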
After the reboot, read the 4GiB file created earlier.
$ sudo mount -t nfs svm-0efc8502c15912e44.fs-03f7b48fddc2252fc.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1

$ sudo dd if=/mnt/fsxn/vol1/zero_block_file of=/dev/null
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 40.1662 s, 107 MB/s
The read speed was 107MBps. I repeated the test several times, and the results all fell roughly between 110MBps and 215MBps. That is quite a large spread.
Incidentally, when I read the file back-to-back without trying to overwrite the in-memory cache, I got around 500MBps. Fast.
$ sudo dd if=/mnt/fsxn/vol1/zero_block_file of=/dev/null
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 8.74941 s, 491 MB/s
Check the amount of data reduced by Auto adaptive compression again.
::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 4.08GB 0% Footprint in Performance Tier 4.31GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 107.5MB 0% Delayed Frees 237.7MB 0% File Operation Metadata 4KB 0% Total Footprint 4.42GB 0% Footprint Data Reduction 4.27GB 0% Auto Adaptive Compression 4.27GB 0% Effective Total Footprint 157.3MB 0% ::*> volume show-footprint -volume vol1 -instance Vserver: svm Volume Name: vol1 Volume MSID: 2149022693 Volume DSID: 1026 Vserver UUID: ebdf0016-9af3-11ee-9a2f-3ba3cfd3db08 Aggregate Name: aggr1 Aggregate UUID: 57345e7b-9af3-11ee-9a2f-3ba3cfd3db08 Hostname: FsxId0a3d8dfad5702bbf8-01 Tape Backup Metadata Footprint: - Tape Backup Metadata Footprint Percent: - Deduplication Footprint: - Deduplication Footprint Percent: - Temporary Deduplication Footprint: - Temporary Deduplication Footprint Percent: - Cross Volume Deduplication Footprint: - Cross Volume Deduplication Footprint Percent: - Cross Volume Temporary Deduplication Footprint: - Cross Volume Temporary Deduplication Footprint Percent: - Volume Data Footprint: 4.08GB Volume Data Footprint Percent: 0% Flexible Volume Metadata Footprint: 107.5MB Flexible Volume Metadata Footprint Percent: 0% Delayed Free Blocks: 237.7MB Delayed Free Blocks Percent: 0% SnapMirror Destination Footprint: - SnapMirror Destination Footprint Percent: - Volume Guarantee: 0B Volume Guarantee Percent: 0% File Operation Metadata: 4KB File Operation Metadata Percent: 0% Total Footprint: 4.42GB Total Footprint Percent: 0% Containing Aggregate Size: 907.1GB Name for bin0: Performance Tier Volume Footprint for bin0: 4.31GB Volume Footprint bin0 Percent: 100% Name for bin1: FSxFabricpoolObjectStore Volume Footprint for bin1: 0B Volume Footprint bin1 Percent: 0% Total Deduplication Footprint: - Total Deduplication Footprint Percent: - Footprint Data Reduction by Auto Adaptive Compression: 4.27GB Footprint Data Reduction by Auto Adaptive Compression Percent: 0% Total Footprint Data Reduction: 4.27GB Total Footprint Data Reduction Percent: 0% Footprint Data Reduction by Capacity Tier: - Footprint Data Reduction by Capacity Tier Percent: - Effective Total after Footprint Data Reduction: 157.3MB Effective Total after Footprint Data Reduction Percent: 0% Footprint Data Reduction by Compaction: - Footprint Data Reduction by Compaction Percent: - ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 20.07GB Total Physical Used: 571.2MB Total Storage Efficiency Ratio: 35.97:1 Total Data Reduction Logical Used Without Snapshots: 20.07GB Total Data Reduction Physical Used Without Snapshots: 571.2MB Total Data Reduction Efficiency Ratio Without Snapshots: 35.97:1 Total Data Reduction Logical Used without snapshots and flexclones: 20.07GB Total Data Reduction Physical Used without snapshots and flexclones: 571.2MB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 35.97:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 20.08GB Total Physical Used in FabricPool Performance Tier: 597.8MB Total FabricPool Performance Tier Storage Efficiency Ratio: 34.40:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 20.08GB Total Data Reduction 
Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 597.8MB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 34.40:1 Logical Space Used for All Volumes: 20.07GB Physical Space Used for All Volumes: 4.07GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 16GB Volume Deduplication Savings ratio: 4.94:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 16GB Volume Data Reduction SE Ratio: 4.94:1 Logical Space Used by the Aggregate: 4.53GB Physical Space Used by the Aggregate: 571.2MB Space Saved by Aggregate Data Reduction: 3.97GB Aggregate Data Reduction SE Ratio: 8.12:1 Logical Size Used by Snapshot Copies: 676KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.49:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.49:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0
The Auto adaptive compression data reduction has grown from 3.98GB to 4.27GB.
I confirmed that Inactive data compression was not running in the background, so I suspect the increase comes from Inline compression.
Also, checking the volume's deduplication savings, it showed 16GB.
::*> volume show -volume vol1 -fields used, dedupe-space-saved
vserver volume used   dedupe-space-saved
------- ------ ------ ------------------
svm     vol1   4.08GB 16GB

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 01:22:08  auto
Since the Progress field shows the volume has been idle for over an hour, this data reduction is not from post-process deduplication.
It is data reduction by inline zero-block deduplication. Checking the aggr show-efficiency output alongside it, Space Saved by Inline Zero Pattern Detection shows 16GB.
This made me worry whether the in-memory cache had really been overwritten, given the inline zero-block deduplication, so this time I create an 18GiB file generated from /dev/urandom to overwrite it.
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/urandom_block_file bs=1M count=18432
18432+0 records in
18432+0 records out
19327352832 bytes (19 GB, 18 GiB) copied, 131.518 s, 147 MB/s
After rebooting the OS, read the file compressed by Inactive data compression.
$ sudo dd if=/mnt/fsxn/vol1/zero_block_file of=/dev/null
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 22.2838 s, 193 MB/s

$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-039500ac2e5d1c6b6.fs-0a3d8dfad5702bbf8.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G   23G   39G  37% /mnt/fsxn/vol1

$ ls -lh /mnt/fsxn/vol1
total 39G
-rw-r--r--. 1 root root  18G Dec 15 04:12 urandom_block_file
-rw-r--r--. 1 root root 4.0G Dec 15 02:50 zero_block_file
-rw-r--r--. 1 root root  16G Dec 15 04:03 zero_block_file_2
193MBps. The result was not much different from the earlier measurement.
Let's check the volume show-footprint and aggr show-efficiency output.
::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 22.38GB 2% Footprint in Performance Tier 22.49GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Deduplication Metadata 108.2MB 0% Deduplication 108.2MB 0% Delayed Frees 118.3MB 0% File Operation Metadata 4KB 0% Total Footprint 22.81GB 3% Footprint Data Reduction 22.24GB 2% Auto Adaptive Compression 22.24GB 2% Effective Total Footprint 582.8MB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.32GB Total Physical Used: 18.38GB Total Storage Efficiency Ratio: 2.08:1 Total Data Reduction Logical Used Without Snapshots: 38.32GB Total Data Reduction Physical Used Without Snapshots: 18.38GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.08:1 Total Data Reduction Logical Used without snapshots and flexclones: 38.32GB Total Data Reduction Physical Used without snapshots and flexclones: 18.38GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.08:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 38.49GB Total Physical Used in FabricPool Performance Tier: 18.70GB Total FabricPool Performance Tier Storage Efficiency Ratio: 2.06:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.49GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 18.70GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.06:1 Logical Space Used for All Volumes: 38.32GB Physical Space Used for All Volumes: 22.21GB Space Saved by Volume Deduplication: 113.3MB Space Saved by Volume Deduplication and pattern detection: 16.11GB Volume Deduplication Savings ratio: 1.73:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 16GB Volume Data Reduction SE Ratio: 1.73:1 Logical Space Used by the Aggregate: 22.36GB Physical Space Used by the Aggregate: 18.38GB Space Saved by Aggregate Data Reduction: 3.97GB Aggregate Data Reduction SE Ratio: 1.22:1 Logical Size Used by Snapshot Copies: 676KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.49:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.49:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0
Auto Adaptive Compression shows 22.24GB, which looks like it grew by the 18GiB just written. Meanwhile, Space Saved by Aggregate Data Reduction is unchanged from before the write at 3.97GB.
TSSE compression processing such as Inactive data compression operates at the aggregate layer. Perhaps something goes wrong in the process of breaking the aggregate-level report down per volume for volume show-footprint.
Decompressing Auto adaptive compression
Now let's decompress the data compressed by Auto adaptive compression.
Looking at the command reference for volume efficiency undo, it seems the -auto-adaptive-compression option will do it.

[-c, -auto-adaptive-compression] - Auto Adaptive Compression
Undo the effects of auto adaptive compression.
Note that there is no option along the lines of "undo only what was compressed at a given block size."
Consequently, running this operation also decompresses data compressed by Inline compression, which compresses in 8KB units.
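Before running the undo, it is worth confirming that the volume actually reports adaptive compression savings, using the same fields and commands that appear elsewhere in this article:

::*> volume efficiency show -volume vol1 -fields auto-adaptive-compression-savings
::*> volume show-footprint -volume vol1

If the footprint output has no Auto Adaptive Compression row, there should be nothing for the undo operation to decompress.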
Now let's decompress Auto adaptive compression.
::*> volume efficiency undo -volume vol1 -auto-adaptive-compression true

Error: command failed: Failed to undo efficiency on volume "vol1" of Vserver "svm": Operation is not disabled.
It failed.
Reading the documentation more closely, it says that any efficiency operations on the volume must be disabled before issuing this command.
The command volume efficiency undo removes volume efficiency on a volume by undoing compression, undoing compaction and removing all the block sharing relationships, and cleaning up any volume efficiency specific data structures. Any efficiency operations on the volume must be disabled before issuing this command. The volume efficiency configuration is deleted when the undo process completes. The command is used to revert a volume to an earlier version of ONTAP where some of the efficiency features are not supported. During this revert not all efficiencies needs to be undone but only those gained by that particular feature (for example, compaction), which is not supported in the earlier version.
Disable Storage Efficiency.
::*> volume efficiency off -volume vol1 Efficiency for volume "vol1" of Vserver "svm" is disabled. ::*> volume efficiency show -volume vol1 -instance Vserver Name: svm Volume Name: vol1 Volume Path: /vol/vol1 State: Disabled Auto State: - Status: Idle Progress: Idle for 00:04:25 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: edb9bc3f-9af3-11ee-9a2f-3ba3cfd3db08 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Fri Dec 15 04:15:02 2023 Last Success Operation End: Fri Dec 15 04:15:18 2023 Last Operation Begin: Fri Dec 15 04:15:02 2023 Last Operation End: Fri Dec 15 04:15:18 2023 Last Operation Size: 11.33GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 38.49GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 0 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 4718591 Duplicate Blocks Found: 4 Sorting Begin: Fri Dec 15 04:15:02 UTC 2023 Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: Fri Dec 15 04:15:13 UTC 2023 Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: false Application IO Size: auto Compression Type: adaptive Storage Efficiency Mode: efficient Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 4718591 Same FP Count: 4 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 4 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: false Cross Volume Inline Deduplication: false Compression Algorithm: lzopro Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: true Volume doing auto adaptive compression: false auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: Fri Dec 15 04:15:13 UTC 2023 Number of L1s processed by compression phase: 18532 Number of indirect blocks skipped by compression phase: L1: 20534 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0 Volume Has Extended Auto Adaptive Compression: true ::*> volume efficiency inactive-data-compression show Vserver Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm ---------- ------ ---------- --------- -------- ------ --------------------- svm vol1 false - IDLE SUCCESS lzopro
With Storage Efficiency disabled, run the Auto adaptive compression decompression.
::*> volume efficiency undo -volume vol1 -auto-adaptive-compression true
The efficiency undo operation for volume "vol1" of Vserver "svm" has started.
This time it started.
Checking the Storage Efficiency information, the Status was Undoing, so the operation was in progress.
::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress             Policy
---------- ---------------- --------- ----------- -------------------- ----------
svm        vol1             Disabled  Undoing     6013440 KB Processed auto
After waiting about a minute and a half, the Auto Adaptive Compression row disappeared from the volume show-footprint output, and the Effective Total Footprint had grown from 582.8MB to 22.83GB.
::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                       Used       Used%
      --------------------------------              ---------- -----
      Volume Data Footprint                         22.38GB       2%
           Footprint in Performance Tier            22.51GB     100%
           Footprint in FSxFabricpoolObjectStore    0B            0%
      Volume Guarantee                              0B            0%
      Flexible Volume Metadata                      214.9MB       0%
      Deduplication Metadata                        108.2MB       0%
           Deduplication                            108.2MB       0%
      Delayed Frees                                 137.8MB       0%
      File Operation Metadata                       4KB           0%

      Total Footprint                               22.83GB       3%

      Effective Total Footprint                     22.83GB       3%

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Idle        Idle for 00:06:55  auto
This shows the compression was undone.
Checking aggr show-efficiency and related output, Space Saved by Volume Deduplication and pattern detection remained 16.11GB. Decompressing Auto adaptive compression does not appear to also restore the data blocks that were reduced by deduplication.
::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.34GB Total Physical Used: 18.62GB Total Storage Efficiency Ratio: 2.06:1 Total Data Reduction Logical Used Without Snapshots: 38.33GB Total Data Reduction Physical Used Without Snapshots: 18.62GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.06:1 Total Data Reduction Logical Used without snapshots and flexclones: 38.33GB Total Data Reduction Physical Used without snapshots and flexclones: 18.62GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.06:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 38.49GB Total Physical Used in FabricPool Performance Tier: 18.92GB Total FabricPool Performance Tier Storage Efficiency Ratio: 2.03:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.49GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 18.92GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.03:1 Logical Space Used for All Volumes: 38.33GB Physical Space Used for All Volumes: 22.22GB Space Saved by Volume Deduplication: 113.3MB Space Saved by Volume Deduplication and pattern detection: 16.11GB Volume Deduplication Savings ratio: 1.72:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 16GB Volume Data Reduction SE Ratio: 1.72:1 Logical Space Used by the Aggregate: 22.59GB Physical Space Used by the Aggregate: 18.62GB Space Saved by Aggregate Data Reduction: 3.97GB Aggregate Data Reduction SE Ratio: 1.21:1 Logical Size Used by Snapshot Copies: 676KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.49:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.49:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 1 ::*> volume show -volume vol1 -fields used, dedupe-space-saved vserver volume used dedupe-space-saved ------- ------ ------- ------------------ svm vol1 22.38GB 16.11GB
On the other hand, Space Saved by Aggregate Data Reduction still shows 3.97GB. Is the data not decompressed on the aggregate right away?
Decompressing again
Perhaps the data has not actually been decompressed internally.
Let's decompress one more time.
First, check the state of Inactive data compression.
::*> volume efficiency inactive-data-compression show
This table is currently empty.
The table is empty.
I tried enabling Storage Efficiency just in case, but the result did not change.
::*> volume efficiency on -volume vol1 Efficiency for volume "vol1" of Vserver "svm" is enabled. ::*> volume efficiency show -instance Vserver Name: svm Volume Name: vol1 Volume Path: /vol/vol1 State: Enabled Auto State: Auto Status: Idle Progress: Idle for 00:11:29 Type: Regular Schedule: - Efficiency Policy Name: auto Efficiency Policy UUID: edb9bc3f-9af3-11ee-9a2f-3ba3cfd3db08 Optimization: space-saving Min Blocks Shared: 1 Blocks Skipped Sharing: 0 Last Operation State: Success Last Success Operation Begin: Fri Dec 15 04:15:02 2023 Last Success Operation End: Fri Dec 15 04:15:18 2023 Last Operation Begin: Fri Dec 15 04:15:02 2023 Last Operation End: Fri Dec 15 04:15:18 2023 Last Operation Size: 11.33GB Last Operation Error: - Operation Frequency: - Changelog Usage: 0% Changelog Size: 0B Vault transfer log Size: 0B Compression Changelog Size: 0B Changelog Overflow: 0B Logical Data Size: 38.49GB Logical Data Limit: 1.25PB Logical Data Percent: 0% Queued Job: - Stale Fingerprint Percentage: 0 Stage: Done Checkpoint Time: No Checkpoint Checkpoint Operation Type: - Checkpoint Stage: - Checkpoint Substage: - Checkpoint Progress: - Fingerprints Gathered: 20966400 Blocks Processed For Compression: 0 Gathering Begin: - Gathering Phase 2 Begin: - Fingerprints Sorted: 0 Duplicate Blocks Found: 0 Sorting Begin: - Blocks Deduplicated: 0 Blocks Snapshot Crunched: 0 De-duping Begin: - Fingerprints Deleted: 0 Checking Begin: - Compression: false Inline Compression: false Application IO Size: - Compression Type: - Storage Efficiency Mode: - Verify Trigger Rate: 20 Total Verify Time: 00:00:00 Verify Suspend Count: - Constituent Volume: false Total Sorted Blocks: 0 Same FP Count: 0 Same FBN: 0 Same Data: 0 No Op: 0 Same VBN: 0 Mismatched Data: 0 Same Sharing Records: 0 Max RefCount Hits: 0 Stale Recipient Count: 0 Stale Donor Count: 0 VBN Absent Count: 0 Num Out Of Space: 0 Mismatch Due To Overwrites: 0 Stale Auxiliary Recipient Count: 0 Stale Auxiliary Recipient Block Count: 0 Mismatched Recipient Block Pointers: 0 Unattempted Auxiliary Recipient Share: 0 Skip Share Blocks Delta: 0 Skip Share Blocks Upper: 0 Inline Dedupe: false Data Compaction: false Cross Volume Inline Deduplication: false Compression Algorithm: - Cross Volume Background Deduplication: false Extended Compressed Data: true Volume has auto adaptive compression savings: false Volume doing auto adaptive compression: false auto adaptive compression on existing volume: false File compression application IO Size: - Compression Algorithm List: lzopro Compression Begin Time: - Number of L1s processed by compression phase: - Number of indirect blocks skipped by compression phase: - Volume Has Extended Auto Adaptive Compression: false ::*> volume efficiency inactive-data-compression show This table is currently empty.
Looking closely, fields such as Application IO Size and Storage Efficiency Mode show "-". Since the volume is no longer TSSE, the Inactive data compression information may simply not be displayed.
::*> volume efficiency show -fields storage-efficiency-mode, application-io-size, compression, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume
vserver volume compression application-io-size storage-efficiency-mode auto-adaptive-compression-savings auto-adaptive-compression-existing-volume
------- ------ ----------- ------------------- ----------------------- --------------------------------- -----------------------------------------
svm     vol1   false       -                   -                       false                             false
Specify auto for Application IO Size. The volume then becomes TSSE, and the Inactive data compression information can be displayed again.
::*> volume efficiency modify -volume vol1 -application-io-size auto -compression true

::*> volume efficiency show -fields storage-efficiency-mode, application-io-size, compression, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume
vserver volume compression application-io-size storage-efficiency-mode auto-adaptive-compression-savings auto-adaptive-compression-existing-volume
------- ------ ----------- ------------------- ----------------------- --------------------------------- -----------------------------------------
svm     vol1   false       auto                efficient               true                              true

::*> volume efficiency inactive-data-compression show
Vserver    Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm
---------- ------ ---------- --------- -------- ------ ---------------------
svm        vol1   false      -         IDLE     SUCCESS lzopro
At this point, check volume show-footprint and aggr show-efficiency.
::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 22.38GB 2% Footprint in Performance Tier 22.51GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Deduplication Metadata 108.2MB 0% Deduplication 108.2MB 0% Delayed Frees 138.3MB 0% File Operation Metadata 4KB 0% Total Footprint 22.83GB 3% Footprint Data Reduction 22.26GB 2% Auto Adaptive Compression 22.26GB 2% Effective Total Footprint 583.1MB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.32GB Total Physical Used: 18.60GB Total Storage Efficiency Ratio: 2.06:1 Total Data Reduction Logical Used Without Snapshots: 38.32GB Total Data Reduction Physical Used Without Snapshots: 18.60GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.06:1 Total Data Reduction Logical Used without snapshots and flexclones: 38.32GB Total Data Reduction Physical Used without snapshots and flexclones: 18.60GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.06:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 38.49GB Total Physical Used in FabricPool Performance Tier: 18.92GB Total FabricPool Performance Tier Storage Efficiency Ratio: 2.03:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.49GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 18.92GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.03:1 Logical Space Used for All Volumes: 38.32GB Physical Space Used for All Volumes: 22.21GB Space Saved by Volume Deduplication: 113.3MB Space Saved by Volume Deduplication and pattern detection: 16.11GB Volume Deduplication Savings ratio: 1.73:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 16GB Volume Data Reduction SE Ratio: 1.73:1 Logical Space Used by the Aggregate: 22.58GB Physical Space Used by the Aggregate: 18.60GB Space Saved by Aggregate Data Reduction: 3.97GB Aggregate Data Reduction SE Ratio: 1.21:1 Logical Size Used by Snapshot Copies: 676KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.49:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.49:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0
Space Saved by Aggregate Data Reduction is still 3.97GB, yet Auto Adaptive Compression is displayed again and now shows 22.26GB. So what, exactly, was the decompression that took a minute and a half earlier?
Let's decompress one more time.
::*> volume efficiency off -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is disabled.

::*> volume efficiency undo -volume vol1 -auto-adaptive-compression true
The efficiency undo operation for volume "vol1" of Vserver "svm" has started.

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress              Policy
---------- ---------------- --------- ----------- --------------------- ----------
svm        vol1             Disabled  Undoing     58725172 KB Processed auto
This time the operation completed in a few dozen seconds.
Check volume show-footprint and aggr show-efficiency.
::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 22.38GB 2% Footprint in Performance Tier 22.66GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Deduplication Metadata 108.2MB 0% Deduplication 108.2MB 0% Delayed Frees 292.5MB 0% File Operation Metadata 4KB 0% Total Footprint 22.98GB 3% Effective Total Footprint 22.98GB 3% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.34GB Total Physical Used: 18.63GB Total Storage Efficiency Ratio: 2.06:1 Total Data Reduction Logical Used Without Snapshots: 38.34GB Total Data Reduction Physical Used Without Snapshots: 18.63GB Total Data Reduction Efficiency Ratio Without Snapshots: 2.06:1 Total Data Reduction Logical Used without snapshots and flexclones: 38.34GB Total Data Reduction Physical Used without snapshots and flexclones: 18.63GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.06:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 38.49GB Total Physical Used in FabricPool Performance Tier: 18.93GB Total FabricPool Performance Tier Storage Efficiency Ratio: 2.03:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.49GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 18.93GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.03:1 Logical Space Used for All Volumes: 38.34GB Physical Space Used for All Volumes: 22.22GB Space Saved by Volume Deduplication: 113.3MB Space Saved by Volume Deduplication and pattern detection: 16.11GB Volume Deduplication Savings ratio: 1.72:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 16GB Volume Data Reduction SE Ratio: 1.72:1 Logical Space Used by the Aggregate: 22.60GB Physical Space Used by the Aggregate: 18.63GB Space Saved by Aggregate Data Reduction: 3.97GB Aggregate Data Reduction SE Ratio: 1.21:1 Logical Size Used by Snapshot Copies: 676KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.49:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.49:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 1
The situation is exactly the same as after the first decompression.
The Auto Adaptive Compression row disappeared from the volume show-footprint output, and the Effective Total Footprint grew from 583.1MB to 22.98GB. And Space Saved by Aggregate Data Reduction still points at 3.97GB.
Total Physical Used is also 18.63GB.
Checking the read speed of the data blocks
Let's check the read speed of the data blocks.
If the data really has been decompressed, the read speed should change.
Also, reading and writing data may cause the decompressed state to be reflected in the reports.
Create an 18GiB file.
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/urandom_block_file bs=1M count=18432
18432+0 records in
18432+0 records out
19327352832 bytes (19 GB, 18 GiB) copied, 129.702 s, 149 MB/s
After creating it, check volume show-footprint and aggr show-efficiency.
::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 22.33GB 2% Footprint in Performance Tier 22.73GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Deduplication Metadata 108.2MB 0% Deduplication 108.2MB 0% Delayed Frees 407.0MB 0% File Operation Metadata 4KB 0% Total Footprint 23.05GB 3% Effective Total Footprint 23.05GB 3% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.19GB Total Physical Used: 22.23GB Total Storage Efficiency Ratio: 1.72:1 Total Data Reduction Logical Used Without Snapshots: 38.18GB Total Data Reduction Physical Used Without Snapshots: 22.23GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.72:1 Total Data Reduction Logical Used without snapshots and flexclones: 38.18GB Total Data Reduction Physical Used without snapshots and flexclones: 22.23GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.72:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 38.34GB Total Physical Used in FabricPool Performance Tier: 22.57GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.70:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.33GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 22.57GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.70:1 Logical Space Used for All Volumes: 38.18GB Physical Space Used for All Volumes: 22.18GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 16GB Volume Deduplication Savings ratio: 1.72:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 16GB Volume Data Reduction SE Ratio: 1.72:1 Logical Space Used by the Aggregate: 22.23GB Physical Space Used by the Aggregate: 22.23GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 676KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.49:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.49:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 1
Space Saved by Aggregate Data Reduction has become 0B. Total Physical Used is also 22.23GB, about 4GB more than before the file was created.
From this, it appears that writes are required for the decompressed result to actually be reflected in the reports.
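If that is the case, overwriting the affected blocks in place, rather than writing a separate large file, might also trigger the reporting update. A hypothetical, untested variant using dd's conv=notrunc flag so that the existing blocks of the decompressed file are rewritten instead of the file being truncated:

$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/zero_block_file bs=1M count=4096 conv=notrunc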
Let's also check the read speed. After rebooting the OS, read the file that had been compressed by Inactive data compression.
$ sudo dd if=/mnt/fsxn/vol1/zero_block_file of=/dev/null
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 6.96954 s, 616 MB/s
616MBps, roughly three times faster. When I created an 18GiB file the same way and then read again, twice, I got 585MBps and 563MBps.
Trying the zstd compression algorithm
Changing the compression algorithm to zstd
So far, the Inactive data compression algorithm has been lzopro.
Inactive data compression can also use zstd as the compression algorithm.
While we are at it, let's check the read speed when the data is compressed with zstd.
First, change Storage Efficiency so that it operates as TSSE again, so that the Inactive data compression information can be displayed.
::*> volume efficiency on -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is enabled.

::*> volume efficiency modify -volume vol1 -application-io-size auto -compression true

::*> volume efficiency show -fields storage-efficiency-mode, application-io-size, compression, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume
vserver volume compression application-io-size storage-efficiency-mode auto-adaptive-compression-savings auto-adaptive-compression-existing-volume
------- ------ ----------- ------------------- ----------------------- --------------------------------- -----------------------------------------
svm     vol1   false       auto                efficient               true                              true

::*> volume efficiency inactive-data-compression show
Vserver    Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm
---------- ------ ---------- --------- -------- ------ ---------------------
svm        vol1   false      -         IDLE     SUCCESS lzopro
Check volume show-footprint and aggr show-efficiency.
::*> volume show-footprint -volume vol1 Vserver : svm Volume : vol1 Feature Used Used% -------------------------------- ---------- ----- Volume Data Footprint 22.31GB 2% Footprint in Performance Tier 22.74GB 100% Footprint in FSxFabricpoolObjectStore 0B 0% Volume Guarantee 0B 0% Flexible Volume Metadata 214.9MB 0% Deduplication Metadata 108.2MB 0% Deduplication 108.2MB 0% Delayed Frees 434.7MB 0% File Operation Metadata 4KB 0% Total Footprint 23.05GB 3% Footprint Data Reduction 22.48GB 2% Auto Adaptive Compression 22.48GB 2% Effective Total Footprint 585.7MB 0% ::*> aggr show-efficiency -instance Name of the Aggregate: aggr1 Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01 Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.15GB Total Physical Used: 23.04GB Total Storage Efficiency Ratio: 1.66:1 Total Data Reduction Logical Used Without Snapshots: 38.15GB Total Data Reduction Physical Used Without Snapshots: 23.04GB Total Data Reduction Efficiency Ratio Without Snapshots: 1.66:1 Total Data Reduction Logical Used without snapshots and flexclones: 38.15GB Total Data Reduction Physical Used without snapshots and flexclones: 23.04GB Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.66:1 Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 38.31GB Total Physical Used in FabricPool Performance Tier: 23.40GB Total FabricPool Performance Tier Storage Efficiency Ratio: 1.64:1 Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.31GB Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 23.40GB Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.64:1 Logical Space Used for All Volumes: 38.15GB Physical Space Used for All Volumes: 22.15GB Space Saved by Volume Deduplication: 0B Space Saved by Volume Deduplication and pattern detection: 16GB Volume Deduplication Savings ratio: 1.72:1 Space Saved by Volume Compression: 0B Volume Compression Savings ratio: 1.00:1 Space Saved by Inline Zero Pattern Detection: 16GB Volume Data Reduction SE Ratio: 1.72:1 Logical Space Used by the Aggregate: 23.04GB Physical Space Used by the Aggregate: 23.04GB Space Saved by Aggregate Data Reduction: 0B Aggregate Data Reduction SE Ratio: 1.00:1 Logical Size Used by Snapshot Copies: 676KB Physical Size Used by Snapshot Copies: 272KB Snapshot Volume Data Reduction Ratio: 2.49:1 Logical Size Used by FlexClone Volumes: 0B Physical Sized Used by FlexClone Volumes: 0B FlexClone Volume Data Reduction Ratio: 1.00:1 Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.49:1 Number of Volumes Offline: 0 Number of SIS Disabled Volumes: 1 Number of SIS Change Log Disabled Volumes: 0
Space Saved by Aggregate Data Reduction and Total Physical Used are unchanged from before the configuration change, yet Auto Adaptive Compression is now reported as 22.48GB. Strange. I will feed this back to AWS later.
Change the compression algorithm to zstd.
::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true -compression-algorithm ? lzopro zstd zlib ::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true -compression-algorithm zstd ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: true Scan Mode: - Progress: IDLE Status: FAILURE Compression Algorithm: zstd Failure Reason: Inactive data compression disabled on volume Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 0 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 0 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 2394 Time since Last Inactive Data Compression Scan ended(sec): 2390 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 2390 Average time for Cold Data Compression(sec): 0 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 0%
Now run Inactive data compression.
::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0 Inactive data compression scan started on volume "vol1" in Vserver "svm" ::*> volume efficiency inactive-data-compression show -instance Volume: vol1 Vserver: svm Is Enabled: true Scan Mode: - Progress: IDLE Status: SUCCESS Compression Algorithm: zstd Failure Reason: - Total Blocks: - Total blocks Processed: - Percentage: - Phase1 L1s Processed: - Phase1 Lns Skipped: - Phase2 Total Blocks: - Phase2 Blocks Processed: - Number of Cold Blocks Encountered: 5783640 Number of Repacked Blocks: 0 Number of Compression Done Blocks: 1048960 Number of Vol-Overwrites: 0 Time since Last Inactive Data Compression Scan started(sec): 57 Time since Last Inactive Data Compression Scan ended(sec): 30 Time since Last Successful Inactive Data Compression Scan started(sec): - Time since Last Successful Inactive Data Compression Scan ended(sec): 30 Average time for Cold Data Compression(sec): 21 Tuning Enabled: true Threshold: 14 Threshold Upper Limit: 21 Threshold Lower Limit: 14 Client Read history window: 14 Incompressible Data Percentage: 81%
Check how much was compressed.
::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        22.35GB    2%
           Footprint in Performance Tier           22.74GB    100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     214.9MB    0%
      Deduplication Metadata                       115.2MB    0%
           Deduplication                           115.2MB    0%
      Delayed Frees                                403.1MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              23.06GB    3%

      Footprint Data Reduction                     4.10GB     0%
           Auto Adaptive Compression               4.10GB     0%
      Effective Total Footprint                    18.96GB    2%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 37.91GB
Total Physical Used: 18.30GB
Total Storage Efficiency Ratio: 2.07:1
Total Data Reduction Logical Used Without Snapshots: 37.91GB
Total Data Reduction Physical Used Without Snapshots: 18.30GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.07:1
Total Data Reduction Logical Used without snapshots and flexclones: 37.91GB
Total Data Reduction Physical Used without snapshots and flexclones: 18.30GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.07:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 38.51GB
Total Physical Used in FabricPool Performance Tier: 19.10GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.02:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.51GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 19.10GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.02:1
Logical Space Used for All Volumes: 37.91GB
Physical Space Used for All Volumes: 21.75GB
Space Saved by Volume Deduplication: 167MB
Space Saved by Volume Deduplication and pattern detection: 16.16GB
Volume Deduplication Savings ratio: 1.74:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 16GB
Volume Data Reduction SE Ratio: 1.74:1
Logical Space Used by the Aggregate: 22.27GB
Physical Space Used by the Aggregate: 18.30GB
Space Saved by Aggregate Data Reduction: 3.97GB
Aggregate Data Reduction SE Ratio: 1.22:1
Logical Size Used by Snapshot Copies: 676KB
Physical Size Used by Snapshot Copies: 272KB
Snapshot Volume Data Reduction Ratio: 2.49:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.49:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0
Space Saved by Aggregate Data Reduction is now 3.97GB, and Total Physical Used has also decreased compared with before the run.
Checking the read speed of data blocks compressed with zstd
Let's check the read speed of the data blocks compressed with zstd.
First, write an 18GiB file to push the existing data out of the in-memory cache.
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/urandom_block_file bs=1M count=18432
18432+0 records in
18432+0 records out
19327352832 bytes (19 GB, 18 GiB) copied, 131.539 s, 147 MB/s
After rebooting the OS, read the data blocks compressed with zstd.
$ sudo dd if=/mnt/fsxn/vol1/zero_block_file of=/dev/null
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 10.3734 s, 414 MB/s
414 MB/s. That feels slower than when the data was decompressed. Creating another 18GiB file the same way and reading again gave 553 MB/s. The run-to-run variance is considerable.
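With this much noise, a single dd pass is hard to trust. Below is a minimal sketch, under this article's setup, of how the cold-read measurement could be repeated; note that dropping the Linux page cache only clears the client side, so it is a lighter-weight substitute for the OS reboot used above and does not touch the FSxN file server's own cache.

#!/bin/bash
# Repeat the cold-read measurement several times to average out the variance.
# Assumes the volume is mounted at /mnt/fsxn/vol1, as elsewhere in this article.
FILE=/mnt/fsxn/vol1/zero_block_file

for i in 1 2 3; do
  # Flush dirty pages and drop the client-side page cache so that dd
  # actually reads over NFS instead of from local RAM. This does NOT
  # evict the FSxN file server's in-memory cache.
  sync
  echo 3 | sudo tee /proc/sys/vm/drop_caches > /dev/null

  # dd prints the throughput on its last line.
  sudo dd if="$FILE" of=/dev/null 2>&1 | tail -n 1
done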
Re-measuring the read speed of data blocks compressed with lzopro
Decompression
Let's re-measure the read speed of data blocks compressed with lzopro. First, decompress the volume.
::*> volume efficiency off -volume vol1

::*> volume efficiency undo -volume vol1 -auto-adaptive-compression true
The efficiency undo operation for volume "vol1" of Vserver "svm" has started.

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress               Policy
---------- ---------------- --------- ----------- ---------------------- ----------
svm        vol1             Disabled  Undoing     5267456 KB Processed   auto

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress               Policy
---------- ---------------- --------- ----------- ---------------------- ----------
svm        vol1             Disabled  Undoing     6417408 KB Processed   auto

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress               Policy
---------- ---------------- --------- ----------- ---------------------- ----------
svm        vol1             Disabled  Undoing     7654656 KB Processed   auto

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress               Policy
---------- ---------------- --------- ----------- ---------------------- ----------
svm        vol1             Disabled  Undoing     19831552 KB Processed  auto

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress               Policy
---------- ---------------- --------- ----------- ---------------------- ----------
svm        vol1             Disabled  Undoing     43255552 KB Processed  auto

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress               Policy
---------- ---------------- --------- ----------- ---------------------- ----------
svm        vol1             Disabled  Undoing     82554288 KB Processed  auto

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress               Policy
---------- ---------------- --------- ----------- ---------------------- ----------
svm        vol1             Disabled  Idle        Idle for 02:41:13      auto
It completed in about 30 seconds. Let's check the volume show-footprint and aggr show-efficiency output.
::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        22.40GB    2%
           Footprint in Performance Tier           22.91GB    100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     214.9MB    0%
      Deduplication Metadata                       330.5MB    0%
           Deduplication                           219.3MB    0%
           Temporary Deduplication                 111.1MB    0%
      Delayed Frees                                518.6MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              23.44GB    3%

      Effective Total Footprint                    23.44GB    3%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.30GB
Total Physical Used: 18.97GB
Total Storage Efficiency Ratio: 2.02:1
Total Data Reduction Logical Used Without Snapshots: 38.29GB
Total Data Reduction Physical Used Without Snapshots: 18.96GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.02:1
Total Data Reduction Logical Used without snapshots and flexclones: 38.29GB
Total Data Reduction Physical Used without snapshots and flexclones: 18.96GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.02:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 38.40GB
Total Physical Used in FabricPool Performance Tier: 19.37GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.98:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.40GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 19.37GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.98:1
Logical Space Used for All Volumes: 38.29GB
Physical Space Used for All Volumes: 22.29GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 16GB
Volume Deduplication Savings ratio: 1.72:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 16GB
Volume Data Reduction SE Ratio: 1.72:1
Logical Space Used by the Aggregate: 22.93GB
Physical Space Used by the Aggregate: 18.97GB
Space Saved by Aggregate Data Reduction: 3.97GB
Aggregate Data Reduction SE Ratio: 1.21:1
Logical Size Used by Snapshot Copies: 2.50MB
Physical Size Used by Snapshot Copies: 832KB
Snapshot Volume Data Reduction Ratio: 3.07:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.07:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1
Investigating when decompression results are reflected in aggr show-efficiency
Despite the decompression, most of the values in aggr show-efficiency, including Total Physical Used, remain unchanged, just as before. In the earlier verification, the results were reflected in aggr show-efficiency once the files on the volume were overwritten. Let's check whether simply reading a file on the volume is enough to trigger the update.
$ sudo dd if=/mnt/fsxn/vol1/zero_block_file of=/dev/null
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 7.01521 s, 612 MB/s
Check the volume show-footprint and aggr show-efficiency output.
::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        22.40GB    2%
           Footprint in Performance Tier           22.91GB    100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     214.9MB    0%
      Deduplication Metadata                       330.5MB    0%
           Deduplication                           219.3MB    0%
           Temporary Deduplication                 111.1MB    0%
      Delayed Frees                                518.7MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              23.44GB    3%

      Effective Total Footprint                    23.44GB    3%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.30GB
Total Physical Used: 18.97GB
Total Storage Efficiency Ratio: 2.02:1
Total Data Reduction Logical Used Without Snapshots: 38.29GB
Total Data Reduction Physical Used Without Snapshots: 18.96GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.02:1
Total Data Reduction Logical Used without snapshots and flexclones: 38.29GB
Total Data Reduction Physical Used without snapshots and flexclones: 18.96GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.02:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 38.40GB
Total Physical Used in FabricPool Performance Tier: 19.37GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.98:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 38.40GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 19.37GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.98:1
Logical Space Used for All Volumes: 38.29GB
Physical Space Used for All Volumes: 22.29GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 16GB
Volume Deduplication Savings ratio: 1.72:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 16GB
Volume Data Reduction SE Ratio: 1.72:1
Logical Space Used by the Aggregate: 22.93GB
Physical Space Used by the Aggregate: 18.97GB
Space Saved by Aggregate Data Reduction: 3.97GB
Aggregate Data Reduction SE Ratio: 1.21:1
Logical Size Used by Snapshot Copies: 2.50MB
Physical Size Used by Snapshot Copies: 832KB
Snapshot Volume Data Reduction Ratio: 3.07:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.07:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1
No change. Next, let's see whether creating a new file triggers the update. Create a 1GiB file.
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/urandom_block_file_2 bs=1M count=1024
1024+0 records in
1024+0 records out
1073741824 bytes (1.1 GB, 1.0 GiB) copied, 6.69785 s, 160 MB/s
Check the volume show-footprint and aggr show-efficiency output.
::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        23.10GB    3%
           Footprint in Performance Tier           23.59GB    100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     214.9MB    0%
      Deduplication Metadata                       330.5MB    0%
           Deduplication                           219.3MB    0%
           Temporary Deduplication                 111.1MB    0%
      Delayed Frees                                506.3MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              24.12GB    3%

      Effective Total Footprint                    24.12GB    3%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 39.30GB
Total Physical Used: 19.94GB
Total Storage Efficiency Ratio: 1.97:1
Total Data Reduction Logical Used Without Snapshots: 39.30GB
Total Data Reduction Physical Used Without Snapshots: 19.94GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.97:1
Total Data Reduction Logical Used without snapshots and flexclones: 39.30GB
Total Data Reduction Physical Used without snapshots and flexclones: 19.94GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.97:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 39.41GB
Total Physical Used in FabricPool Performance Tier: 20.34GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.94:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 39.40GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 20.34GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.94:1
Logical Space Used for All Volumes: 39.30GB
Physical Space Used for All Volumes: 23.30GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 16GB
Volume Deduplication Savings ratio: 1.69:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 16GB
Volume Data Reduction SE Ratio: 1.69:1
Logical Space Used by the Aggregate: 23.91GB
Physical Space Used by the Aggregate: 19.94GB
Space Saved by Aggregate Data Reduction: 3.97GB
Aggregate Data Reduction SE Ratio: 1.20:1
Logical Size Used by Snapshot Copies: 2.50MB
Physical Size Used by Snapshot Copies: 832KB
Snapshot Volume Data Reduction Ratio: 3.07:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.07:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1
Space Saved by Aggregate Data Reduction is unchanged. Since we wrote 1GiB, the aggregate usage simply grew by 1GiB. Finally, overwrite the existing file on the volume.
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/urandom_block_file bs=1M count=18432
18432+0 records in
18432+0 records out
19327352832 bytes (19 GB, 18 GiB) copied, 129.689 s, 149 MB/s
Check the volume show-footprint and aggr show-efficiency output.
::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        23.28GB    3%
           Footprint in Performance Tier           23.73GB    100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     214.9MB    0%
      Deduplication Metadata                       330.5MB    0%
           Deduplication                           219.3MB    0%
           Temporary Deduplication                 111.1MB    0%
      Delayed Frees                                462.3MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              24.27GB    3%

      Effective Total Footprint                    24.27GB    3%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 39.18GB
Total Physical Used: 32.48GB
Total Storage Efficiency Ratio: 1.21:1
Total Data Reduction Logical Used Without Snapshots: 39.18GB
Total Data Reduction Physical Used Without Snapshots: 32.48GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.21:1
Total Data Reduction Logical Used without snapshots and flexclones: 39.18GB
Total Data Reduction Physical Used without snapshots and flexclones: 32.48GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.21:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 39.29GB
Total Physical Used in FabricPool Performance Tier: 32.89GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.19:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 39.28GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 32.89GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.19:1
Logical Space Used for All Volumes: 39.18GB
Physical Space Used for All Volumes: 23.18GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 16GB
Volume Deduplication Savings ratio: 1.69:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 16GB
Volume Data Reduction SE Ratio: 1.69:1
Logical Space Used by the Aggregate: 32.48GB
Physical Space Used by the Aggregate: 32.48GB
Space Saved by Aggregate Data Reduction: 0B
Aggregate Data Reduction SE Ratio: 1.00:1
Logical Size Used by Snapshot Copies: 2.50MB
Physical Size Used by Snapshot Copies: 832KB
Snapshot Volume Data Reduction Ratio: 3.07:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.07:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1
Space Saved by Aggregate Data Reduction has dropped to 0B.
Most of the overwritten file's data blocks had never actually been compressed (recall the Incompressible Data Percentage of 81%). From this, it appears that the result of decompression is reflected at the aggregate level only once the scanned data blocks are modified, regardless of whether they were actually compressed.
That said, why Total Physical Used jumped by 12.5GB, from 19.94GB to 32.48GB, is a mystery.
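If this reading is correct, a partial overwrite should give back only a proportional share of the reported savings. A minimal sketch of how I would test it, reusing only commands that appear in this article (conv=notrunc overwrites the first 1GiB in place without truncating the rest of the file):

# Overwrite only the first 1GiB of the compressed file in place
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/urandom_block_file bs=1M count=1024 conv=notrunc

# If the hypothesis holds, Space Saved by Aggregate Data Reduction should
# shrink by roughly the savings attributable to those blocks, not drop to 0B
::*> aggr show-efficiency -instance
::*> volume show-footprint -volume vol1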
Compressing with lzopro
Now compress with lzopro. Enable Storage Efficiency, configure the volume to operate with TSSE, and then check the Inactive data compression settings.
::*> volume efficiency on -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is enabled.

::*> volume efficiency modify -volume vol1 -application-io-size auto -compression true

::*> volume efficiency inactive-data-compression show
Vserver Volume Is-Enabled Scan Mode Progress Status  Compression-Algorithm
------- ------ ---------- --------- -------- ------- ---------------------
svm     vol1   false      -         IDLE     SUCCESS lzopro
The compression algorithm is lzopro.
Check the volume show-footprint and aggr show-efficiency output.
::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        23.28GB    3%
           Footprint in Performance Tier           23.73GB    100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     214.9MB    0%
      Deduplication Metadata                       330.5MB    0%
           Deduplication                           219.3MB    0%
           Temporary Deduplication                 111.1MB    0%
      Delayed Frees                                462.8MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              24.27GB    3%

      Footprint Data Reduction                     4.28GB     0%
           Auto Adaptive Compression               4.28GB     0%
      Effective Total Footprint                    19.99GB    2%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.76GB
Total Physical Used: 23.02GB
Total Storage Efficiency Ratio: 1.68:1
Total Data Reduction Logical Used Without Snapshots: 38.76GB
Total Data Reduction Physical Used Without Snapshots: 23.02GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.68:1
Total Data Reduction Logical Used without snapshots and flexclones: 38.76GB
Total Data Reduction Physical Used without snapshots and flexclones: 23.02GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.68:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 39.29GB
Total Physical Used in FabricPool Performance Tier: 23.84GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.65:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 39.28GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 23.84GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.65:1
Logical Space Used for All Volumes: 38.76GB
Physical Space Used for All Volumes: 22.76GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 16GB
Volume Deduplication Savings ratio: 1.70:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 16GB
Volume Data Reduction SE Ratio: 1.70:1
Logical Space Used by the Aggregate: 23.02GB
Physical Space Used by the Aggregate: 23.02GB
Space Saved by Aggregate Data Reduction: 0B
Aggregate Data Reduction SE Ratio: 1.00:1
Logical Size Used by Snapshot Copies: 2.50MB
Physical Size Used by Snapshot Copies: 832KB
Snapshot Volume Data Reduction Ratio: 3.07:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.07:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0
Once again, Auto Adaptive Compression shows 4.28GB even though the data has been decompressed. The identical behavior we saw earlier was apparently no coincidence.
Space Saved by Aggregate Data Reduction remains 0B. Meanwhile, Total Physical Used has dropped by about 9GB to 23.02GB. What was that momentary jump to 32GB all about?
Enable Inactive data compression and run it.
::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression show
Vserver Volume Is-Enabled Scan Mode Progress Status  Compression-Algorithm
------- ------ ---------- --------- -------- ------- ---------------------
svm     vol1   true       -         IDLE     SUCCESS lzopro

::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -instance

Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 18%
Phase1 L1s Processed: 23504
Phase1 Lns Skipped: L1: 0 L2: 65 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 11004608
Phase2 Blocks Processed: 1969069
Number of Cold Blocks Encountered: 6038272
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 1041200
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 12279
Time since Last Inactive Data Compression Scan ended(sec): 12252
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 12252
Average time for Cold Data Compression(sec): 21
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 81%

::*> volume efficiency inactive-data-compression show -instance

Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 6047384
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 1048864
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 94
Time since Last Inactive Data Compression Scan ended(sec): 10
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 10
Average time for Cold Data Compression(sec): 37
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 82%
It completed in roughly a minute and a half.
Check the volume show-footprint and aggr show-efficiency output.
::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        23.28GB    3%
           Footprint in Performance Tier           23.75GB    100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     214.9MB    0%
      Deduplication Metadata                       330.5MB    0%
           Deduplication                           219.3MB    0%
           Temporary Deduplication                 111.1MB    0%
      Delayed Frees                                479.0MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              24.28GB    3%

      Footprint Data Reduction                     4.09GB     0%
           Auto Adaptive Compression               4.09GB     0%
      Effective Total Footprint                    20.19GB    2%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId0a3d8dfad5702bbf8-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 38.67GB
Total Physical Used: 19.71GB
Total Storage Efficiency Ratio: 1.96:1
Total Data Reduction Logical Used Without Snapshots: 38.67GB
Total Data Reduction Physical Used Without Snapshots: 19.71GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.96:1
Total Data Reduction Logical Used without snapshots and flexclones: 38.67GB
Total Data Reduction Physical Used without snapshots and flexclones: 19.71GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.96:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 39.29GB
Total Physical Used in FabricPool Performance Tier: 20.62GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.91:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 39.28GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 20.62GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.91:1
Logical Space Used for All Volumes: 38.67GB
Physical Space Used for All Volumes: 22.67GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 16GB
Volume Deduplication Savings ratio: 1.71:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 16GB
Volume Data Reduction SE Ratio: 1.71:1
Logical Space Used by the Aggregate: 23.67GB
Physical Space Used by the Aggregate: 19.71GB
Space Saved by Aggregate Data Reduction: 3.96GB
Aggregate Data Reduction SE Ratio: 1.20:1
Logical Size Used by Snapshot Copies: 2.50MB
Physical Size Used by Snapshot Copies: 832KB
Snapshot Volume Data Reduction Ratio: 3.07:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 3.07:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0
About 4GB was compressed, and the aggregate usage has decreased accordingly.
Measuring the read speed
Let's measure the read speed. Create an 18GiB file.
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol1/urandom_block_file bs=1M count=18432
18432+0 records in
18432+0 records out
19327352832 bytes (19 GB, 18 GiB) copied, 132.034 s, 146 MB/s
After rebooting the OS, read the zero-filled file.
$ sudo dd if=/mnt/fsxn/vol1/zero_block_file of=/dev/null
8388608+0 records in
8388608+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 9.94262 s, 432 MB/s
432 MB/s. Measuring again with the same procedure gave 586 MB/s. The variance is large, but for the zero-filled file, zstd and lzopro appear to perform about the same.
Update 2024/1/15: Retrying with a file that resists inline compaction
Creating a 4GiB test file
The earlier tests used a zero-filled file. While running other verifications, I found that short, simple strings such as "0" or "ABCDE" end up being caught by compaction.
This time, to keep inline compaction from kicking in, the test file is prepared by repeating a 1KB string, produced by Base64-encoding binary data generated from /dev/urandom, for the desired number of bytes.
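As a rough local sanity check of this test data, you can compress the pattern at the same 32KB granularity that TSSE works in. gzip is not lzopro or zstd as ONTAP applies them, so this is only an analogy, but it should show that a 32KB chunk built from a repeated 1KB string still deflates to a small fraction of 32768 bytes, while no individual 4KB block is all zeros or a trivially short pattern of the kind inline detection collapses:

# Compress one 32KB chunk of the repeated 1KB Base64 pattern with gzip
# and print the compressed size in bytes (expected: far below 32768).
$ yes "$(base64 /dev/urandom -w 0 | head -c 1K)" \
    | tr -d '\n' \
    | head -c 32K \
    | gzip -c \
    | wc -c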
I prepared a new FSxN file system. The Storage Efficiency, volume, and aggregate information before creating the file is as follows.
::*> volume efficiency show -volume vol1 -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state    policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ -------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm     vol1   Disabled auto   false       false              efficient               false         false           true                              false                           false

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state    progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Disabled Idle for 00:53:57 Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             328KB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used  percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ----- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 60.80GB   64GB            60.80GB 328KB 0%           0B                 0%                         0B                  328KB         0%                    328KB        0%                   -                 328KB               0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        328KB      0%
           Footprint in Performance Tier           2.43MB     100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     107.5MB    0%
      Delayed Frees                                2.11MB     0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              109.9MB    0%

      Effective Total Footprint                    109.9MB    0%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 464KB
Total Physical Used: 436KB
Total Storage Efficiency Ratio: 1.06:1
Total Data Reduction Logical Used Without Snapshots: 156KB
Total Data Reduction Physical Used Without Snapshots: 304KB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 156KB
Total Data Reduction Physical Used without snapshots and flexclones: 304KB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 972KB
Total Physical Used in FabricPool Performance Tier: 11.78MB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 664KB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 11.65MB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Logical Space Used for All Volumes: 156KB
Physical Space Used for All Volumes: 156KB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 436KB
Physical Space Used by the Aggregate: 436KB
Space Saved by Aggregate Data Reduction: 0B
Aggregate Data Reduction SE Ratio: 1.00:1
Logical Size Used by Snapshot Copies: 308KB
Physical Size Used by Snapshot Copies: 132KB
Snapshot Volume Data Reduction Ratio: 2.33:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.33:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     860.6GB   861.8GB 1.13GB   135.0MB       0%                    0B                          0%                                  0B                   0B                           0B              0%                      0B               -
As a trial, create a 4GiB file.
$ sudo mount -t nfs svm-0a450d6e932f20fdc.fs-02f18e50b38b6fe03.fsx.us-east-1.amazonaws.com:/vol1 /mnt/fsxn/vol1

$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-0a450d6e932f20fdc.fs-02f18e50b38b6fe03.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G  320K   61G   1% /mnt/fsxn/vol1

$ yes \
    $(base64 /dev/urandom -w 0 \
      | head -c 1K ) \
  | tr -d '\n' \
  | sudo dd of=/mnt/fsxn/vol1/1KB_random_pattern_text_block_4GiB bs=4M count=1024 iflag=fullblock
1024+0 records in
1024+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 28.1688 s, 152 MB/s

sh-5.2$ df -hT -t nfs4
Filesystem                                                                    Type  Size  Used Avail Use% Mounted on
svm-0a450d6e932f20fdc.fs-02f18e50b38b6fe03.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G  4.1G   57G   7% /mnt/fsxn/vol1
After creating the file, the Storage Efficiency, volume, and aggregate information is as follows.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state    progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Disabled Idle for 00:59:13 Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             4.02GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 56.78GB   64GB            60.80GB 4.02GB 6%           0B                 0%                         0B                  4.02GB        6%                    4.02GB       7%                   -                 4.02GB              0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        4.02GB     0%
           Footprint in Performance Tier           4.02GB     100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     107.5MB    0%
      Delayed Frees                                4.80MB     0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              4.13GB     0%

      Effective Total Footprint                    4.13GB     0%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 4.02GB
Total Physical Used: 4.02GB
Total Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used Without Snapshots: 4.02GB
Total Data Reduction Physical Used Without Snapshots: 4.02GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones: 4.02GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 4.02GB
Total Physical Used in FabricPool Performance Tier: 4.04GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.04GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Logical Space Used for All Volumes: 4.02GB
Physical Space Used for All Volumes: 4.02GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 4.02GB
Physical Space Used by the Aggregate: 4.02GB
Space Saved by Aggregate Data Reduction: 0B
Aggregate Data Reduction SE Ratio: 1.00:1
Logical Size Used by Snapshot Copies: 308KB
Physical Size Used by Snapshot Copies: 132KB
Snapshot Volume Data Reduction Ratio: 2.33:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.33:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     856.6GB   861.8GB 5.15GB   4.18GB        0%                    0B                          0%                                  0B                   0B                           0B              0%                      0B               -
Total Physical Used is 4.02GB, so the write consumed exactly as much physical space as was written. Space Saved by Aggregate Data Reduction and data-compaction-space-saved are both 0B.
Running Inactive data compression
Run Inactive data compression.
::*> volume efficiency on -vserver svm -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is enabled.

::*> volume efficiency modify -vserver svm -volume vol1 -compression true

::*> volume efficiency inactive-data-compression modify -vserver svm -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -instance

Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 0%
Phase1 L1s Processed: 2625
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 666760
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 662272
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance

Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 16%
Phase1 L1s Processed: 3965
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 5502304
Phase2 Blocks Processed: 881280
Number of Cold Blocks Encountered: 1014808
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 1012352
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 0
Time since Last Inactive Data Compression Scan ended(sec): 0
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 0
Average time for Cold Data Compression(sec): 0
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -instance

Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 1052304
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 1048376
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 23
Time since Last Inactive Data Compression Scan ended(sec): 12
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 12
Average time for Cold Data Compression(sec): 10
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%
It appears 1,048,376 blocks were compressed. Ten minutes after Inactive data compression completed, the Storage Efficiency, volume, and aggregate information looked like this.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state   progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 01:14:42 Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             4.02GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 56.78GB   64GB            60.80GB 4.02GB 6%           0B                 0%                         0B                  4.02GB        6%                    4.02GB       7%                   -                 4.02GB              0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        4.02GB     0%
           Footprint in Performance Tier           4.04GB     100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     107.5MB    0%
      Delayed Frees                                21.03MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              4.14GB     0%

      Footprint Data Reduction                     3.86GB     0%
           Auto Adaptive Compression               3.86GB     0%
      Effective Total Footprint                    286.3MB    0%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 3.99GB
Total Physical Used: 1.92GB
Total Storage Efficiency Ratio: 2.08:1
Total Data Reduction Logical Used Without Snapshots: 3.99GB
Total Data Reduction Physical Used Without Snapshots: 1.92GB
Total Data Reduction Efficiency Ratio Without Snapshots: 2.08:1
Total Data Reduction Logical Used without snapshots and flexclones: 3.99GB
Total Data Reduction Physical Used without snapshots and flexclones: 1.92GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 2.08:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 4.02GB
Total Physical Used in FabricPool Performance Tier: 1.97GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.04:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 1.97GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 2.04:1
Logical Space Used for All Volumes: 3.99GB
Physical Space Used for All Volumes: 3.99GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 5.74GB
Physical Space Used by the Aggregate: 1.92GB
Space Saved by Aggregate Data Reduction: 3.82GB
Aggregate Data Reduction SE Ratio: 2.99:1
Logical Size Used by Snapshot Copies: 308KB
Physical Size Used by Snapshot Copies: 132KB
Snapshot Volume Data Reduction Ratio: 2.33:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.33:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     856.6GB   861.8GB 5.17GB   4.43GB        0%                    3.82GB                      43%                                 173.9MB              0B                           3.82GB          43%                     173.9MB          -
Total Physical Used in aggr show-efficiency has been reduced from 4.02GB to 1.92GB.
However, physical-used in aggr show actually increased, from 4.18GB to 4.43GB. Both commands are supposed to report aggregate-layer information, so I do not really understand why they disagree.
Another thing that bothers me is that data-compaction-space-saved has grown to 3.82GB. My understanding is that this field shows the amount of data saved by compaction, so I do not understand why it moves in step with an Inactive data compression run. Does running Inactive data compression also invoke data compaction to pack together the data blocks that compression has shrunk?
For now I will press on as is and investigate the details separately.
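When I do come back to it, one simple check would be to capture the compaction counters immediately before and after a scan, with no other I/O in between; if data-compaction-space-saved moves only across the scan, the compaction is being driven by Inactive data compression itself. A minimal sketch, using only commands that already appear in this article:

# Counters immediately before the scan
::*> aggr show -fields data-compaction-space-saved, data-compacted-count, sis-space-saved

# Run the scan and wait for Progress to return to IDLE
::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
::*> volume efficiency inactive-data-compression show -instance

# Counters immediately after: did data-compaction-space-saved move too?
::*> aggr show -fields data-compaction-space-saved, data-compacted-count, sis-space-saved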
Decompression
Decompress.
::*> volume efficiency off -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is disabled.

::*> volume efficiency undo -volume vol1 -auto-adaptive-compression true
The efficiency undo operation for volume "vol1" of Vserver "svm" has started.

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress               Policy
---------- ---------------- --------- ----------- ---------------------- ----------
svm        vol1             Disabled  Undoing     7433728 KB Processed   auto

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress               Policy
---------- ---------------- --------- ----------- ---------------------- ----------
svm        vol1             Disabled  Idle        Idle for 01:16:08      auto
Decompression completed in about 10 seconds. Seventeen minutes after decompressing, the Storage Efficiency, volume, and aggregate information looked like this.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state    progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Disabled Idle for 01:33:30 Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             4.02GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 56.78GB   64GB            60.80GB 4.02GB 6%           0B                 0%                         0B                  4.02GB        6%                    4.02GB       7%                   -                 4.02GB              0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        4.02GB     0%
           Footprint in Performance Tier           4.06GB     100%
           Footprint in FSxFabricpoolObjectStore   0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     107.5MB    0%
      Delayed Frees                                46.94MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              4.17GB     0%

      Effective Total Footprint                    4.17GB     0%

::*> aggr show-efficiency -instance

Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 4.02GB
Total Physical Used: 2.26GB
Total Storage Efficiency Ratio: 1.77:1
Total Data Reduction Logical Used Without Snapshots: 4.02GB
Total Data Reduction Physical Used Without Snapshots: 2.26GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.77:1
Total Data Reduction Logical Used without snapshots and flexclones: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones: 2.26GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.77:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 4.02GB
Total Physical Used in FabricPool Performance Tier: 2.30GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.75:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 2.30GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.75:1
Logical Space Used for All Volumes: 4.02GB
Physical Space Used for All Volumes: 4.02GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 6.09GB
Physical Space Used by the Aggregate: 2.26GB
Space Saved by Aggregate Data Reduction: 3.82GB
Aggregate Data Reduction SE Ratio: 2.69:1
Logical Size Used by Snapshot Copies: 660KB
Physical Size Used by Snapshot Copies: 272KB
Snapshot Volume Data Reduction Ratio: 2.43:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.43:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     856.6GB   861.8GB 5.20GB   8.53GB        1%                    3.82GB                      42%                                 173.9MB              0B                           3.82GB          42%                     173.9MB          -
Total Physical Used in aggr show-efficiency increased slightly, from 1.92GB to 2.26GB.
On the other hand, Space Saved by Aggregate Data Reduction and data-compaction-space-saved in aggr show both remain at 3.82GB.
And physical-used in aggr show grew by 4.1GB, from 4.43GB to 8.53GB. The physical usage is now larger than it was before compression.
Let me summarize the points from the verification so far that I have not been able to explain.
- Inactive data compressionを実行すると
aggr show
のdata-compaction-space-saved
も増加する- Inactive data compressionを実行すると、裏側でコンパクションも実行される?
aggr show-efficiency
のTotal Physical Used
とaggr show
のphysical-used
が異なる- どちらもaggregateレイヤーの物理使用量を表示するものと認識しているが、実は違う?
- Auto adaptive compressionを解凍しても
aggr show
のdata-compaction-space-saved
は変わらない?- Inactive data compressionを実行すると、裏側でコンパクションも実行されるのが正であり、ほとんどコンパクションによるデータ削減であるというのであれば納得
- Auto adaptive compressionを解凍すると、
aggr show
のphysical-used
がInactive data compression実行前よりも増加する- 解凍プロセスで新規にデータブロックを書き込む必要があり、それが解放されない?
- Auto adaptive compressionを解凍すると、
aggr show-efficiency
のTotal Physical Used
は若干増加する- 4 のようにInactive data compression実行前よりも大きくなるということはないが、Inactive data compression実行前よりも物理使用量が小さい
- 3 の仮説が正しく、Auto adaptive compressionを解凍してもコンパクションによって削減された分は戻らないのであれば納得
- Auto adaptive compressionを解凍しても、
Space Saved by Aggregate Data Reduction
とaggr show
のdata-compaction-space-saved
の結果が変わらない- 3 の仮説が正しく、Auto adaptive compressionを解凍してもコンパクションによって削減された分は戻らないのであれば納得するが、その場合解凍によって増加した300MBとは何だったのか疑問が残る
- 300MB増加分がInactive data compressionの圧縮で削減されたデータ量であれば、
Space Saved by Aggregate Data Reduction
とaggr show
のdata-compaction-space-saved
は解凍することで300MB減るのが正しい挙動では?
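To keep the numbers straight before moving on, here is a quick recap of the deltas behind questions 4 through 6, computed from the outputs quoted above (a plain shell sketch; the figures are simply the ones shown earlier in this post):

$ awk 'BEGIN {
    # aggr show-efficiency "Total Physical Used": 1.92GB before the undo -> 2.26GB after
    printf "Total Physical Used delta : %+.2f GB\n", 2.26 - 1.92
    # aggr show "physical-used": 4.43GB before inactive data compression -> 8.53GB after the undo
    printf "physical-used delta       : %+.2f GB\n", 8.53 - 4.43
    # Space Saved by Aggregate Data Reduction / data-compaction-space-saved: unchanged at 3.82GB
    printf "aggregate savings delta   : %+.2f GB\n", 3.82 - 3.82
}'
Total Physical Used delta : +0.34 GB
physical-used delta       : +4.10 GB
aggregate savings delta   : +0.00 GB

The "roughly 300MB" in question is that +0.34GB.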
Undoing compaction
If the data savings come from compaction, then let's undo the compaction.
::*> volume efficiency undo -volume vol1 -data-compaction true
The efficiency undo operation for volume "vol1" of Vserver "svm" has started.
Here are the Storage Efficiency, volume, and aggregate details five minutes after undoing compaction.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state    progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Disabled Idle for 01:38:15 Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             4.02GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used   percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 56.78GB   64GB            60.80GB 4.02GB 6%           0B                 0%                         0B                  4.02GB        6%                    4.02GB       7%                   -                 4.02GB              0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        4.02GB     0%
      Footprint in Performance Tier                4.06GB     100%
      Footprint in FSxFabricpoolObjectStore        0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     107.5MB    0%
      Delayed Frees                                47.09MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              4.17GB     0%
      Effective Total Footprint                    4.17GB     0%

::*> aggr show-efficiency -instance
Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 4.02GB
Total Physical Used: 2.26GB
Total Storage Efficiency Ratio: 1.77:1
Total Data Reduction Logical Used Without Snapshots: 4.02GB
Total Data Reduction Physical Used Without Snapshots: 2.26GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.77:1
Total Data Reduction Logical Used without snapshots and flexclones: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones: 2.26GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.77:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 4.02GB
Total Physical Used in FabricPool Performance Tier: 2.30GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.75:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 4.02GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 2.30GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.75:1
Logical Space Used for All Volumes: 4.02GB
Physical Space Used for All Volumes: 4.02GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 6.09GB
Physical Space Used by the Aggregate: 2.26GB
Space Saved by Aggregate Data Reduction: 3.82GB
Aggregate Data Reduction SE Ratio: 2.69:1
Logical Size Used by Snapshot Copies: 660KB
Physical Size Used by Snapshot Copies: 272KB
Snapshot Volume Data Reduction Ratio: 2.43:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 2.43:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     856.6GB   861.8GB 5.20GB   8.54GB        1%                    3.82GB                      42%                                 173.9MB              0B                           3.82GB          42%                     173.9MB          -
Nothing in particular has changed.
From this, the following seem plausible:
- The savings counted in data-compaction-space-saved cannot be undone with volume efficiency undo -data-compaction true
- Despite the name data-compaction-space-saved, the savings are not actually data reduction from compaction
Incidentally, running volume efficiency undo -volume vol1 -extended-auto-adaptive-compression true did not change anything either.
Adding a 48GiB test file
Perhaps the relatively small 4GiB file is making things hard to see. Let's add a 48GiB file and try again.
$ yes \
    $(base64 /dev/urandom -w 0 \
        | head -c 1K ) \
    | tr -d '\n' \
    | sudo dd of=/mnt/fsxn/vol1/1KB_random_pattern_text_block_48GiB bs=4M count=12288 iflag=fullblock
12288+0 records in
12288+0 records out
51539607552 bytes (52 GB, 48 GiB) copied, 346.898 s, 149 MB/s

$ df -hT -t nfs4
Filesystem                                                                   Type  Size  Used Avail Use% Mounted on
svm-0a450d6e932f20fdc.fs-02f18e50b38b6fe03.fsx.us-east-1.amazonaws.com:/vol1 nfs4   61G   53G  8.6G  86% /mnt/fsxn/vol1
Here are the Storage Efficiency, volume, and aggregate details 15 minutes after adding the file.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state    progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Disabled Idle for 02:06:31 Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             52.22GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 8.58GB    64GB            60.80GB 52.22GB 85%          0B                 0%                         0B                  52.22GB       82%                   52.22GB      86%                  -                 52.22GB             0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        52.22GB    6%
      Footprint in Performance Tier                52.29GB    100%
      Footprint in FSxFabricpoolObjectStore        0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     322.4MB    0%
      Delayed Frees                                73.09MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              52.61GB    6%
      Effective Total Footprint                    52.61GB    6%

::*> aggr show-efficiency -instance
Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 104.4GB
Total Physical Used: 48.49GB
Total Storage Efficiency Ratio: 2.15:1
Total Data Reduction Logical Used Without Snapshots: 52.21GB
Total Data Reduction Physical Used Without Snapshots: 48.49GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.08:1
Total Data Reduction Logical Used without snapshots and flexclones: 52.21GB
Total Data Reduction Physical Used without snapshots and flexclones: 48.49GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.08:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 104.4GB
Total Physical Used in FabricPool Performance Tier: 48.67GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.15:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 52.22GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 48.67GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.07:1
Logical Space Used for All Volumes: 52.21GB
Physical Space Used for All Volumes: 52.21GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 52.32GB
Physical Space Used by the Aggregate: 48.49GB
Space Saved by Aggregate Data Reduction: 3.82GB
Aggregate Data Reduction SE Ratio: 1.08:1
Logical Size Used by Snapshot Copies: 52.22GB
Physical Size Used by Snapshot Copies: 572KB
Snapshot Volume Data Reduction Ratio: 95733.36:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 95733.36:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     808.0GB   861.8GB 53.78GB  57.18GB       6%                    3.82GB                      7%                                  173.9MB              0B                           3.82GB          7%                      173.9MB          -

::*> aggr show-space

      Aggregate : aggr1
      Performance Tier

      Feature                                      Used       Used%
      --------------------------------             ---------- ------
      Volume Footprints                            53.62GB    6%
      Aggregate Metadata                           3.99GB     0%
      Snapshot Reserve                             45.36GB    5%

      Total Used                                   99.14GB    11%
      Total Physical Used                          57.18GB    6%
      Total Provisioned Space                      65GB       7%

      Aggregate : aggr1
      Object Store: FSxFabricpoolObjectStore

      Feature                                      Used       Used%
      --------------------------------             ---------- ------
      Logical Used                                 0B         -
      Logical Referenced Capacity                  0B         -
      Logical Unreferenced Capacity                0B         -
      Total Physical Used                          0B         -

2 entries were displayed.
Total Physical Used in aggr show-efficiency increased from 2.26GB to 48.49GB, and physical-used in aggr show from 8.54GB to 57.18GB.
Space Saved by Aggregate Data Reduction and data-compaction-space-saved are unchanged.
Re-running inactive data compression
Let's run inactive data compression again.
::*> volume efficiency on -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is enabled.

::*> volume efficiency modify -volume vol1 -application-io-size auto -compression true

::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true

::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

::*> volume efficiency inactive-data-compression show -volume vol1 -instance
Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 0%
Phase1 L1s Processed: 2777
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 705520
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 700208
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 41
Time since Last Inactive Data Compression Scan ended(sec): 26
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 26
Average time for Cold Data Compression(sec): 12
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -volume vol1 -instance
Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: default
Progress: RUNNING
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: 0%
Phase1 L1s Processed: 8988
Phase1 Lns Skipped: L1: 0 L2: 0 L3: 0 L4: 0 L5: 0 L6: 0 L7: 0
Phase2 Total Blocks: 0
Phase2 Blocks Processed: 0
Number of Cold Blocks Encountered: 2296224
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 2285104
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 67
Time since Last Inactive Data Compression Scan ended(sec): 52
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 52
Average time for Cold Data Compression(sec): 12
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%

::*> volume efficiency inactive-data-compression show -volume vol1 -instance
Volume: vol1
Vserver: svm
Is Enabled: true
Scan Mode: -
Progress: IDLE
Status: SUCCESS
Compression Algorithm: lzopro
Failure Reason: -
Total Blocks: -
Total blocks Processed: -
Percentage: -
Phase1 L1s Processed: -
Phase1 Lns Skipped: -
Phase2 Total Blocks: -
Phase2 Blocks Processed: -
Number of Cold Blocks Encountered: 13682720
Number of Repacked Blocks: 0
Number of Compression Done Blocks: 13634040
Number of Vol-Overwrites: 0
Time since Last Inactive Data Compression Scan started(sec): 33
Time since Last Inactive Data Compression Scan ended(sec): 16
Time since Last Successful Inactive Data Compression Scan started(sec): -
Time since Last Successful Inactive Data Compression Scan ended(sec): 16
Average time for Cold Data Compression(sec): 13
Tuning Enabled: true
Threshold: 14
Threshold Upper Limit: 21
Threshold Lower Limit: 14
Client Read history window: 14
Incompressible Data Percentage: 0%
13,634,040 blocks were compressed. 13,634,040 blocks × 4KB / 1,024 / 1,024 ≈ 52.01GiB, so it looks like almost every written data block was compressed.
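As a quick sanity check on that arithmetic (assuming 4KiB per block):

$ awk 'BEGIN { printf "%.2f GiB\n", 13634040 * 4 / 1024 / 1024 }'
52.01 GiB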
Here are the Storage Efficiency, volume, and aggregate details 15 minutes after the inactive data compression run completed.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state   progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Enabled Idle for 02:27:46 Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             52.23GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 8.57GB    64GB            60.80GB 52.23GB 85%          0B                 0%                         0B                  52.39GB       82%                   52.23GB      86%                  -                 52.23GB             0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        52.39GB    6%
      Footprint in Performance Tier                52.49GB    100%
      Footprint in FSxFabricpoolObjectStore        0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     364.1MB    0%
      Delayed Frees                                102.0MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              52.85GB    6%
      Footprint Data Reduction                     50.25GB    6%
           Auto Adaptive Compression               50.25GB    6%
      Effective Total Footprint                    2.60GB     0%

::*> aggr show-efficiency -instance
Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 104.0GB
Total Physical Used: 9.88GB
Total Storage Efficiency Ratio: 10.52:1
Total Data Reduction Logical Used Without Snapshots: 51.80GB
Total Data Reduction Physical Used Without Snapshots: 9.86GB
Total Data Reduction Efficiency Ratio Without Snapshots: 5.26:1
Total Data Reduction Logical Used without snapshots and flexclones: 51.80GB
Total Data Reduction Physical Used without snapshots and flexclones: 9.86GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 5.26:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 104.4GB
Total Physical Used in FabricPool Performance Tier: 10.47GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 9.97:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 52.23GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 10.45GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 5.00:1
Logical Space Used for All Volumes: 51.80GB
Physical Space Used for All Volumes: 51.80GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 59.63GB
Physical Space Used by the Aggregate: 9.88GB
Space Saved by Aggregate Data Reduction: 49.75GB
Aggregate Data Reduction SE Ratio: 6.03:1
Logical Size Used by Snapshot Copies: 52.22GB
Physical Size Used by Snapshot Copies: 169.3MB
Snapshot Volume Data Reduction Ratio: 315.89:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 315.89:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 0

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     847.9GB   861.8GB 13.86GB  12.60GB       1%                    49.75GB                     78%                                 2.22GB               0B                           49.75GB         78%                     2.22GB           -
Total Physical Used in aggr show-efficiency dropped from 48.49GB to 9.88GB, and physical-used in aggr show from 57.18GB to 12.60GB. The decreases differ somewhat: 38.61GB versus 44.58GB.
Space Saved by Aggregate Data Reduction and data-compaction-space-saved are both 49.75GB.
Undoing the compression
Now, let's undo the compression.
::*> volume efficiency off -volume vol1
Efficiency for volume "vol1" of Vserver "svm" is disabled.

::*> volume efficiency undo -volume vol1 -auto-adaptive-compression true
The efficiency undo operation for volume "vol1" of Vserver "svm" has started.

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Undoing     1345536 KB Processed auto

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Undoing     103424556 KB Processed auto

::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Undoing     Undo scan is waiting for redirect fixup to complete. auto
From about six minutes in, the status changed to "Undo scan is waiting for redirect fixup to complete."
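As an aside, polling the undo progress in a loop saves some typing. A minimal sketch, assuming you can reach the ONTAP CLI as fsxadmin via the file system's management endpoint (the hostname below is hypothetical, so substitute your own):

$ while true; do
    ssh fsxadmin@management.fs-02f18e50b38b6fe03.fsx.us-east-1.amazonaws.com \
        "volume efficiency show -volume vol1 -fields state, progress"
    sleep 60
done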
Here are the Storage Efficiency, volume, and aggregate details at this point.
::*> volume efficiency show -volume vol1
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Disabled  Undoing     Undo scan is waiting for redirect fixup to complete. auto

::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state    progress                                             last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ -------- ---------------------------------------------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Disabled Undo scan is waiting for redirect fixup to complete. Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             52.23GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 8.57GB    64GB            60.80GB 52.23GB 85%          0B                 0%                         0B                  52.40GB       82%                   52.23GB      86%                  -                 52.23GB             0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        52.40GB    6%
      Footprint in Performance Tier                52.69GB    100%
      Footprint in FSxFabricpoolObjectStore        0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     364.1MB    0%
      Delayed Frees                                295.7MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              53.04GB    6%
      Footprint Data Reduction                     50.44GB    6%
           Auto Adaptive Compression               50.44GB    6%
      Effective Total Footprint                    2.61GB     0%

::*> aggr show-efficiency -instance
Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 104.4GB
Total Physical Used: 43.40GB
Total Storage Efficiency Ratio: 2.41:1
Total Data Reduction Logical Used Without Snapshots: 52.21GB
Total Data Reduction Physical Used Without Snapshots: 43.27GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.21:1
Total Data Reduction Logical Used without snapshots and flexclones: 52.21GB
Total Data Reduction Physical Used without snapshots and flexclones: 43.27GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.21:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 104.5GB
Total Physical Used in FabricPool Performance Tier: 43.62GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.39:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 52.23GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 43.48GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.20:1
Logical Space Used for All Volumes: 52.21GB
Physical Space Used for All Volumes: 52.21GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 53.42GB
Physical Space Used by the Aggregate: 43.40GB
Space Saved by Aggregate Data Reduction: 10.02GB
Aggregate Data Reduction SE Ratio: 1.23:1
Logical Size Used by Snapshot Copies: 52.22GB
Physical Size Used by Snapshot Copies: 174.1MB
Snapshot Volume Data Reduction Ratio: 307.09:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 307.09:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     807.5GB   861.8GB 54.25GB  53.81GB       6%                    10.02GB                     16%                                 458.1MB              0B                           10.02GB         16%                     458.1MB          -
Total Physical Used in aggr show-efficiency increased from 9.88GB to 43.40GB, and physical-used in aggr show from 12.60GB to 53.81GB. The increases differ somewhat: 33.52GB versus 41.21GB.
So even without running a scan, the undo alone walks back the aggregate-layer data reduction (and the physical consumption climbs back up accordingly).
Still, it bothers me that the two values keep differing. Incidentally, the StorageUsed CloudWatch metric showed 58.4GB at this point, which matches neither of them.
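For reference, this is roughly how to pull that metric with the AWS CLI. A sketch only: the AWS/FSx namespace, FileSystemId dimension, and time window are my assumptions here, so adjust them to whatever your environment reports:

$ aws cloudwatch get-metric-statistics \
    --namespace AWS/FSx \
    --metric-name StorageUsed \
    --dimensions Name=FileSystemId,Value=fs-02f18e50b38b6fe03 \
    --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Average \
    --query 'Datapoints[].Average'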
Space Saved by Aggregate Data Reduction and data-compaction-space-saved are both 10.02GB. That is a big change: in the 4GiB case they did not move at all.
I waited another ten minutes or so, but Total Physical Used and physical-used only shifted by about 100 to 200MB; nothing major changed.
I also ran volume efficiency undo -volume vol1 -data-compaction true. Nothing changed for the first ten minutes or so, but when I checked again 26 hours later, Space Saved by Aggregate Data Reduction and data-compaction-space-saved had shrunk from 10.02GB to 3.73GB.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state    progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Disabled Idle for 28:07:56 Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             52.23GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 8.57GB    64GB            60.80GB 52.23GB 85%          0B                 0%                         0B                  52.23GB       82%                   52.23GB      86%                  -                 52.23GB             0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        52.23GB    6%
      Footprint in Performance Tier                52.84GB    100%
      Footprint in FSxFabricpoolObjectStore        0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     364.1MB    0%
      Delayed Frees                                621.8MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              53.19GB    6%
      Effective Total Footprint                    53.19GB    6%

::*> aggr show-efficiency -instance
Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 104.5GB
Total Physical Used: 48.60GB
Total Storage Efficiency Ratio: 2.15:1
Total Data Reduction Logical Used Without Snapshots: 52.21GB
Total Data Reduction Physical Used Without Snapshots: 48.60GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.07:1
Total Data Reduction Logical Used without snapshots and flexclones: 52.21GB
Total Data Reduction Physical Used without snapshots and flexclones: 48.60GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.07:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 104.5GB
Total Physical Used in FabricPool Performance Tier: 49.04GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 2.13:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 52.23GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 49.03GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.07:1
Logical Space Used for All Volumes: 52.21GB
Physical Space Used for All Volumes: 52.21GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 52.33GB
Physical Space Used by the Aggregate: 48.60GB
Space Saved by Aggregate Data Reduction: 3.73GB
Aggregate Data Reduction SE Ratio: 1.08:1
Logical Size Used by Snapshot Copies: 52.24GB
Physical Size Used by Snapshot Copies: 1.37MB
Snapshot Volume Data Reduction Ratio: 39017.90:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 39017.90:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     807.1GB   861.8GB 54.63GB  56.06GB       6%                    3.73GB                      6%                                  170.5MB              0B                           3.73GB          6%                      170.5MB          -
After waiting another 17 hours, Space Saved by Aggregate Data Reduction and data-compaction-space-saved had both reached 0B. I cannot tell whether the compaction undo brought them to 0B or whether they would have gotten there on their own, but all of the data reduction at the aggregate layer was eventually released.
::*> volume efficiency show -volume vol1 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state    progress          last-op-begin            last-op-end              last-op-size changelog-usage changelog-size logical-data-size
------- ------ -------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm     vol1   Disabled Idle for 45:59:04 Thu Jan 11 05:42:52 2024 Thu Jan 11 05:42:52 2024 0B           0%              0B             52.23GB

::*> volume show -volume vol1 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, physical-used, physical-used-percent
vserver volume size available filesystem-size total   used    percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared physical-used physical-used-percent logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------- --------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm     vol1   64GB 8.57GB    64GB            60.80GB 52.23GB 85%          0B                 0%                         0B                  52.23GB       82%                   52.23GB      86%                  -                 52.23GB             0B                                  0%

::*> volume show-footprint -volume vol1

      Vserver : svm
      Volume  : vol1

      Feature                                      Used       Used%
      --------------------------------             ---------- -----
      Volume Data Footprint                        52.23GB    6%
      Footprint in Performance Tier                52.84GB    100%
      Footprint in FSxFabricpoolObjectStore        0B         0%
      Volume Guarantee                             0B         0%
      Flexible Volume Metadata                     364.1MB    0%
      Delayed Frees                                621.8MB    0%
      File Operation Metadata                      4KB        0%

      Total Footprint                              53.19GB    6%
      Effective Total Footprint                    53.19GB    6%

::*> aggr show-efficiency -instance
Name of the Aggregate: aggr1
Node where Aggregate Resides: FsxId02f18e50b38b6fe03-01
Logical Size Used by Volumes, Clones, Snapshot Copies in the Aggregate: 104.5GB
Total Physical Used: 52.33GB
Total Storage Efficiency Ratio: 2.00:1
Total Data Reduction Logical Used Without Snapshots: 52.21GB
Total Data Reduction Physical Used Without Snapshots: 52.32GB
Total Data Reduction Efficiency Ratio Without Snapshots: 1.00:1
Total Data Reduction Logical Used without snapshots and flexclones: 52.21GB
Total Data Reduction Physical Used without snapshots and flexclones: 52.32GB
Total Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Total Logical Size Used by Volumes, Clones, Snapshot Copies in the FabricPool Performance Tier: 104.5GB
Total Physical Used in FabricPool Performance Tier: 52.76GB
Total FabricPool Performance Tier Storage Efficiency Ratio: 1.98:1
Total Data Reduction Logical Used without snapshots and flexclones in the FabricPool Performance Tier: 52.23GB
Total Data Reduction Physical Used without snapshots and flexclones in the FabricPool Performance Tier: 52.76GB
Total FabricPool Performance Tier Data Reduction Efficiency Ratio without snapshots and flexclones: 1.00:1
Logical Space Used for All Volumes: 52.21GB
Physical Space Used for All Volumes: 52.21GB
Space Saved by Volume Deduplication: 0B
Space Saved by Volume Deduplication and pattern detection: 0B
Volume Deduplication Savings ratio: 1.00:1
Space Saved by Volume Compression: 0B
Volume Compression Savings ratio: 1.00:1
Space Saved by Inline Zero Pattern Detection: 0B
Volume Data Reduction SE Ratio: 1.00:1
Logical Space Used by the Aggregate: 52.33GB
Physical Space Used by the Aggregate: 52.33GB
Space Saved by Aggregate Data Reduction: 0B
Aggregate Data Reduction SE Ratio: 1.00:1
Logical Size Used by Snapshot Copies: 52.25GB
Physical Size Used by Snapshot Copies: 1.59MB
Snapshot Volume Data Reduction Ratio: 33738.97:1
Logical Size Used by FlexClone Volumes: 0B
Physical Sized Used by FlexClone Volumes: 0B
FlexClone Volume Data Reduction Ratio: 1.00:1
Snapshot And FlexClone Volume Data Reduction SE Ratio: 33738.97:1
Number of Volumes Offline: 0
Number of SIS Disabled Volumes: 1
Number of SIS Change Log Disabled Volumes: 1

::*> aggr show -fields availsize, usedsize, size, physical-used, physical-used-percent, data-compaction-space-saved, data-compaction-space-saved-percent, data-compacted-count, composite-capacity-tier-used, sis-space-saved, sis-space-saved-percent, sis-shared-count, inactive-data-reporting-start-timestamp
aggregate availsize size    usedsize physical-used physical-used-percent data-compaction-space-saved data-compaction-space-saved-percent data-compacted-count composite-capacity-tier-used sis-space-saved sis-space-saved-percent sis-shared-count inactive-data-reporting-start-timestamp
--------- --------- ------- -------- ------------- --------------------- --------------------------- ----------------------------------- -------------------- ---------------------------- --------------- ----------------------- ---------------- ---------------------------------------
aggr1     807.1GB   861.8GB 54.63GB  57.25GB       6%                    0B                          0%                                  0B                   0B                           0B              0%                      0B               -
Adding what this round of testing showed to the earlier list of open questions:

1. Running inactive data compression also increases data-compaction-space-saved in aggr show
    - Does running inactive data compression also trigger compaction behind the scenes?
    - -> Space Saved by Aggregate Data Reduction and data-compaction-space-saved always match, and both moved after undoing auto adaptive compression, so I suspect data-compaction-space-saved is not a compaction-only figure
2. Total Physical Used in aggr show-efficiency differs from physical-used in aggr show
    - I understood both to show physical usage at the aggregate layer, but perhaps they actually do not?
    - -> Still unresolved even after the additional testing
3. Undoing auto adaptive compression does not change data-compaction-space-saved in aggr show?
    - This would make sense if running inactive data compression really does trigger compaction behind the scenes and most of the savings come from compaction
    - -> Cannot be stated in general: in the 52GiB test the value changed substantially after the undo
4. After undoing auto adaptive compression, physical-used in aggr show is higher than before inactive data compression was run
    - Does the undo process have to write new data blocks that are then never freed?
    - -> Cannot be stated in general: in the 52GiB test it came out lower
5. Undoing auto adaptive compression slightly increases Total Physical Used in aggr show-efficiency
    - Unlike 4, it does not exceed the value from before inactive data compression was run; physical usage stays below the pre-compression level
    - This would make sense if hypothesis 3 is correct and the compaction savings are not given back on undo
    - -> The 52GiB test behaved the same way
6. Undoing auto adaptive compression does not change Space Saved by Aggregate Data Reduction or data-compaction-space-saved in aggr show
    - This would also make sense under hypothesis 3, but then what was the roughly 300MB increase caused by the undo?
    - If that 300MB is the amount that inactive data compression saved through compression, shouldn't Space Saved by Aggregate Data Reduction and data-compaction-space-saved drop by 300MB on undo?
    - -> In the 52GiB test these values changed substantially
I think the crux is whether compaction runs after inactive data compression.
Addendum, 2024/4/17: thanks to feedback, the following KB article was published. In short, compaction does run after inactive data compression, and the savings from inactive data compression are counted as part of the compaction savings.
Execution of inactive data compression increases data compaction savings as well
Applies to
Amazon FSx for Netapp ONTAP
Answer
- Data Compaction increases because compression savings are considered in data compaction savings.
- Compaction runs followed by compression.
Additional Information
Output reference :
- Before executing inactive data compression :
FsxId*> aggr show -fields data-compaction-space-saved, sis-space-saved
aggregate data-compaction-space-saved sis-space-saved
--------- --------------------------- ---------------
aggr1     1.51GB                      1.51GB
- After executing inactive data compression :
FsxId*> volume efficiency inactive-data-compression start -volume vol2 -inactive-days 0

FsxId*> aggr show -fields data-compaction-space-saved, sis-space-saved
aggregate data-compaction-space-saved sis-space-saved
--------- --------------------------- ---------------
aggr1     1.87GB                      1.87GB
Beware: inline compression by TSSE is undone as well
I tried undoing (decompressing) the compression applied by inactive data compression on Amazon FSx for NetApp ONTAP.
The undo itself is very easy to perform, but be careful: inline compression applied by TSSE is undone along with it.
Re-compressing the decompressed data blocks at 8KB granularity afterwards looks like quite a chore.
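If you do need to compress the data again after an undo, the building blocks are the same commands used earlier in this post. A rough sketch: re-enable Storage Efficiency and force an inactive data compression scan. Note that this re-applies the 32KB cold compression; whether you can get back the original 8KB inline-compressed layout is exactly the pain point above:

::*> volume efficiency on -volume vol1
::*> volume efficiency modify -volume vol1 -application-io-size auto -compression true
::*> volume efficiency inactive-data-compression modify -volume vol1 -is-enabled true
::*> volume efficiency inactive-data-compression start -volume vol1 -inactive-days 0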
I hope this article helps someone.
That's all from のんピ (@non____97), Consulting Department, AWS Business Division!