[Amazon FSx for NetApp ONTAP] Checking whether deduplication is preserved when restoring from an FSx backup
Curious about a volume's physical usage after restoring from a backup that contains deduplicated data
Hello, this is のんピ (@non____97).
Have you ever wondered, when using Amazon FSx for NetApp ONTAP (FSxN below), how much physical space a volume consumes after restoring from a backup that contains deduplicated data? I have.
In the following article, I found that FSx backups perform their own data reduction on backup storage, separately from the data reduction at the aggregate layer of the FSxN file system. I also confirmed that the aggregate-layer savings are not preserved when restoring.
So what about deduplication? Can a volume be restored with deduplication still applied?
I actually tried it out, so here is what I found.
Summary up front
- Deduplication is not preserved when restoring from an FSx backup
- Volumes are restored with almost no deduplication applied
- If you restore a backup of a heavily deduplicated volume while SSD free space is low, running out of SSD capacity is unavoidable
- If Storage Efficiency is turned on at restore time, Storage Efficiency runs on the restored volume afterwards
- Running Storage Efficiency consumes FSxN file system resources such as CPU and disk IOPS, so it affects performance
- Note that an FSxN file system runs at most eight Storage Efficiency operations concurrently (see the check sketched just after this list)
- Note that with the tiering policy All, data is tiered to capacity pool storage before deduplication has fully run
- If you want to back up and restore with deduplication intact, you will need to use SnapMirror
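As a quick way to see how many Storage Efficiency operations are in flight against that eight-operation limit, you can list the per-volume progress from the ONTAP CLI. This is only a minimal sketch that reuses the fields shown later in this article; the wildcard volume pattern is just an example.

::*> volume efficiency show -volume * -fields state, progress

Volumes whose progress column shows anything other than "Idle for ..." have an active Storage Efficiency operation.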
Trying it out
Creating a volume
First, create a volume.
The FSxN file system and SVM used for the volume are the ones in the Osaka region from the following article.
I created the volume from the management console.
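If you prefer the AWS CLI over the management console, the equivalent call should look roughly like the following. This is only a sketch under the assumption of a 64 GiB volume with the tiering policy None; the junction path is an example.

$ aws fsx create-volume \
    --volume-type ONTAP \
    --name vol2 \
    --ontap-configuration 'JunctionPath=/vol2,SizeInMegabytes=65536,StorageVirtualMachineId=svm-0a7e0e36f5d9aebb9,StorageEfficiencyEnabled=true,TieringPolicy={Name=NONE}'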
After creating it, check the Storage Efficiency settings and volume information from the ONTAP CLI.
::> set diag

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

::*> volume efficiency show -volume vol2 -fields state, policy, storage-efficiency-mode, inline-compression, inline-dedupe, compression, data-compaction, auto-adaptive-compression-savings, auto-adaptive-compression-existing-volume, using-auto-adaptive-compression
vserver volume state policy compression inline-compression storage-efficiency-mode inline-dedupe data-compaction auto-adaptive-compression-savings using-auto-adaptive-compression auto-adaptive-compression-existing-volume
------- ------ ------- ------ ----------- ------------------ ----------------------- ------------- --------------- --------------------------------- ------------------------------- -----------------------------------------
svm vol2 Enabled auto false true efficient true true true true false

::*> volume show -volume vol2 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ----- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol2 64GB 60.80GB 64GB 60.80GB 308KB 0% 0B 0% 0B 308KB 0% - 308KB 0B 0%
Creating a text file
Next, create a text file.
The file is prepared by taking a 1 KB string of Base64-encoded binary data generated from /dev/urandom and repeating it until the specified number of bytes is reached.
$ sudo mkdir -p /mnt/fsxn/vol2
$ sudo mount -t nfs svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol2 /mnt/fsxn/vol2
$ yes \
    $(base64 /dev/urandom -w 0 \
      | head -c 1K ) \
  | tr -d '\n' \
  | sudo dd of=/mnt/fsxn/vol2/1KB_random_pattern_text_block_32GiB bs=4M count=8192 iflag=fullblock
8192+0 records in
8192+0 records out
34359738368 bytes (34 GB, 32 GiB) copied, 263.335 s, 130 MB/s

$ df -hT -t nfs4
Filesystem Type Size Used Avail Use% Mounted on
svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol2 nfs4 61G 31G 31G 51% /mnt/fsxn/vol2
After creating the text file, check the Storage Efficiency and volume information from the ONTAP CLI again.
::*> volume efficiency show -volume vol2 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size
vserver volume state progress last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ------------------- ------------ --------------- -------------- -----------------
svm vol2 Enabled 129132 KB (1%) Done 0B 35% 303.8MB 32.43GB

::*> volume show -volume vol2 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol2 64GB 30.11GB 64GB 60.80GB 30.69GB 50% 1.75GB 5% 56KB 32.44GB 53% - 32.44GB 0B 0%
Post-process deduplication is running.
Let's wait and watch for a while.
::*> volume efficiency show -volume vol2 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm vol2 Enabled Idle for 00:34:13 Tue Jan 09 09:29:17 2024 Tue Jan 09 13:42:09 2024 7.07GB 35% 233.1MB 32.47GB

::*> volume show -volume vol2 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available
vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent
------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- -------------------------------------------
svm vol2 64GB 37.02GB 64GB 60.80GB 23.78GB 39% 8.69GB 27% 280KB 32.47GB 53% - 32.47GB 0B 0%
In the end it took more than four hours.
Perhaps thanks to the four-hour wait, 8.69GB has been deduplicated.
From the NFS client, too, you can see that usage has dropped compared to right after the text file was created.
$ df -hT -t nfs4
Filesystem Type Size Used Avail Use% Mounted on
svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol2 nfs4 61G 24G 38G 40% /mnt/fsxn/vol2
Creating binary files
Let's also try binary files, which do not compress well.
First, prepare a 4 GiB file.
$ sudo dd if=/dev/urandom of=/mnt/fsxn/vol2/random_pattern_binary_block_4GiB bs=1M count=4096
4096+0 records in
4096+0 records out
4294967296 bytes (4.3 GB, 4.0 GiB) copied, 32.2173 s, 133 MB/s

$ df -hT -t nfs4
Filesystem Type Size Used Avail Use% Mounted on
svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol2 nfs4 61G 28G 33G 46% /mnt/fsxn/vol2
At this point no additional data reduction from deduplication has occurred.
::*> volume efficiency show -volume vol2 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ------------------ ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled 78616 KB (0%) Done Tue Jan 09 09:29:17 2024 Tue Jan 09 13:42:09 2024 7.07GB 6% 273.1MB 36.53GB ::*> volume show -volume vol2 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 33.04GB 64GB 60.80GB 27.76GB 45% 8.76GB 24% 284KB 36.53GB 60% - 36.53GB 0B 0%
Copy the binary file seven times.
$ for i in {1..7}; do
    sudo cp /mnt/fsxn/vol2/random_pattern_binary_block_4GiB "/mnt/fsxn/vol2/random_pattern_binary_block_4GiB_copy_${i}"
  done

$ df -hT -t nfs4
Filesystem Type Size Used Avail Use% Mounted on
svm-0a7e0e36f5d9aebb9.fs-0f1302327a12b6488.fsx.ap-northeast-3.amazonaws.com:/vol2 nfs4 61G 40G 22G 65% /mnt/fsxn/vol2

$ ls -l /mnt/fsxn/vol2
total 67373168
-rw-r--r--. 1 root root 34359738368 Jan 9 09:32 1KB_random_pattern_text_block_32GiB
-rw-r--r--. 1 root root 4294967296 Jan 9 14:20 random_pattern_binary_block_4GiB
-rw-r--r--. 1 root root 4294967296 Jan 9 14:29 random_pattern_binary_block_4GiB_copy_1
-rw-r--r--. 1 root root 4294967296 Jan 9 14:29 random_pattern_binary_block_4GiB_copy_2
-rw-r--r--. 1 root root 4294967296 Jan 9 14:30 random_pattern_binary_block_4GiB_copy_3
-rw-r--r--. 1 root root 4294967296 Jan 9 14:30 random_pattern_binary_block_4GiB_copy_4
-rw-r--r--. 1 root root 4294967296 Jan 9 14:31 random_pattern_binary_block_4GiB_copy_5
-rw-r--r--. 1 root root 4294967296 Jan 9 14:31 random_pattern_binary_block_4GiB_copy_6
-rw-r--r--. 1 root root 4294967296 Jan 9 14:32 random_pattern_binary_block_4GiB_copy_7
::*> volume efficiency show -volume vol2 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ------------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled 415152 KB (1%) Done Tue Jan 09 09:29:17 2024 Tue Jan 09 13:42:09 2024 7.07GB 23% 386.5MB 64.76GB ::*> volume show -volume vol2 -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 21.79GB 64GB 60.80GB 39.01GB 64% 25.74GB 40% 2.50GB 64.76GB 107% - 64.76GB 0B 0%
The data savings from deduplication reached 25.74GB. Inline deduplication seems to be working quite well.
I want to squeeze out as much deduplication as possible, so I run Storage Efficiency manually.
Storage Efficiency was already running, so I stop it once and then start it again with -scan-old-data.
::*> volume efficiency start -volume vol2 -scan-old-data
Warning: This operation scans all of the data in volume "vol2" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
Error: command failed: Failed to start efficiency on volume "vol2" of Vserver "svm": Another sis operation is currently active.

::*> volume efficiency stop -volume vol2
The efficiency operation for volume "vol2" of Vserver "svm" is being stopped.

::*> volume efficiency show -volume vol2 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm vol2 Enabled Idle for 00:00:04 Tue Jan 09 14:19:54 2024 Tue Jan 09 14:36:15 2024 0B 23% 386.5MB 64.76GB

::*> volume efficiency start -volume vol2 -scan-old-data
Warning: This operation scans all of the data in volume "vol2" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol2" of Vserver "svm" has started.

::*> volume efficiency show -volume vol2 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size
------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm vol2 Enabled 772096 KB Scanned Tue Jan 09 14:19:54 2024 Tue Jan 09 14:36:15 2024 0B 0% 0B 64.28GB
Somehow Storage Efficiency ran slower than usual. Neither IOPS nor throughput was spiking.
Waiting more than four hours again would be painful, so let's move on.
Restoring with Storage Efficiency on
Take a backup and restore from it.
The current state of Storage Efficiency, the volume, and its snapshots is as follows.
::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 06:13:40 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 25.22GB 64GB 60.80GB 35.58GB 58% 60.00GB 63% 4.00GB 95.58GB 157% - 64.34GB 0B 0% ::*> snapshot show -volume vol2 ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol2 backup-0c200f9854188e74d 34.43GB 54% 89%
The Storage Efficiency run has completed.
Also, because the backup was taken while Storage Efficiency was running, the snapshot has ballooned to 34.43GB.
Now let's restore. When restoring, I turn Storage Efficiency on.
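For reference, the same backup-and-restore flow can also be driven from the AWS CLI; I performed the actual operations from the management console. The following is only a sketch, and the exact set of --ontap-configuration keys that create-volume-from-backup requires is an assumption here.

$ aws fsx create-backup --volume-id fsvol-0930084c85460e6d5

$ aws fsx create-volume-from-backup \
    --backup-id backup-0c200f9854188e74d \
    --name vol2_restored_se_on \
    --ontap-configuration 'JunctionPath=/vol2_restored_se_on,StorageVirtualMachineId=svm-0a7e0e36f5d9aebb9,StorageEfficiencyEnabled=true,TieringPolicy={Name=NONE}'

StorageEfficiencyEnabled=true is the part that corresponds to turning Storage Efficiency on in the console.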
Check the volume while the restore is in progress.
::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 06:17:17 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored_se_on Enabled Idle for 00:01:29 Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 28% 188.5MB 19.94GB 2 entries were displayed. ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 25.22GB 64GB 60.80GB 35.58GB 58% 60.00GB 63% 4.00GB 95.58GB 157% - 64.34GB 0B 0% svm vol2_restored_se_on 64GB 44.44GB 64GB 64GB 19.56GB 30% 589.1MB 3% 20KB 20.13GB 31% - 20.13GB 0B 0% 2 entries were displayed. --- ::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 06:18:26 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored_se_on Enabled 0 KB Searched Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 10% 338.0MB 35.10GB 2 entries were displayed. 
::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 25.22GB 64GB 60.80GB 35.58GB 58% 60.00GB 63% 4.00GB 95.58GB 157% - 64.34GB 0B 0% svm vol2_restored_se_on 64GB 29.29GB 64GB 64GB 34.71GB 54% 589.1MB 2% 20KB 35.29GB 55% - 35.29GB 0B 0% 2 entries were displayed. --- ::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 06:19:44 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored_se_on Enabled 0 KB Searched Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 33% 492.5MB 50.84GB 2 entries were displayed. ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 25.22GB 64GB 60.80GB 35.58GB 58% 60.00GB 63% 4.00GB 95.58GB 157% - 64.34GB 0B 0% svm vol2_restored_se_on 64GB 13.55GB 64GB 64GB 50.45GB 78% 589.1MB 1% 20KB 51.03GB 80% - 51.03GB 0B 0% 2 entries were displayed. 
--- ::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 06:21:14 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored_se_on Enabled 0 KB (0%) Done Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 49% 634.2MB 64.90GB 2 entries were displayed. ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 25.22GB 64GB 60.80GB 35.58GB 58% 60.00GB 63% 4.00GB 95.58GB 157% - 64.34GB 0B 0% svm vol2_restored_se_on 71.90GB 7.58GB 71.90GB 71.90GB 64.32GB 89% 589.1MB 1% 20KB 64.90GB 90% - 64.90GB 0B 0% 2 entries were displayed.
From the looks of it, the data savings from deduplication are not preserved at all.
After waiting a little longer, the volume's lifecycle state changed to Created.
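If you want to poll for this state change from the CLI instead of the console, something like the following should do (the volume ID is a placeholder):

$ aws fsx describe-volumes \
    --volume-ids fsvol-xxxxxxxxxxxxxxxxx \
    --query 'Volumes[0].Lifecycle' \
    --output text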
The volume's usage at this point is also far higher than when the backup was taken. That is a problem.
::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-endvserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 06:23:07 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored_se_on Enabled 198960 KB (0%) Done Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 49% 634.2MB 64.90GB 2 entries were displayed. ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 25.22GB 64GB 60.80GB 35.58GB 58% 60.00GB 63% 4.00GB 95.58GB 157% - 64.34GB 0B 0% svm vol2_restored_se_on 71.90GB 7.57GB 71.90GB 71.90GB 64.32GB 89% 786.6MB 1% 28KB 65.09GB 91% - 64.90GB 0B 0% 2 entries were displayed.
Let's keep watching.
Then, presumably because Storage Efficiency is on, the amount of deduplicated data gradually increased.
::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-endvserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 06:30:41 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored_se_on Enabled 1039188 KB (4%) Done Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 49% 634.2MB 64.90GB 2 entries were displayed. ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 25.22GB 64GB 60.80GB 35.58GB 58% 60.00GB 63% 4.00GB 95.58GB 157% - 64.34GB 0B 0% svm vol2_restored_se_on 71.90GB 7.57GB 71.90GB 71.90GB 64.33GB 89% 1.57GB 2% 52KB 65.90GB 92% - 64.90GB 0B 0% 2 entries were displayed.
So I take the design intent of restoring from FSx backups to be "run deduplication on the restored volume yourself."
I also checked the administrative-activity audit log for the restore, but there was nothing particularly useful around deduplication.
::*> security audit log show -fields timestamp, node, application, vserver, username, input, state, message -state Error|Success -timestamp >"Wed Jan 10 01:00:00 2024" timestamp node application vserver username input state message -------------------------- ------------------------- ----------- ---------------------- ----------------- ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- ------- ------- "Wed Jan 10 01:00:10 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/storage/volumes/?return_records=true : {"comment":"FSx.tmp.fsvol-0930084c85460e6d5.79ddbfb8-864c-4233-9e9b-52f473d65665","language":"c.utf_8","name":"vol2_restored_se_on","size":68719476736,"tiering":{"policy":"NONE"},"type":"dp","aggregates":[{"name":"aggr1","uuid":"09e00157-ab55-11ee-b1b8-195a72820387"}],"svm":{"name":"svm","uuid":"9934e544-ab55-11ee-b1b8-195a72820387"}} Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane PATCH /api/storage/volumes/9a9a34c5-af53-11ee-b1b8-195a72820387 : {"comment":""} Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane set -privilege diagnostic Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane system node run -node FsxId0f1302327a12b6488-01 -command wafl obj_cache flush Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane Logging out Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0f1302327a12b6488-01 -command wafl obj_cache flush"} Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane set -privilege diagnostic Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane system node run -node FsxId0f1302327a12b6488-02 -command wafl obj_cache flush Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane Logging out Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; system node run -node FsxId0f1302327a12b6488-02 -command wafl obj_cache flush"} Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"} Success - "Wed Jan 10 01:00:21 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/snapmirror/relationships/?return_records=true : {"destination":{"path":"svm:vol2_restored_se_on"},"restore":true,"source":{"path":"amazon-fsx-ontap-backup-ap-northeast-3-d78270f6-b557c480:/objstore/0cc00000-0059-77e6-0000-00000008c487_rst","uuid":"0cc00000-0059-77e6-0000-00000008c487"}} Success - "Wed Jan 10 01:00:22 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST 
/api/snapmirror/relationships : uuid=a1605c4f-af53-11ee-b1b8-195a72820387 isv_name="AWS FSx" Success - "Wed Jan 10 01:00:22 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/cluster/licensing/access_tokens/ : {"client_secret":***,"grant_type":"client_credentials","client_id":"clientId"} Success - "Wed Jan 10 01:00:22 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/snapmirror/relationships/a1605c4f-af53-11ee-b1b8-195a72820387/transfers : isv_name="AWS FSx" Success - "Wed Jan 10 01:00:22 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/snapmirror/relationships/a1605c4f-af53-11ee-b1b8-195a72820387/transfers?return_records=true : {"source_snapshot":"backup-0c200f9854188e74d"} Success - "Wed Jan 10 01:00:24 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 admin GET /api/private/cli/storage/failover?fields=node,possible,reason Success - "Wed Jan 10 01:00:25 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 admin GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid Success - "Wed Jan 10 01:05:25 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 admin GET /api/private/cli/storage/failover?fields=node,possible,reason Success - "Wed Jan 10 01:05:27 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 admin GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid Success - "Wed Jan 10 01:05:52 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane set -privilege diagnostic Success - "Wed Jan 10 01:05:52 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane volume efficiency inactive-data-compression stop -volume vol2_restored_se_on -vserver svm Success - "Wed Jan 10 01:05:52 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane Logging out Success - "Wed Jan 10 01:05:52 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; volume efficiency inactive-data-compression stop -volume vol2_restored_se_on -vserver svm"} Success - "Wed Jan 10 01:05:52 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane set -privilege diagnostic Success - "Wed Jan 10 01:05:52 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane volume efficiency inactive-data-compression modify -volume vol2_restored_se_on -vserver svm -is-enabled false Success - "Wed Jan 10 01:05:52 2024" FsxId0f1302327a12b6488-01 ssh FsxId0f1302327a12b6488 fsx-control-plane Logging out Success - "Wed Jan 10 01:05:52 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane POST /api/private/cli : {"input":"set -privilege diagnostic ; volume efficiency inactive-data-compression modify -volume vol2_restored_se_on -vserver svm -is-enabled false"} Success - "Wed Jan 10 01:06:03 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane PATCH /api/storage/volumes/9a9a34c5-af53-11ee-b1b8-195a72820387 : {"tiering":{"policy":"NONE"},"nas":{"path":"/vol2_restored_se_on","security_style":"unix"},"snapshot_policy":{"name":"none"}} Success - "Wed Jan 10 01:08:41 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 fsx-control-plane GET /api/private/cli/vserver/cifs/check/?fields=status%2Cstatus_details Success - "Wed Jan 10 01:15:24 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 admin GET 
/api/private/cli/storage/failover?fields=node,possible,reason Success - "Wed Jan 10 01:15:24 2024" FsxId0f1302327a12b6488-01 http FsxId0f1302327a12b6488 admin GET /api/private/cli/storage/aggregate?fields=raidstatus%2Ccomposite%2Croot%2Cuuid Success - 32 entries were displayed.
Restoring with Storage Efficiency off
Let's also try the pattern of restoring with Storage Efficiency turned off.
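In terms of the CLI sketch shown earlier, this pattern simply flips the flag, roughly:

$ aws fsx create-volume-from-backup \
    --backup-id backup-0c200f9854188e74d \
    --name vol2_restored_se_off \
    --ontap-configuration 'JunctionPath=/vol2_restored_se_off,StorageVirtualMachineId=svm-0a7e0e36f5d9aebb9,StorageEfficiencyEnabled=false,TieringPolicy={Name=NONE}'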
After the restored volume's lifecycle state changed to Created, the Storage Efficiency and volume state was as follows.
::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 06:55:01 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored_se_off Disabled Idle for 00:04:28 Wed Jan 10 01:31:21 2024 Wed Jan 10 01:35:12 2024 0B 51% 633.4MB 64.89GB svm vol2_restored_se_on Enabled 3616852 KB (14%) Done Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 49% 634.3MB 64.91GB 3 entries were displayed. ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------- ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 25.22GB 64GB 60.80GB 35.58GB 58% 60.00GB 63% 4.00GB 95.58GB 157% - 64.34GB 0B 0% svm vol2_restored_se_off 71.70GB 7.47GB 71.70GB 71.70GB 64.23GB 89% 718.4MB 1% 28KB 64.93GB 91% - 64.89GB 0B 0% svm vol2_restored_se_on 71.90GB 7.55GB 71.90GB 71.90GB 64.34GB 89% 4.02GB 6% 132KB 68.37GB 95% - 64.91GB 0B 0% 3 entries were displayed.
As expected, almost no deduplication is in effect at the moment of the restore.
Restoring from a backup of a fully deduplicated volume
Next, let's restore from a backup of a volume on which deduplication has fully completed.
When I took the backup, the snapshot created for the previous backup was deleted and the volume's physical usage dropped by about 30GB.
::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 07:14:44 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored_se_off Disabled Idle for 00:24:11 Wed Jan 10 01:31:21 2024 Wed Jan 10 01:35:12 2024 0B 51% 633.4MB 64.89GB svm vol2_restored_se_on Enabled 5833440 KB (24%) Done Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 49% 634.3MB 64.91GB 3 entries were displayed. ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 56.46GB 64GB 60.80GB 4.34GB 7% 60.00GB 93% 4.00GB 64.34GB 106% - 64.34GB 0B 0% svm vol2_restored_se_off 71.70GB 7.47GB 71.70GB 71.70GB 64.23GB 89% 718.4MB 1% 28KB 64.93GB 91% - 64.89GB 0B 0% svm vol2_restored_se_on 71.90GB 7.54GB 71.90GB 71.90GB 64.36GB 89% 6.14GB 9% 200KB 70.50GB 98% - 64.91GB 0B 0% 3 entries were displayed. ::*> snapshot show -volume vol2 ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol2 backup-0508c07ac0cda7f20 160KB 0% 0%
Restore from the backup just taken.
Check the volume while the restore is in progress.
::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 07:19:11 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored2_se_on Enabled Idle for 00:00:30 Wed Jan 10 02:03:20 2024 Wed Jan 10 02:03:20 2024 0B 7% 49.69MB 6.09GB svm vol2_restored_se_off Disabled Idle for 00:28:38 Wed Jan 10 01:31:21 2024 Wed Jan 10 01:35:12 2024 0B 51% 633.4MB 64.89GB svm vol2_restored_se_on Enabled 6318324 KB (26%) Done Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 49% 634.3MB 64.91GB 4 entries were displayed. ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 56.46GB 64GB 60.80GB 4.34GB 7% 60.00GB 93% 4.00GB 64.34GB 106% - 64.34GB - - svm vol2_restored2_se_on 64GB 58.46GB 64GB 64GB 5.54GB 8% 829.6MB 13% 1008KB 6.35GB 10% - 6.35GB - - svm vol2_restored_se_off 71.70GB 7.47GB 71.70GB 71.70GB 64.23GB 89% 718.4MB 1% 28KB 64.93GB 91% - 64.89GB - - svm vol2_restored_se_on 71.90GB 7.53GB 71.90GB 71.90GB 64.36GB 89% 6.60GB 9% 216KB 70.96GB 99% - 64.91GB - - 4 entries were displayed. --- ::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-endvserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 07:23:59 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored2_se_on Enabled 14028 KB (0%) Done Wed Jan 10 02:03:20 2024 Wed Jan 10 02:03:20 2024 0B 50% 631.6MB 64.88GB svm vol2_restored_se_off Disabled Idle for 00:33:26 Wed Jan 10 01:31:21 2024 Wed Jan 10 01:35:12 2024 0B 51% 633.4MB 64.89GB svm vol2_restored_se_on Enabled 6719308 KB (27%) Done Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 49% 634.3MB 64.92GB 4 entries were displayed. 
::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available, type vserver volume size available filesystem-size total used percent-used type dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ---- ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 56.46GB 64GB 60.80GB 4.34GB 7% RW 60.00GB 93% 4.00GB 64.34GB 106% - 64.34GB 0B 0% svm vol2_restored2_se_on 71.38GB 7.33GB 71.38GB 71.38GB 64.05GB 89% DP 849.3MB 1% 1012KB 64.88GB 91% - 64.88GB 0B 0% svm vol2_restored_se_off 71.70GB 7.47GB 71.70GB 71.70GB 64.23GB 89% RW 718.4MB 1% 28KB 64.93GB 91% - 64.89GB 0B 0% svm vol2_restored_se_on 71.90GB 7.53GB 71.90GB 71.90GB 64.37GB 89% RW 6.99GB 10% 228KB 71.35GB 99% - 64.92GB 0B 0% 4 entries were displayed. ::*>
At the point where roughly 65GB has been restored, only 849.3MB has been deduplicated.
So it really does not look like data is restored with deduplication applied.
After the restored volume's lifecycle state changed to Created, the Storage Efficiency and volume state was as follows.
::*> volume efficiency show -volume vol2* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size ------- ------ ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- ----------------- svm vol2 Enabled Idle for 07:26:15 Tue Jan 09 14:36:35 2024 Tue Jan 09 18:44:39 2024 38.25GB 0% 8KB 64.34GB svm vol2_restored2_se_on Enabled 245844 KB (1%) Done Wed Jan 10 02:03:20 2024 Wed Jan 10 02:03:20 2024 0B 50% 631.9MB 64.89GB svm vol2_restored_se_off Disabled Idle for 00:35:42 Wed Jan 10 01:31:21 2024 Wed Jan 10 01:35:12 2024 0B 51% 633.4MB 64.89GB svm vol2_restored_se_on Enabled 6952764 KB (28%) Done Wed Jan 10 01:00:28 2024 Wed Jan 10 01:00:28 2024 0B 49% 634.3MB 64.92GB 4 entries were displayed. ::*> volume show -volume vol2* -fields available, filesystem-size, total, used, percent-used, performance-tier-inactive-user-data, performance-tier-inactive-user-data-percent, size, dedupe-space-saved, dedupe-space-saved-percent, dedupe-space-shared,logical-used, logical-used-percent,logical-used-by-afs, logical-available vserver volume size available filesystem-size total used percent-used dedupe-space-saved dedupe-space-saved-percent dedupe-space-shared logical-used logical-used-percent logical-available logical-used-by-afs performance-tier-inactive-user-data performance-tier-inactive-user-data-percent ------- ------ ---- --------- --------------- ------- ------ ------------ ------------------ -------------------------- ------------------- ------------ -------------------- ----------------- ------------------- ----------------------------------- ------------------------------------------- svm vol2 64GB 56.46GB 64GB 60.80GB 4.34GB 7% 60.00GB 93% 4.00GB 64.34GB 106% - 64.34GB 0B 0% svm vol2_restored2_se_on 71.38GB 7.37GB 71.38GB 71.38GB 64.01GB 89% 1.04GB 2% 1016KB 65.06GB 91% - 64.89GB 0B 0% svm vol2_restored_se_off 71.70GB 7.47GB 71.70GB 71.70GB 64.23GB 89% 718.4MB 1% 28KB 64.93GB 91% - 64.89GB 0B 0% svm vol2_restored_se_on 71.90GB 7.53GB 71.90GB 71.90GB 64.37GB 89% 7.20GB 10% 236KB 71.57GB 100% - 64.92GB 0B 0% 4 entries were displayed. ::*> snapshot show -volume vol2* ---Blocks--- Vserver Volume Snapshot Size Total% Used% -------- -------- ------------------------------------- -------- ------ ----- svm vol2 backup-0508c07ac0cda7f20 160KB 0% 0% vol2_restored2_se_on backup-0508c07ac0cda7f20 174.9MB 0% 0% vol2_restored_se_off backup-0c200f9854188e74d 34.38MB 0% 0% vol2_restored_se_on backup-0c200f9854188e74d 6.67GB 9% 10% 4 entries were displayed.
The deduplication savings are essentially unchanged.
Volumes are restored with almost no deduplication applied
I checked whether deduplication is preserved when restoring from an Amazon FSx for NetApp ONTAP backup.
Unfortunately, it turns out that volumes are restored with almost no deduplication applied.
I would really like to see this improved.
A restore increases the volume's physical usage and puts pressure on the SSD tier.
You also cannot throttle the bandwidth of a restore from an FSx backup, so if you restore a backup of a heavily deduplicated volume while SSD free space is low, running out of SSD capacity is unavoidable.
I also do not think the "run deduplication on the restored volume yourself" approach to FSx backup restores is a great one.
The reasons are as follows.
- Running Storage Efficiency consumes FSxN file system resources such as CPU and disk IOPS, so it affects performance
- Because an FSxN file system runs at most eight Storage Efficiency operations concurrently, restoring a large number of volumes takes a long time
- When restoring with the tiering policy All, data is tiered to capacity pool storage before deduplication has fully run
Being able to restore with deduplication intact would be ideal, so I hope it gets fixed that way.
For now, if you want to back up and restore with deduplication applied, you will need to use SnapMirror.
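As a rough idea of what the SnapMirror route looks like: replicate the volume to another FSxN file system or SVM so the data does not have to be rehydrated from backup storage. A minimal sketch under the assumption that cluster and SVM peering are already set up; the destination names and the policy are placeholders.

dst::> volume create -vserver svm-dst -volume vol2_dst -aggregate aggr1 -size 64GB -type DP
dst::> snapmirror create -source-path svm:vol2 -destination-path svm-dst:vol2_dst -policy MirrorAllSnapshots
dst::> snapmirror initialize -destination-path svm-dst:vol2_dst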
I hope this article helps someone.
That's all from のんピ (@non____97) of the Consulting Department, AWS Business Division!