[Amazon FSx for NetApp ONTAP] Watch out: only up to eight Storage Efficiency operations can run concurrently per node
I want to run Storage Efficiency on multiple volumes in one go
Hello, this is のんピ (@non____97).
Have you ever wanted to run Storage Efficiency on a bunch of volumes in one go? I have.
The official NetApp documentation states that ONTAP can run a maximum of eight concurrent deduplication or data compression operations per node.
You can run a maximum of eight concurrent deduplication or data compression operations per node. If any more efficiency operations are scheduled, the operations are queued.
What about Amazon FSx for NetApp ONTAP (FSxN from here on)?
As described in the article below, an FSxN file system runs two nodes. It would be great if each node handled eight operations of its own.
I verified the actual behavior.
Summary up front
- Storage Efficiency can run on at most eight volumes concurrently per file system
- Any start requests beyond that are queued and wait their turn
- Inactive data compression can run on at most one volume at a time per file system
- Nothing is queued; start requests for the second and subsequent volumes simply fail
Trying it out
Preparing the test environment
First, let's set up the test environment.
I use twenty 8 GiB volumes. vol1 was created together with the FSxN file system, so the commands below create vol2 through vol20.
::> volume create -volume vol2 -state online -junction-path /vol2 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol3 -state online -junction-path /vol3 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol4 -state online -junction-path /vol4 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol5 -state online -junction-path /vol5 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol6 -state online -junction-path /vol6 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol7 -state online -junction-path /vol7 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol8 -state online -junction-path /vol8 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol9 -state online -junction-path /vol9 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol10 -state online -junction-path /vol10 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol11 -state online -junction-path /vol11 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol12 -state online -junction-path /vol12 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol13 -state online -junction-path /vol13 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol14 -state online -junction-path /vol14 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol15 -state online -junction-path /vol15 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol16 -state online -junction-path /vol16 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol17 -state online -junction-path /vol17 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol18 -state online -junction-path /vol18 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol19 -state online -junction-path /vol19 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
::> volume create -volume vol20 -state online -junction-path /vol20 -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1
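Typing 19 nearly identical commands is tedious, so as a minimal sketch you could also generate them from a Linux host over SSH. Here fsxadmin is the FSxN admin user, and <management-endpoint> is a placeholder for your file system's management endpoint host name.

# A sketch: create vol2 through vol20 in a loop over SSH
# (replace <management-endpoint> with your file system's management DNS name)
for i in $(seq 2 20); do
  ssh fsxadmin@<management-endpoint> \
    "volume create -volume vol${i} -state online -junction-path /vol${i} -size 8GB -tiering-policy none -snapshot-policy none -aggregate aggr1"
done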
Confirm that the volumes were created and that Storage Efficiency is enabled on them.
::> volume show
Vserver   Volume       Aggregate    State      Type       Size  Available Used%
--------- ------------ ------------ ---------- ---- ---------- ---------- -----
svm       svm_root     aggr1        online     RW          1GB    972.1MB    0%
svm       vol1         aggr1        online     RW          8GB     7.60GB    0%
svm       vol10        aggr1        online     RW          8GB     7.60GB    0%
svm       vol11        aggr1        online     RW          8GB     7.60GB    0%
svm       vol12        aggr1        online     RW          8GB     7.60GB    0%
svm       vol13        aggr1        online     RW          8GB     7.60GB    0%
svm       vol14        aggr1        online     RW          8GB     7.60GB    0%
svm       vol15        aggr1        online     RW          8GB     7.60GB    0%
svm       vol16        aggr1        online     RW          8GB     7.60GB    0%
svm       vol17        aggr1        online     RW          8GB     7.60GB    0%
svm       vol18        aggr1        online     RW          8GB     7.60GB    0%
svm       vol19        aggr1        online     RW          8GB     7.60GB    0%
svm       vol2         aggr1        online     RW          8GB     7.60GB    0%
svm       vol20        aggr1        online     RW          8GB     7.60GB    0%
svm       vol3         aggr1        online     RW          8GB     7.60GB    0%
svm       vol4         aggr1        online     RW          8GB     7.60GB    0%
svm       vol5         aggr1        online     RW          8GB     7.60GB    0%
svm       vol6         aggr1        online     RW          8GB     7.60GB    0%
svm       vol7         aggr1        online     RW          8GB     7.60GB    0%
svm       vol8         aggr1        online     RW          8GB     7.60GB    0%
svm       vol9         aggr1        online     RW          8GB     7.60GB    0%
21 entries were displayed.

::> volume efficiency show
Vserver    Volume           State     Status      Progress           Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm        vol1             Enabled   Idle        Idle for 00:54:39  auto
svm        vol10            Enabled   Idle        Idle for 00:01:43  auto
svm        vol11            Enabled   Idle        Idle for 00:01:38  auto
svm        vol12            Enabled   Idle        Idle for 00:01:32  auto
svm        vol13            Enabled   Idle        Idle for 00:01:27  auto
svm        vol14            Enabled   Idle        Idle for 00:01:22  auto
svm        vol15            Enabled   Idle        Idle for 00:01:17  auto
svm        vol16            Enabled   Idle        Idle for 00:01:12  auto
svm        vol17            Enabled   Idle        Idle for 00:01:07  auto
svm        vol18            Enabled   Idle        Idle for 00:01:01  auto
svm        vol19            Enabled   Idle        Idle for 00:00:56  auto
svm        vol2             Enabled   Idle        Idle for 00:04:05  auto
svm        vol20            Enabled   Idle        Idle for 00:00:19  auto
svm        vol3             Enabled   Idle        Idle for 00:02:18  auto
svm        vol4             Enabled   Idle        Idle for 00:02:13  auto
svm        vol5             Enabled   Idle        Idle for 00:02:08  auto
svm        vol6             Enabled   Idle        Idle for 00:02:03  auto
svm        vol7             Enabled   Idle        Idle for 00:01:58  auto
svm        vol8             Enabled   Idle        Idle for 00:01:53  auto
svm        vol9             Enabled   Idle        Idle for 00:01:48  auto
20 entries were displayed.

::> volume efficiency show -fields state, policy, storage-efficiency-mode
vserver volume state   policy storage-efficiency-mode
------- ------ ------- ------ -----------------------
svm     vol1   Enabled auto   efficient
svm     vol10  Enabled auto   efficient
svm     vol11  Enabled auto   efficient
svm     vol12  Enabled auto   efficient
svm     vol13  Enabled auto   efficient
svm     vol14  Enabled auto   efficient
svm     vol15  Enabled auto   efficient
svm     vol16  Enabled auto   efficient
svm     vol17  Enabled auto   efficient
svm     vol18  Enabled auto   efficient
svm     vol19  Enabled auto   efficient
svm     vol2   Enabled auto   efficient
svm     vol20  Enabled auto   efficient
svm     vol3   Enabled auto   efficient
svm     vol4   Enabled auto   efficient
svm     vol5   Enabled auto   efficient
svm     vol6   Enabled auto   efficient
svm     vol7   Enabled auto   efficient
svm     vol8   Enabled auto   efficient
svm     vol9   Enabled auto   efficient
20 entries were displayed.
Mount the created volumes from an NFS client.
$ for i in {1..20}; do
  # Create the mount point
  sudo mkdir -p /mnt/fsxn/vol${i}

  # Mount the volume
  sudo mount -t nfs svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol${i} /mnt/fsxn/vol${i}
done

# Check the mount points
$ ls -lv /mnt/fsxn/
total 80
drwxr-xr-x. 2 root root 4096 Dec 12 06:55 vol1
drwxr-xr-x. 2 root root 4096 Dec 12 07:46 vol2
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol3
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol4
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol5
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol6
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol7
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol8
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol9
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol10
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol11
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol12
drwxr-xr-x. 2 root root 4096 Dec 12 07:48 vol13
drwxr-xr-x. 2 root root 4096 Dec 12 07:49 vol14
drwxr-xr-x. 2 root root 4096 Dec 12 07:49 vol15
drwxr-xr-x. 2 root root 4096 Dec 12 07:49 vol16
drwxr-xr-x. 2 root root 4096 Dec 12 07:49 vol17
drwxr-xr-x. 2 root root 4096 Dec 12 07:49 vol18
drwxr-xr-x. 2 root root 4096 Dec 12 07:49 vol19
drwxr-xr-x. 2 root root 4096 Dec 12 07:50 vol20

# Confirm the volumes are mounted
$ df -hT -t nfs4
Filesystem                                                                     Type Size Used Avail Use% Mounted on
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol1  nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol1
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol2  nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol2
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol3  nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol3
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol4  nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol4
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol5  nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol5
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol6  nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol6
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol7  nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol7
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol8  nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol8
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol9  nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol9
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol10 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol10
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol11 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol11
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol12 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol12
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol13 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol13
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol14 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol14
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol15 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol15
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol16 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol16
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol17 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol17
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol18 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol18
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol19 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol19
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol20 nfs4 7.7G 384K 7.6G   1% /mnt/fsxn/vol20
Create a 4 GiB file on each volume. I wrote the files with eight jobs in parallel; writes that normally run at about 150 MBps dropped to roughly 20 MBps per file. Since the aggregate speed is almost the same whether serial or parallel, with a throughput capacity of 128 MBps the FSxN side appears to be the bottleneck.
$ seq 1 20 \
    | xargs -i \
        -P 8 \
        bash -c "sudo dd if=/dev/urandom of=/mnt/fsxn/vol{}/test_file_1 bs=1M count=4096"

$ df -hT -t nfs4
Filesystem                                                                     Type Size Used Avail Use% Mounted on
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol1  nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol1
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol2  nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol2
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol3  nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol3
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol4  nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol4
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol5  nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol5
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol6  nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol6
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol7  nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol7
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol8  nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol8
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol9  nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol9
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol10 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol10
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol11 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol11
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol12 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol12
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol13 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol13
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol14 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol14
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol15 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol15
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol16 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol16
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol17 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol17
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol18 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol18
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol19 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol19
svm-09126365bf30845f9.fs-0e124ba7bf90735da.fsx.us-east-1.amazonaws.com:/vol20 nfs4 7.7G 4.1G 3.6G  54% /mnt/fsxn/vol20
The Storage Efficiency case
First, check the state before running Storage Efficiency.
::> volume efficiency show -volume vol* -fields logical-data-size, state, progress
vserver volume state   progress          logical-data-size
------- ------ ------- ----------------- -----------------
svm     vol1   Enabled Idle for 00:09:36 4.07GB
svm     vol10  Enabled Idle for 00:02:54 4.08GB
svm     vol11  Enabled Idle for 00:02:46 4.08GB
svm     vol12  Enabled Idle for 00:02:50 4.08GB
svm     vol13  Enabled Idle for 00:02:42 4.08GB
svm     vol14  Enabled Idle for 00:02:39 4.08GB
svm     vol15  Enabled Idle for 00:02:35 4.08GB
svm     vol16  Enabled Idle for 00:02:27 4.08GB
svm     vol17  Enabled Idle for 00:02:31 4.08GB
svm     vol18  Enabled Idle for 00:02:23 4.08GB
svm     vol19  Enabled Idle for 00:02:18 4.08GB
svm     vol2   Enabled Idle for 00:03:09 4.08GB
svm     vol20  Enabled Idle for 00:02:14 4.08GB
svm     vol3   Enabled Idle for 00:03:05 4.08GB
svm     vol4   Enabled Idle for 00:03:13 4.08GB
svm     vol5   Enabled Idle for 00:03:17 4.08GB
svm     vol6   Enabled Idle for 00:06:14 4.08GB
svm     vol7   Enabled Idle for 00:03:02 4.09GB
svm     vol8   Enabled Idle for 00:03:20 4.08GB
svm     vol9   Enabled Idle for 00:02:58 4.08GB
20 entries were displayed.
Now run Storage Efficiency against all of the prepared volumes.
::> volume efficiency start -volume vol* -scan-old-data

Warning: This operation scans all of the data in volume "vol1" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol1" of Vserver "svm" has started.

Warning: This operation scans all of the data in volume "vol10" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol10" of Vserver "svm" has started.

Warning: This operation scans all of the data in volume "vol11" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol11" of Vserver "svm" has started.

Warning: This operation scans all of the data in volume "vol12" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol12" of Vserver "svm" has started.

Warning: This operation scans all of the data in volume "vol13" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol13" of Vserver "svm" has started.

Warning: This operation scans all of the data in volume "vol14" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol14" of Vserver "svm" has started.

Warning: This operation scans all of the data in volume "vol15" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol15" of Vserver "svm" has started.

Warning: This operation scans all of the data in volume "vol16" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol16" of Vserver "svm" has started.

Warning: This operation scans all of the data in volume "vol17" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol17" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol18" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol18" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol19" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol19" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol2" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol2" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol20" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol20" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol3" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol3" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol4" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol4" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol5" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol5" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol6" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol6" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol7" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol7" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol8" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol8" of Vserver "svm" is queued.

Warning: This operation scans all of the data in volume "vol9" of Vserver "svm". It might take a significant time, and degrade performance during that time.
Do you want to continue? {y|n}: y
The efficiency operation for volume "vol9" of Vserver "svm" is queued.

20 entries were acted on.
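Incidentally, answering y twenty times is tedious. ONTAP's clustershell can suppress confirmation prompts for the session with set -confirmations off, so a minimal sketch of a non-interactive run over SSH might look like this (the host name is a placeholder):

# A sketch: disable confirmation prompts for the session, then start
# Storage Efficiency on all volumes without answering y repeatedly
ssh fsxadmin@<management-endpoint> \
  "set -confirmations off; volume efficiency start -volume vol* -scan-old-data"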
You can see that from the ninth volume onward, the start requests are queued.
Check each volume's Storage Efficiency status after kicking them off.
::> volume efficiency show -volume vol* -fields logical-data-size, state, progress
vserver volume state   progress          logical-data-size
------- ------ ------- ----------------- -----------------
svm     vol1   Enabled 851968 KB Scanned 4.02GB
svm     vol10  Enabled 399360 KB Scanned 4.03GB
svm     vol11  Enabled 0 KB Scanned      4.03GB
svm     vol12  Enabled 0 KB Scanned      4.03GB
svm     vol13  Enabled 0 KB Scanned      4.03GB
svm     vol14  Enabled 0 KB Scanned      4.03GB
svm     vol15  Enabled 0 KB Scanned      4.03GB
svm     vol16  Enabled 0 KB Scanned      4.03GB
svm     vol17  Enabled Idle for 00:03:56 4.08GB
svm     vol18  Enabled Idle for 00:03:48 4.08GB
svm     vol19  Enabled Idle for 00:03:43 4.08GB
svm     vol2   Enabled Idle for 00:04:34 4.08GB
svm     vol20  Enabled Idle for 00:03:39 4.08GB
svm     vol3   Enabled Idle for 00:04:30 4.08GB
svm     vol4   Enabled Idle for 00:04:38 4.08GB
svm     vol5   Enabled Idle for 00:04:43 4.08GB
svm     vol6   Enabled Idle for 00:07:40 4.08GB
svm     vol7   Enabled Idle for 00:04:28 4.09GB
svm     vol8   Enabled Idle for 00:04:46 4.08GB
svm     vol9   Enabled Idle for 00:04:24 4.08GB
20 entries were displayed.

::> volume efficiency show -volume vol* -fields logical-data-size, state, progress
vserver volume state   progress          logical-data-size
------- ------ ------- ----------------- -----------------
svm     vol1   Enabled 851968 KB Scanned 4.02GB
svm     vol10  Enabled 425984 KB Scanned 4.03GB
svm     vol11  Enabled 425984 KB Scanned 4.03GB
svm     vol12  Enabled 425984 KB Scanned 4.03GB
svm     vol13  Enabled 425984 KB Scanned 4.03GB
svm     vol14  Enabled 425984 KB Scanned 4.03GB
svm     vol15  Enabled 425984 KB Scanned 4.03GB
svm     vol16  Enabled 425984 KB Scanned 4.03GB
svm     vol17  Enabled Idle for 00:04:01 4.08GB
svm     vol18  Enabled Idle for 00:03:53 4.08GB
svm     vol19  Enabled Idle for 00:03:48 4.08GB
svm     vol2   Enabled Idle for 00:04:39 4.08GB
svm     vol20  Enabled Idle for 00:03:44 4.08GB
svm     vol3   Enabled Idle for 00:04:35 4.08GB
svm     vol4   Enabled Idle for 00:04:43 4.08GB
svm     vol5   Enabled Idle for 00:04:47 4.08GB
svm     vol6   Enabled Idle for 00:07:44 4.08GB
svm     vol7   Enabled Idle for 00:04:32 4.09GB
svm     vol8   Enabled Idle for 00:04:50 4.08GB
svm     vol9   Enabled Idle for 00:04:28 4.08GB
20 entries were displayed.
Only eight are actually running at a time.
Let's watch a little longer.
::> volume efficiency show -volume vol* -fields logical-data-size, state, progress, last-op-begin, last-op-end
vserver volume state   progress           last-op-begin            last-op-end              logical-data-size
------- ------ ------- ------------------ ------------------------ ------------------------ -----------------
svm     vol1   Enabled Idle for 00:00:19  Tue Dec 12 08:24:07 2023 Tue Dec 12 08:26:21 2023 4.04GB
svm     vol10  Enabled Idle for 00:00:17  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:23 2023 4.05GB
svm     vol11  Enabled Idle for 00:00:17  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:23 2023 4.05GB
svm     vol12  Enabled Idle for 00:00:11  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:29 2023 4.05GB
svm     vol13  Enabled Idle for 00:00:17  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:23 2023 4.06GB
svm     vol14  Enabled Idle for 00:00:11  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:29 2023 4.05GB
svm     vol15  Enabled Idle for 00:00:16  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:24 2023 4.05GB
svm     vol16  Enabled Idle for 00:00:08  Tue Dec 12 08:24:10 2023 Tue Dec 12 08:26:32 2023 4.05GB
svm     vol17  Enabled 1277952 KB Scanned Tue Dec 12 08:20:19 2023 Tue Dec 12 08:20:23 2023 4.03GB
svm     vol18  Enabled 425984 KB Scanned  Tue Dec 12 08:20:27 2023 Tue Dec 12 08:20:31 2023 4.03GB
svm     vol19  Enabled 425984 KB Scanned  Tue Dec 12 08:20:31 2023 Tue Dec 12 08:20:36 2023 4.03GB
svm     vol2   Enabled 425984 KB Scanned  Tue Dec 12 08:19:41 2023 Tue Dec 12 08:19:45 2023 4.03GB
svm     vol20  Enabled 425984 KB Scanned  Tue Dec 12 08:20:36 2023 Tue Dec 12 08:20:40 2023 4.03GB
svm     vol3   Enabled 0 KB Scanned       Tue Dec 12 08:19:45 2023 Tue Dec 12 08:19:49 2023 4.03GB
svm     vol4   Enabled 0 KB Scanned       Tue Dec 12 08:19:37 2023 Tue Dec 12 08:19:41 2023 4.03GB
svm     vol5   Enabled 0 KB Scanned       Tue Dec 12 08:19:34 2023 Tue Dec 12 08:19:37 2023 4.03GB
svm     vol6   Enabled Idle for 00:10:00  Tue Dec 12 08:13:18 2023 Tue Dec 12 08:16:40 2023 4.08GB
svm     vol7   Enabled Idle for 00:06:48  Tue Dec 12 08:19:49 2023 Tue Dec 12 08:19:52 2023 4.09GB
svm     vol8   Enabled Idle for 00:07:06  Tue Dec 12 08:16:40 2023 Tue Dec 12 08:19:34 2023 4.08GB
svm     vol9   Enabled Idle for 00:06:44  Tue Dec 12 08:19:52 2023 Tue Dec 12 08:19:56 2023 4.08GB
20 entries were displayed.

::> volume efficiency show -volume vol* -fields logical-data-size, state, progress, last-op-begin, last-op-end
vserver volume state   progress           last-op-begin            last-op-end              logical-data-size
------- ------ ------- ------------------ ------------------------ ------------------------ -----------------
svm     vol1   Enabled Idle for 00:00:42  Tue Dec 12 08:24:07 2023 Tue Dec 12 08:26:21 2023 4.04GB
svm     vol10  Enabled Idle for 00:00:40  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:23 2023 4.05GB
svm     vol11  Enabled Idle for 00:00:40  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:23 2023 4.05GB
svm     vol12  Enabled Idle for 00:00:34  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:29 2023 4.05GB
svm     vol13  Enabled Idle for 00:00:40  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:23 2023 4.06GB
svm     vol14  Enabled Idle for 00:00:34  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:29 2023 4.05GB
svm     vol15  Enabled Idle for 00:00:39  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:24 2023 4.05GB
svm     vol16  Enabled Idle for 00:00:31  Tue Dec 12 08:24:10 2023 Tue Dec 12 08:26:32 2023 4.05GB
svm     vol17  Enabled 1730560 KB Scanned Tue Dec 12 08:20:19 2023 Tue Dec 12 08:20:23 2023 4.03GB
svm     vol18  Enabled 1277952 KB Scanned Tue Dec 12 08:20:27 2023 Tue Dec 12 08:20:31 2023 4.03GB
svm     vol19  Enabled 1277952 KB Scanned Tue Dec 12 08:20:31 2023 Tue Dec 12 08:20:36 2023 4.03GB
svm     vol2   Enabled 1277952 KB Scanned Tue Dec 12 08:19:41 2023 Tue Dec 12 08:19:45 2023 4.03GB
svm     vol20  Enabled 1277952 KB Scanned Tue Dec 12 08:20:36 2023 Tue Dec 12 08:20:40 2023 4.03GB
svm     vol3   Enabled 851968 KB Scanned  Tue Dec 12 08:19:45 2023 Tue Dec 12 08:19:49 2023 4.03GB
svm     vol4   Enabled 851968 KB Scanned  Tue Dec 12 08:19:37 2023 Tue Dec 12 08:19:41 2023 4.03GB
svm     vol5   Enabled 851968 KB Scanned  Tue Dec 12 08:19:34 2023 Tue Dec 12 08:19:37 2023 4.03GB
svm     vol6   Enabled Idle for 00:10:23  Tue Dec 12 08:13:18 2023 Tue Dec 12 08:16:40 2023 4.08GB
svm     vol7   Enabled Idle for 00:07:11  Tue Dec 12 08:19:49 2023 Tue Dec 12 08:19:52 2023 4.09GB
svm     vol8   Enabled Idle for 00:07:29  Tue Dec 12 08:16:40 2023 Tue Dec 12 08:19:34 2023 4.08GB
svm     vol9   Enabled Idle for 00:07:07  Tue Dec 12 08:19:52 2023 Tue Dec 12 08:19:56 2023 4.08GB
20 entries were displayed.
Once those eight volumes' Storage Efficiency runs completed, the next eight volumes started running.
Let's wait a bit more.
::> volume efficiency show -volume vol* -fields logical-data-size, state, progress, last-op-begin, last-op-end
vserver volume state   progress           last-op-begin            last-op-end              logical-data-size
------- ------ ------- ------------------ ------------------------ ------------------------ -----------------
svm     vol1   Enabled Idle for 00:02:46  Tue Dec 12 08:24:07 2023 Tue Dec 12 08:26:21 2023 4.04GB
svm     vol10  Enabled Idle for 00:02:44  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:23 2023 4.05GB
svm     vol11  Enabled Idle for 00:02:44  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:23 2023 4.05GB
svm     vol12  Enabled Idle for 00:02:38  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:29 2023 4.05GB
svm     vol13  Enabled Idle for 00:02:44  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:23 2023 4.06GB
svm     vol14  Enabled Idle for 00:02:38  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:29 2023 4.05GB
svm     vol15  Enabled Idle for 00:02:43  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:24 2023 4.05GB
svm     vol16  Enabled Idle for 00:02:35  Tue Dec 12 08:24:10 2023 Tue Dec 12 08:26:32 2023 4.05GB
svm     vol17  Enabled Idle for 00:00:35  Tue Dec 12 08:26:21 2023 Tue Dec 12 08:28:32 2023 4.05GB
svm     vol18  Enabled Idle for 00:00:22  Tue Dec 12 08:26:23 2023 Tue Dec 12 08:28:45 2023 4.05GB
svm     vol19  Enabled Idle for 00:00:30  Tue Dec 12 08:26:23 2023 Tue Dec 12 08:28:37 2023 4.05GB
svm     vol2   Enabled Idle for 00:00:22  Tue Dec 12 08:26:23 2023 Tue Dec 12 08:28:45 2023 4.05GB
svm     vol20  Enabled Idle for 00:00:19  Tue Dec 12 08:26:24 2023 Tue Dec 12 08:28:48 2023 4.05GB
svm     vol3   Enabled Idle for 00:00:15  Tue Dec 12 08:26:29 2023 Tue Dec 12 08:28:52 2023 4.06GB
svm     vol4   Enabled Idle for 00:00:12  Tue Dec 12 08:26:29 2023 Tue Dec 12 08:28:55 2023 4.05GB
svm     vol5   Enabled Idle for 00:00:12  Tue Dec 12 08:26:32 2023 Tue Dec 12 08:28:55 2023 4.06GB
svm     vol6   Enabled 3407872 KB Scanned Tue Dec 12 08:13:18 2023 Tue Dec 12 08:16:40 2023 4.03GB
svm     vol7   Enabled 2129920 KB Scanned Tue Dec 12 08:19:49 2023 Tue Dec 12 08:19:52 2023 4.04GB
svm     vol8   Enabled 1277952 KB Scanned Tue Dec 12 08:16:40 2023 Tue Dec 12 08:19:34 2023 4.03GB
svm     vol9   Enabled 1277952 KB Scanned Tue Dec 12 08:19:52 2023 Tue Dec 12 08:19:56 2023 4.03GB
20 entries were displayed.

::> volume efficiency show -volume vol* -fields logical-data-size, state, progress, last-op-begin, last-op-end
vserver volume state   progress           last-op-begin            last-op-end              logical-data-size
------- ------ ------- ------------------ ------------------------ ------------------------ -----------------
svm     vol1   Enabled Idle for 00:03:27  Tue Dec 12 08:24:07 2023 Tue Dec 12 08:26:21 2023 4.04GB
svm     vol10  Enabled Idle for 00:03:25  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:23 2023 4.05GB
svm     vol11  Enabled Idle for 00:03:25  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:23 2023 4.05GB
svm     vol12  Enabled Idle for 00:03:19  Tue Dec 12 08:24:08 2023 Tue Dec 12 08:26:29 2023 4.05GB
svm     vol13  Enabled Idle for 00:03:25  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:23 2023 4.06GB
svm     vol14  Enabled Idle for 00:03:19  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:29 2023 4.05GB
svm     vol15  Enabled Idle for 00:03:24  Tue Dec 12 08:24:09 2023 Tue Dec 12 08:26:24 2023 4.05GB
svm     vol16  Enabled Idle for 00:03:16  Tue Dec 12 08:24:10 2023 Tue Dec 12 08:26:32 2023 4.05GB
svm     vol17  Enabled Idle for 00:01:16  Tue Dec 12 08:26:21 2023 Tue Dec 12 08:28:32 2023 4.05GB
svm     vol18  Enabled Idle for 00:01:03  Tue Dec 12 08:26:23 2023 Tue Dec 12 08:28:45 2023 4.05GB
svm     vol19  Enabled Idle for 00:01:11  Tue Dec 12 08:26:23 2023 Tue Dec 12 08:28:37 2023 4.05GB
svm     vol2   Enabled Idle for 00:01:03  Tue Dec 12 08:26:23 2023 Tue Dec 12 08:28:45 2023 4.05GB
svm     vol20  Enabled Idle for 00:01:00  Tue Dec 12 08:26:24 2023 Tue Dec 12 08:28:48 2023 4.05GB
svm     vol3   Enabled Idle for 00:00:56  Tue Dec 12 08:26:29 2023 Tue Dec 12 08:28:52 2023 4.06GB
svm     vol4   Enabled Idle for 00:00:53  Tue Dec 12 08:26:29 2023 Tue Dec 12 08:28:55 2023 4.05GB
svm     vol5   Enabled Idle for 00:00:53  Tue Dec 12 08:26:32 2023 Tue Dec 12 08:28:55 2023 4.06GB
svm     vol6   Enabled Idle for 00:00:22  Tue Dec 12 08:28:32 2023 Tue Dec 12 08:29:26 2023 4.05GB
svm     vol7   Enabled Idle for 00:00:09  Tue Dec 12 08:28:37 2023 Tue Dec 12 08:29:39 2023 4.07GB
svm     vol8   Enabled Idle for 00:00:04  Tue Dec 12 08:28:45 2023 Tue Dec 12 08:29:44 2023 4.05GB
svm     vol9   Enabled Idle for 00:00:04  Tue Dec 12 08:28:45 2023 Tue Dec 12 08:29:44 2023 4.05GB
20 entries were displayed.
After those eight volumes completed, the remaining four Storage Efficiency runs were processed.
This confirms that a single file system can run Storage Efficiency on at most eight volumes concurrently.
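For reference, every test volume sits on aggr1, and in ONTAP an aggregate is owned by a single node of the HA pair at any given time, which would be consistent with seeing eight concurrent operations rather than sixteen. A minimal sketch to check the placement (the host name is a placeholder):

# A sketch: check which node owns the volumes and the aggregate
ssh fsxadmin@<management-endpoint> "volume show -volume vol* -fields node"
ssh fsxadmin@<management-endpoint> "storage aggregate show -fields node"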
The Inactive data compression case
I was curious about Inactive data compression as well, so let's check. Is it also capped at eight? Or, since Inactive data compression operates at the aggregate layer, is the maximum concurrency a different number?
Inactive data compression is disabled by default.
::*> volume efficiency inactive-data-compression show
Vserver    Volume Is-Enabled Scan Mode Progress Status  Compression-Algorithm
---------- ------ ---------- --------- -------- ------- ---------------------
svm        vol1   false      -         IDLE     FAILURE lzopro
svm        vol10  false      -         IDLE     FAILURE lzopro
svm        vol11  false      -         IDLE     FAILURE lzopro
svm        vol12  false      -         IDLE     FAILURE lzopro
svm        vol13  false      -         IDLE     FAILURE lzopro
svm        vol14  false      -         IDLE     FAILURE lzopro
svm        vol15  false      -         IDLE     FAILURE lzopro
svm        vol16  false      -         IDLE     FAILURE lzopro
svm        vol17  false      -         IDLE     FAILURE lzopro
svm        vol18  false      -         IDLE     FAILURE lzopro
svm        vol19  false      -         IDLE     FAILURE lzopro
svm        vol2   false      -         IDLE     FAILURE lzopro
svm        vol20  false      -         IDLE     FAILURE lzopro
svm        vol3   false      -         IDLE     FAILURE lzopro
svm        vol4   false      -         IDLE     FAILURE lzopro
svm        vol5   false      -         IDLE     FAILURE lzopro
svm        vol6   false      -         IDLE     FAILURE lzopro
svm        vol7   false      -         IDLE     FAILURE lzopro
svm        vol8   false      -         IDLE     FAILURE lzopro
svm        vol9   false      -         IDLE     FAILURE lzopro
20 entries were displayed.
Let's enable it.
::*> volume efficiency inactive-data-compression modify -volume vol* -is-enabled true
20 entries were modified.

::*> volume efficiency inactive-data-compression show -fields progress, status, percentage, last-op-start-time, last-op-end-time, is-enabled
vserver volume progress status  percentage is-enabled last-op-start-time last-op-end-time
------- ------ -------- ------- ---------- ---------- ------------------ ----------------
svm     vol1   IDLE     FAILURE -          true       0                  0
svm     vol10  IDLE     FAILURE -          true       0                  0
svm     vol11  IDLE     FAILURE -          true       0                  0
svm     vol12  IDLE     FAILURE -          true       0                  0
svm     vol13  IDLE     FAILURE -          true       0                  0
svm     vol14  IDLE     FAILURE -          true       0                  0
svm     vol15  IDLE     FAILURE -          true       0                  0
svm     vol16  IDLE     FAILURE -          true       0                  0
svm     vol17  IDLE     FAILURE -          true       0                  0
svm     vol18  IDLE     FAILURE -          true       0                  0
svm     vol19  IDLE     FAILURE -          true       0                  0
svm     vol2   IDLE     FAILURE -          true       0                  0
svm     vol20  IDLE     FAILURE -          true       0                  0
svm     vol3   IDLE     FAILURE -          true       0                  0
svm     vol4   IDLE     FAILURE -          true       0                  0
svm     vol5   IDLE     FAILURE -          true       0                  0
svm     vol6   IDLE     FAILURE -          true       0                  0
svm     vol7   IDLE     FAILURE -          true       0                  0
svm     vol8   IDLE     FAILURE -          true       0                  0
svm     vol9   IDLE     FAILURE -          true       0                  0
20 entries were displayed.
Now start Inactive data compression on every volume.
::*> volume efficiency inactive-data-compression start -volume vol* -inactive-days 0
Inactive data compression scan started on volume "vol1" in Vserver "svm"

Error: command failed on vserver "svm" volume "vol10": Failed to start inactive data compression scan on volume "vol10" in Vserver "svm". Reason: "Maximum scans already running."

Warning: Do you want to continue running this command? {y|n}: y

Error: command failed on vserver "svm" volume "vol11": Failed to start inactive data compression scan on volume "vol11" in Vserver "svm". Reason: "Maximum scans already running."

Warning: Do you want to continue running this command? {y|n}: y

Error: command failed on vserver "svm" volume "vol12": Failed to start inactive data compression scan on volume "vol12" in Vserver "svm". Reason: "Maximum scans already running."

Warning: Do you want to continue running this command? {y|n}: y
.
.
.
From the second volume onward, Inactive data compression fails to start with the reason "Maximum scans already running."
Check the execution status.
::*> volume efficiency inactive-data-compression show -fields progress, status, percentage, last-op-start-time, last-op-end-time, is-enabled
vserver volume progress status  percentage is-enabled last-op-start-time last-op-end-time
------- ------ -------- ------- ---------- ---------- ------------------ ----------------
svm     vol1   RUNNING  FAILURE 0%         true       0                  0
svm     vol10  IDLE     FAILURE -          true       0                  0
svm     vol11  IDLE     FAILURE -          true       0                  0
svm     vol12  IDLE     FAILURE -          true       0                  0
svm     vol13  IDLE     FAILURE -          true       0                  0
svm     vol14  IDLE     FAILURE -          true       0                  0
svm     vol15  IDLE     FAILURE -          true       0                  0
svm     vol16  IDLE     FAILURE -          true       0                  0
svm     vol17  IDLE     FAILURE -          true       0                  0
svm     vol18  IDLE     FAILURE -          true       0                  0
svm     vol19  IDLE     FAILURE -          true       0                  0
svm     vol2   IDLE     FAILURE -          true       0                  0
svm     vol20  IDLE     FAILURE -          true       0                  0
svm     vol3   IDLE     FAILURE -          true       0                  0
svm     vol4   IDLE     FAILURE -          true       0                  0
svm     vol5   IDLE     FAILURE -          true       0                  0
svm     vol6   IDLE     FAILURE -          true       0                  0
svm     vol7   IDLE     FAILURE -          true       0                  0
svm     vol8   IDLE     FAILURE -          true       0                  0
svm     vol9   IDLE     FAILURE -          true       0                  0
20 entries were displayed.

::*> volume efficiency inactive-data-compression show -fields progress, status, percentage, last-op-start-time, last-op-end-time, is-enabled
vserver volume progress status  percentage is-enabled last-op-start-time last-op-end-time
------- ------ -------- ------- ---------- ---------- ------------------ ----------------
svm     vol1   IDLE     SUCCESS -          true       16                 15
svm     vol10  IDLE     FAILURE -          true       0                  0
svm     vol11  IDLE     FAILURE -          true       0                  0
svm     vol12  IDLE     FAILURE -          true       0                  0
svm     vol13  IDLE     FAILURE -          true       0                  0
svm     vol14  IDLE     FAILURE -          true       0                  0
svm     vol15  IDLE     FAILURE -          true       0                  0
svm     vol16  IDLE     FAILURE -          true       0                  0
svm     vol17  IDLE     FAILURE -          true       0                  0
svm     vol18  IDLE     FAILURE -          true       0                  0
svm     vol19  IDLE     FAILURE -          true       0                  0
svm     vol2   IDLE     FAILURE -          true       0                  0
svm     vol20  IDLE     FAILURE -          true       0                  0
svm     vol3   IDLE     FAILURE -          true       0                  0
svm     vol4   IDLE     FAILURE -          true       0                  0
svm     vol5   IDLE     FAILURE -          true       0                  0
svm     vol6   IDLE     FAILURE -          true       0                  0
svm     vol7   IDLE     FAILURE -          true       0                  0
svm     vol8   IDLE     FAILURE -          true       0                  0
svm     vol9   IDLE     FAILURE -          true       0                  0
20 entries were displayed.
Nothing appears to be queued, and even after the first volume's Inactive data compression completed, the scans on the other volumes did not start.
Let's try once more.
FsxId0e124ba7bf90735da::*> volume efficiency inactive-data-compression start -volume vol2 -inactive-days 0
Inactive data compression scan started on volume "vol2" in Vserver "svm"

FsxId0e124ba7bf90735da::*> volume efficiency inactive-data-compression start -volume vol3 -inactive-days 0

Error: command failed: Failed to start inactive data compression scan on volume "vol3" in Vserver "svm". Reason: "Maximum scans already running."

FsxId0e124ba7bf90735da::*> volume efficiency inactive-data-compression start -volume vol3 -inactive-days 0

Error: command failed: Failed to start inactive data compression scan on volume "vol3" in Vserver "svm". Reason: "Maximum scans already running."

FsxId0e124ba7bf90735da::*> volume efficiency inactive-data-compression start -volume vol3 -inactive-days 0

Error: command failed: Failed to start inactive data compression scan on volume "vol3" in Vserver "svm". Reason: "Maximum scans already running."

::*> volume efficiency inactive-data-compression show -fields progress, status, percentage, last-op-start-time, last-op-end-time, is-enabled
vserver volume progress status  percentage is-enabled last-op-start-time last-op-end-time
------- ------ -------- ------- ---------- ---------- ------------------ ----------------
svm     vol1   IDLE     SUCCESS -          true       145                144
svm     vol10  IDLE     FAILURE -          true       0                  0
svm     vol11  IDLE     FAILURE -          true       0                  0
svm     vol12  IDLE     FAILURE -          true       0                  0
svm     vol13  IDLE     FAILURE -          true       0                  0
svm     vol14  IDLE     FAILURE -          true       0                  0
svm     vol15  IDLE     FAILURE -          true       0                  0
svm     vol16  IDLE     FAILURE -          true       0                  0
svm     vol17  IDLE     FAILURE -          true       0                  0
svm     vol18  IDLE     FAILURE -          true       0                  0
svm     vol19  IDLE     FAILURE -          true       0                  0
svm     vol2   RUNNING  FAILURE 0%         true       0                  0
svm     vol20  IDLE     FAILURE -          true       0                  0
svm     vol3   IDLE     FAILURE -          true       0                  0
svm     vol4   IDLE     FAILURE -          true       0                  0
svm     vol5   IDLE     FAILURE -          true       0                  0
svm     vol6   IDLE     FAILURE -          true       0                  0
svm     vol7   IDLE     FAILURE -          true       0                  0
svm     vol8   IDLE     FAILURE -          true       0                  0
svm     vol9   IDLE     FAILURE -          true       0                  0
20 entries were displayed.
As expected, only one scan can run at a time.
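If you really do need to trigger it manually across many volumes, the only option seems to be one volume at a time. A minimal, untested sketch: it assumes, as the outputs above suggest, that the progress field reports RUNNING while a scan is active, and the host name is a placeholder.

# A sketch: run inactive data compression volume by volume,
# waiting for each scan to finish before starting the next
for i in $(seq 1 20); do
  ssh fsxadmin@<management-endpoint> \
    "set -confirmations off; set -privilege advanced; volume efficiency inactive-data-compression start -volume vol${i} -inactive-days 0"
  # Poll until the scan leaves the RUNNING state
  while ssh fsxadmin@<management-endpoint> \
      "set -confirmations off; set -privilege advanced; volume efficiency inactive-data-compression show -volume vol${i} -fields progress" \
      | grep -q RUNNING; do
    sleep 30
  done
done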
With a large number of volumes, doing this manually for every volume looks painful. It is probably better to just leave it to the daily background process.
Be careful when you want to run Storage Efficiency on a large number of volumes
We confirmed that in Amazon FSx for NetApp ONTAP, at most eight Storage Efficiency operations can run concurrently per node.
As a result, in an environment with many volumes and constant writes, Storage Efficiency may not run even though the change log usage exceeds 20%.
The KB below describes a case where post-process deduplication would not kick in even at 91% change log usage, because other volumes were running Storage Efficiency every five minutes.
Also, as covered in the article below, Storage Efficiency run with -scan-old-data and Inactive data compression run with -inactive-days 0 both put load on disk and CPU.
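If you want to watch the load while such a scan runs, ONTAP's statistics commands are one option; a minimal, untested sketch (advanced privilege may be required, and the host name is a placeholder):

# A sketch: sample cluster performance counters every 5 seconds for a minute
ssh fsxadmin@<management-endpoint> \
  "set -confirmations off; set -privilege advanced; statistics show-periodic -interval 5 -iterations 12"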
The first spike around 8:15 is when the test files were written, the second is when Storage Efficiency was run manually, and the third and fourth are when Inactive data compression ran.
I hope this post helps someone.
That's all from のんピ (@non____97) of the Consulting Department, AWS Business Division!