[Amazon FSx for NetApp ONTAP] Checking whether Storage Efficiency becomes enabled after a SnapMirror cutover
Does Storage Efficiency really become enabled when you cut over a SnapMirror relationship?
Hello, this is のんピ (@non____97).
Have you ever wondered whether Storage Efficiency really becomes enabled when you cut over a SnapMirror relationship? I have.
I have already verified in the following articles that Storage Efficiency can be enabled on a SnapMirror destination volume, and that Inactive data compression on the destination volume switches between enabled and disabled in step with the source volume.
So what happens to the Storage Efficiency status after a cutover if you do not enable it on the destination volume?
As described in those articles, if TSSE is enabled on the SnapMirror source volume, the data reduction achieved by TSSE is preserved during transfer even when TSSE is not enabled on the destination volume.
However, in those articles I enabled Storage Efficiency on the destination volume before the SnapMirror cutover. If Storage Efficiency is not enabled before the cutover, might it need to be enabled explicitly afterwards?
I went ahead and tried it.
Summary up front
- Regardless of the source volume's Storage Efficiency state, Storage Efficiency on the destination volume becomes enabled at the moment of the SnapMirror cutover
- If you do not want Storage Efficiency to take effect, you need to disable it at cutover time
- Snapshot Policy and junction path cannot be set when creating a DP volume
- The Snapshot Policy cannot be changed while the volume type is DP
- A junction path can be set after the volume has been created
Let's try it
Verification environment
The verification environment is as follows.
I will create SnapMirror relationships in the following two patterns:
- Create a DP volume from the management console or the ONTAP CLI, then create the SnapMirror relationship
- Create the SnapMirror relationship with snapmirror protect
I will also prepare the source volumes in the following two patterns:
- Storage Efficiency enabled
- Storage Efficiency disabled
Putting it together:
| No | How the destination volume is created | Source volume Storage Efficiency |
|---|---|---|
| 1 | Management console | Enabled |
| 2 | snapmirror protect | Enabled |
| 3 | ONTAP CLI | Disabled |
| 4 | snapmirror protect | Disabled |
The source volumes have already been created.
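For reference, here is a minimal sketch of how the two source-side states could be prepared from the ONTAP CLI; I created mine ahead of time, so treat the exact flags as assumptions (aggr1 is the aggregate name on FSx for ONTAP file systems):
volume create -vserver svm -volume vol_sm_se_test_pattern1 -aggregate aggr1 -size 32GB -junction-path /vol_sm_se_test_pattern1
volume efficiency on -vserver svm -volume vol_sm_se_test_pattern1
Patterns 3 and 4 were simply left with volume efficiency off.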
::*> volume show -volume vol_sm_se_test_pattern* -fields available, filesystem-size, total, user, used, percent-used, size, sis-space-saved, sis-space-saved-percent, type, is-sis-volume, is-sis-logging-enabled, is-sis-state-enabled
vserver volume size user available filesystem-size total used percent-used type is-sis-volume is-sis-logging-enabled is-sis-state-enabled sis-space-saved sis-space-saved-percent
------- ----------------------- ---- ---- --------- --------------- ------- ----- ------------ ---- ------------- ---------------------- -------------------- --------------- -----------------------
svm vol_sm_se_test_pattern1 32GB 0 30.40GB 32GB 30.40GB 332KB 0% RW true true true 0B 0%
svm vol_sm_se_test_pattern2 32GB 0 30.40GB 32GB 30.40GB 324KB 0% RW true true true 0B 0%
svm vol_sm_se_test_pattern3 32GB 0 30.40GB 32GB 30.40GB 380KB 0% RW true false false 0B 0%
svm vol_sm_se_test_pattern4 32GB 0 30.40GB 32GB 30.40GB 380KB 0% RW true false false 0B 0%
4 entries were displayed.
::*> volume efficiency show -volume vol_sm_se_test_pattern*
Vserver Volume State Status Progress Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm vol_sm_se_test_pattern1 Idle Idle for 05:41:56 auto
Enabled
svm vol_sm_se_test_pattern2 Idle Idle for 05:41:37 auto
Enabled
svm vol_sm_se_test_pattern3 Idle Idle for 05:41:16 auto
Disabled
svm vol_sm_se_test_pattern4 Idle Idle for 05:40:32 auto
Disabled
4 entries were displayed.
Inactive data compression remains disabled.
::*> volume efficiency inactive-data-compression show -volume vol_sm_se_test_pattern*
Vserver Volume Is-Enabled Scan Mode Progress Status Compression-Algorithm
---------- ------ ---------- --------- -------- ------ ---------------------
svm vol_sm_se_test_pattern1
false - IDLE SUCCESS
lzopro
svm vol_sm_se_test_pattern2
false - IDLE SUCCESS
lzopro
svm vol_sm_se_test_pattern3
false - IDLE SUCCESS
lzopro
svm vol_sm_se_test_pattern4
false - IDLE SUCCESS
lzopro
4 entries were displayed.
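Incidentally, Inactive data compression can be toggled per volume from the ONTAP CLI; a minimal sketch, not run as part of this verification:
volume efficiency inactive-data-compression modify -vserver svm -volume vol_sm_se_test_pattern1 -is-enabled true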
Creating the DP volumes
Let's create the DP volumes.
First, I create one from the management console.
It seems Storage Efficiency cannot be configured at creation time. Snapshot Policy and junction path cannot be set either.
Let's also check from the ONTAP CLI whether a Snapshot Policy and junction path can be set at volume creation time.
::*> volume create -volume vol_sm_se_test_pattern3_dst -vserver svm2 -state online -policy default -aggregate aggr1 -type DP -size 32GB -junction-active true -snapshot-policy default -junction-path /vol_sm_se_test_pattern3_dst
Error: command failed: Snapshot policy must be "none" for DC, DP and LS volumes.
::*> volume create -volume vol_sm_se_test_pattern3_dst -vserver svm2 -state online -policy default -aggregate aggr1 -type DP -size 32GB -junction-active true -snapshot-policy none -junction-path /vol_sm_se_test_pattern3_dst
Error: command failed: Only volumes of type "RW" can be mounted during create.
::*> volume create -volume vol_sm_se_test_pattern3_dst -vserver svm2 -state online -policy default -aggregate aggr1 -type DP -size 32GB -junction-active true -snapshot-policy none
[Job 2140] Job succeeded: Successful
So a Snapshot Policy and junction path indeed cannot be set when creating a DP volume.
The current state of the volumes is as follows.
::*> volume show -volume vol_sm_se_test_pattern* -fields available, filesystem-size, total, user, used, percent-used, size, sis-space-saved, sis-space-saved-percent, type, is-sis-volume, is-sis-logging-enabled, is-sis-state-enabled
vserver volume size user available filesystem-size total used percent-used type is-sis-volume is-sis-logging-enabled is-sis-state-enabled sis-space-saved sis-space-saved-percent
------- ----------------------- ---- ---- --------- --------------- ------- ----- ------------ ---- ------------- ---------------------- -------------------- --------------- -----------------------
svm vol_sm_se_test_pattern1 32GB 0 30.40GB 32GB 30.40GB 332KB 0% RW true true true 0B 0%
svm vol_sm_se_test_pattern2 32GB 0 30.40GB 32GB 30.40GB 324KB 0% RW true true true 0B 0%
svm vol_sm_se_test_pattern3 32GB 0 30.40GB 32GB 30.40GB 380KB 0% RW true false false 0B 0%
svm vol_sm_se_test_pattern4 32GB 0 30.40GB 32GB 30.40GB 380KB 0% RW true false false 0B 0%
svm2 vol_sm_se_test_pattern1_dst
32GB - 32.00GB 32GB 32GB 268KB 0% DP false false false 0B 0%
svm2 vol_sm_se_test_pattern3_dst
32GB - 32.00GB 32GB 32GB 268KB 0% DP false false false 0B 0%
6 entries were displayed.
On both DP volumes, all of the is-sis-* properties are false.
Creating the SnapMirror relationships
Next, let's create the SnapMirror relationships.
::*> snapmirror create -source-path svm:vol_sm_se_test_pattern1 -destination-path svm2:vol_sm_se_test_pattern1_dst -policy MirrorAllSnapshots
Operation succeeded: snapmirror create for the relationship with destination "svm2:vol_sm_se_test_pattern1_dst".
::*> snapmirror create -source-path svm:vol_sm_se_test_pattern3 -destination-path svm2:vol_sm_se_test_pattern3_dst -policy MirrorAllSnapshots
Operation succeeded: snapmirror create for the relationship with destination "svm2:vol_sm_se_test_pattern3_dst".
::*> snapmirror initialize -destination-path svm2:vol_sm_se_test_pattern1_dst
Operation is queued: snapmirror initialize of destination "svm2:vol_sm_se_test_pattern1_dst".
::*> snapmirror initialize -destination-path svm2:vol_sm_se_test_pattern3_dst
Operation is queued: snapmirror initialize of destination "svm2:vol_sm_se_test_pattern3_dst".
::*> snapmirror protect -destination-vserver svm2 -path-list svm:vol_sm_se_test_pattern2, svm:vol_sm_se_test_pattern4 -policy MirrorAllSnapshots -auto-initialize true -support-tiering true
[Job 2145] Job is queued: snapmirror protect for list of source endpoints beginning with "svm:vol_sm_se_test_pattern2".
::*> snapmirror show -destination-path svm2:vol_sm_se_test_pattern*
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol_sm_se_test_pattern2
XDP svm2:vol_sm_se_test_pattern2_dst
Uninitialized
Transferring 0B true 01/20 08:56:26
::*> snapmirror show -destination-path svm2:vol_sm_se_test_pattern*
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol_sm_se_test_pattern1
XDP svm2:vol_sm_se_test_pattern1_dst
Snapmirrored
Idle - true -
svm:vol_sm_se_test_pattern2
XDP svm2:vol_sm_se_test_pattern2_dst
Snapmirrored
Idle - true -
svm:vol_sm_se_test_pattern3
XDP svm2:vol_sm_se_test_pattern3_dst
Snapmirrored
Idle - true -
svm:vol_sm_se_test_pattern4
XDP svm2:vol_sm_se_test_pattern4_dst
Snapmirrored
Idle - true -
4 entries were displayed.
The current list of volumes is as follows.
::*> volume show -volume vol_sm_se_test_pattern* -fields available, filesystem-size, total, user, used, percent-used, size, sis-space-saved, sis-space-saved-percent, type, is-sis-volume, is-sis-logging-enabled, is-sis-state-enabled
vserver volume size user available filesystem-size total used percent-used type is-sis-volume is-sis-logging-enabled is-sis-state-enabled sis-space-saved sis-space-saved-percent
------- ----------------------- ---- ---- --------- --------------- ------- ----- ------------ ---- ------------- ---------------------- -------------------- --------------- -----------------------
svm vol_sm_se_test_pattern1 32GB 0 30.40GB 32GB 30.40GB 368KB 0% RW true true true 0B 0%
svm vol_sm_se_test_pattern2 32GB 0 30.40GB 32GB 30.40GB 348KB 0% RW true true true 0B 0%
svm vol_sm_se_test_pattern3 32GB 0 30.40GB 32GB 30.40GB 408KB 0% RW true false false 0B 0%
svm vol_sm_se_test_pattern4 32GB 0 30.40GB 32GB 30.40GB 412KB 0% RW true false false 0B 0%
svm2 vol_sm_se_test_pattern1_dst
32GB 0 32.00GB 32GB 32GB 500KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern2_dst
128.0MB
0 121.3MB 128.0MB 121.6MB 312KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern3_dst
32GB 0 32.00GB 32GB 32GB 500KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern4_dst
128.0MB
0 121.3MB 128.0MB 121.6MB 312KB 0% DP true false false 0B 0%
8 entries were displayed.
Creating the SnapMirror relationships flipped is-sis-volume on the DP volumes from false to true. The source volume vol_sm_se_test_pattern3 does not have Storage Efficiency enabled, so this change does not appear to track the state of the source volume.
Checking Storage Efficiency on the DP volumes
Let's look at the Storage Efficiency details of the destination volumes.
::*> volume efficiency show -volume vol_sm_se_test_pattern*
Vserver Volume State Status Progress Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm vol_sm_se_test_pattern1 Idle Idle for 06:02:01 auto
Enabled
svm vol_sm_se_test_pattern2 Idle Idle for 06:01:42 auto
Enabled
svm vol_sm_se_test_pattern3 Idle Idle for 06:01:21 auto
Disabled
svm vol_sm_se_test_pattern4 Idle Idle for 06:00:37 auto
Disabled
svm2 vol_sm_se_test_pattern1_dst
Idle Idle for 00:00:00 -
Disabled
svm2 vol_sm_se_test_pattern2_dst
Idle Idle for 00:00:00 -
Disabled
svm2 vol_sm_se_test_pattern3_dst
Idle Idle for 00:00:00 -
Disabled
svm2 vol_sm_se_test_pattern4_dst
Idle Idle for 00:00:00 -
Disabled
8 entries were displayed.
All of the SnapMirror destination volumes show Disabled.
Changing the Snapshot Policy of a destination volume
A short digression: let's check whether the Snapshot Policy of a destination volume can be changed.
::*> volume modify -vserver svm2 -volume vol_sm_se_test_pattern1_dst -snapshot-policy default
Error: command failed: You cannot modify Snapshot policy on volumes of type DC, DP, LS or TMP.
No, it cannot. It seems the Snapshot Policy cannot be configured while the volume is a DP volume.
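Presumably the same command succeeds once the cutover promotes the volume to RW; a sketch of that flow (the modify step was not run as part of this verification):
snapmirror break -destination-path svm2:vol_sm_se_test_pattern1_dst
volume modify -vserver svm2 -volume vol_sm_se_test_pattern1_dst -snapshot-policy default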
Setting a junction path on a destination volume
Let's also check whether a junction path can be set on a destination volume.
::*> volume mount -vserver svm2 -volume vol_sm_se_test_pattern1_dst -junction-path /vol_sm_se_test_pattern1_dst
Queued private job: 1124
::*> volume show -fields junction-path, junction-active -volume vol_sm_se_test_pattern*
vserver volume junction-path junction-active
------- ----------------------- ------------------------ ---------------
svm vol_sm_se_test_pattern1 /vol_sm_se_test_pattern1 true
svm vol_sm_se_test_pattern2 /vol_sm_se_test_pattern2 true
svm vol_sm_se_test_pattern3 /vol_sm_se_test_pattern3 true
svm vol_sm_se_test_pattern4 /vol_sm_se_test_pattern4 true
svm2 vol_sm_se_test_pattern1_dst
/vol_sm_se_test_pattern1_dst
true
svm2 vol_sm_se_test_pattern2_dst
- -
svm2 vol_sm_se_test_pattern3_dst
- -
svm2 vol_sm_se_test_pattern4_dst
- -
8 entries were displayed.
That worked without any problem. It seems the junction path only cannot be set at DP volume creation time.
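Incidentally, once the junction path is set, an NFS client should be able to reach the destination volume through it even while the volume is still type DP (read-only); a sketch with a placeholder DNS name for svm2's NFS endpoint:
$ sudo mkdir -p /mnt/fsxn/vol_sm_se_test_pattern1_dst
$ sudo mount -t nfs <svm2-nfs-dns-name>:/vol_sm_se_test_pattern1_dst /mnt/fsxn/vol_sm_se_test_pattern1_dst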
Adding test files to the source volumes
Let's add test files to the source volumes.
First, mount each source volume.
$ sudo mkdir -p /mnt/fsxn/vol_sm_se_test_pattern1
$ sudo mkdir -p /mnt/fsxn/vol_sm_se_test_pattern2
$ sudo mkdir -p /mnt/fsxn/vol_sm_se_test_pattern3
$ sudo mkdir -p /mnt/fsxn/vol_sm_se_test_pattern4
$ sudo mount -t nfs svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern1 /mnt/fsxn/vol_sm_se_test_pattern1
$ sudo mount -t nfs svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern2 /mnt/fsxn/vol_sm_se_test_pattern2
$ sudo mount -t nfs svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern3 /mnt/fsxn/vol_sm_se_test_pattern3
$ sudo mount -t nfs svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern4 /mnt/fsxn/vol_sm_se_test_pattern4
$ df -hT -t nfs4
Filesystem Type Size Used Avail Use% Mounted on
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern1 nfs4 31G 384K 31G 1% /mnt/fsxn/vol_sm_se_test_pattern1
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern2 nfs4 31G 320K 31G 1% /mnt/fsxn/vol_sm_se_test_pattern2
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern3 nfs4 31G 384K 31G 1% /mnt/fsxn/vol_sm_se_test_pattern3
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern4 nfs4 31G 384K 31G 1% /mnt/fsxn/vol_sm_se_test_pattern4
Create a 2GiB file on each source volume.
$ for i in {1..4}; do
  file_path="/mnt/fsxn/vol_sm_se_test_pattern${i}/1KB_random_pattern_text_block_2GiB"
  echo "Creating file ${file_path}"
  # Generate one 1KiB block of random base64 text, repeat it endlessly with yes,
  # strip the newlines yes inserts, and write 2GiB of the stream,
  # so that deduplication has identical 1KiB patterns to find
  yes \
    $(base64 /dev/urandom -w 0 \
      | head -c 1K
    ) \
    | tr -d '\n' \
    | sudo dd of="${file_path}" bs=1M count=2048 iflag=fullblock
done
Creating file /mnt/fsxn/vol_sm_se_test_pattern1/1KB_random_pattern_text_block_2GiB
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 13.8272 s, 155 MB/s
Creating file /mnt/fsxn/vol_sm_se_test_pattern2/1KB_random_pattern_text_block_2GiB
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 14.4318 s, 149 MB/s
Creating file /mnt/fsxn/vol_sm_se_test_pattern3/1KB_random_pattern_text_block_2GiB
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 14.3543 s, 150 MB/s
Creating file /mnt/fsxn/vol_sm_se_test_pattern4/1KB_random_pattern_text_block_2GiB
2048+0 records in
2048+0 records out
2147483648 bytes (2.1 GB, 2.0 GiB) copied, 14.3706 s, 149 MB/s
$ df -hT -t nfs4
Filesystem Type Size Used Avail Use% Mounted on
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern1 nfs4 31G 1.9G 29G 7% /mnt/fsxn/vol_sm_se_test_pattern1
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern2 nfs4 31G 1.9G 29G 7% /mnt/fsxn/vol_sm_se_test_pattern2
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern3 nfs4 31G 2.1G 29G 7% /mnt/fsxn/vol_sm_se_test_pattern3
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern4 nfs4 31G 2.1G 29G 7% /mnt/fsxn/vol_sm_se_test_pattern4
Let's check whether deduplication is working on the volumes with Storage Efficiency enabled.
::*> volume show -volume vol_sm_se_test_pattern* -fields available, filesystem-size, total, user, used, percent-used, size, sis-space-saved, sis-space-saved-percent, type, is-sis-volume, is-sis-logging-enabled, is-sis-state-enabled
vserver volume size user available filesystem-size total used percent-used type is-sis-volume is-sis-logging-enabled is-sis-state-enabled sis-space-saved sis-space-saved-percent
------- ----------------------- ---- ---- --------- --------------- ------- ------ ------------ ---- ------------- ---------------------- -------------------- --------------- -----------------------
svm vol_sm_se_test_pattern1 32GB 0 28.57GB 32GB 30.40GB 1.83GB 6% RW true true true 197.5MB 10%
svm vol_sm_se_test_pattern2 32GB 0 28.54GB 32GB 30.40GB 1.85GB 6% RW true true true 175.7MB 8%
svm vol_sm_se_test_pattern3 32GB 0 28.39GB 32GB 30.40GB 2.01GB 6% RW true false false 0B 0%
svm vol_sm_se_test_pattern4 32GB 0 28.39GB 32GB 30.40GB 2.01GB 6% RW true false false 0B 0%
svm2 vol_sm_se_test_pattern1_dst
32GB 0 32.00GB 32GB 32GB 524KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern2_dst
128.0MB
0 121.3MB 128.0MB 121.6MB 312KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern3_dst
32GB 0 32.00GB 32GB 32GB 524KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern4_dst
128.0MB
0 121.3MB 128.0MB 121.6MB 312KB 0% DP true false false 0B 0%
8 entries were displayed.
It is working.
To make the deduplication results even clearer, let's make copies of the file we created.
$ for i in {1..4}; do
file_path="/mnt/fsxn/vol_sm_se_test_pattern${i}/1KB_random_pattern_text_block_2GiB"
echo "Copying file ${file_path}"
for j in {1..4}; do
sudo cp $file_path "${file_path}_copy${j}"
done
done
Copying file /mnt/fsxn/vol_sm_se_test_pattern1/1KB_random_pattern_text_block_2GiB
Copying file /mnt/fsxn/vol_sm_se_test_pattern2/1KB_random_pattern_text_block_2GiB
Copying file /mnt/fsxn/vol_sm_se_test_pattern3/1KB_random_pattern_text_block_2GiB
Copying file /mnt/fsxn/vol_sm_se_test_pattern4/1KB_random_pattern_text_block_2GiB
$ ls -lR /mnt/fsxn/vol_sm_se_test_pattern*
/mnt/fsxn/vol_sm_se_test_pattern1:
total 10527100
-rw-r--r--. 1 root root 2147483648 Jan 20 09:20 1KB_random_pattern_text_block_2GiB
-rw-r--r--. 1 root root 2147483648 Jan 20 09:24 1KB_random_pattern_text_block_2GiB_copy1
-rw-r--r--. 1 root root 2147483648 Jan 20 09:24 1KB_random_pattern_text_block_2GiB_copy2
-rw-r--r--. 1 root root 2147483648 Jan 20 09:24 1KB_random_pattern_text_block_2GiB_copy3
-rw-r--r--. 1 root root 2147483648 Jan 20 09:25 1KB_random_pattern_text_block_2GiB_copy4
/mnt/fsxn/vol_sm_se_test_pattern2:
total 10527100
-rw-r--r--. 1 root root 2147483648 Jan 20 09:20 1KB_random_pattern_text_block_2GiB
-rw-r--r--. 1 root root 2147483648 Jan 20 09:25 1KB_random_pattern_text_block_2GiB_copy1
-rw-r--r--. 1 root root 2147483648 Jan 20 09:25 1KB_random_pattern_text_block_2GiB_copy2
-rw-r--r--. 1 root root 2147483648 Jan 20 09:25 1KB_random_pattern_text_block_2GiB_copy3
-rw-r--r--. 1 root root 2147483648 Jan 20 09:26 1KB_random_pattern_text_block_2GiB_copy4
/mnt/fsxn/vol_sm_se_test_pattern3:
total 10527100
-rw-r--r--. 1 root root 2147483648 Jan 20 09:20 1KB_random_pattern_text_block_2GiB
-rw-r--r--. 1 root root 2147483648 Jan 20 09:26 1KB_random_pattern_text_block_2GiB_copy1
-rw-r--r--. 1 root root 2147483648 Jan 20 09:26 1KB_random_pattern_text_block_2GiB_copy2
-rw-r--r--. 1 root root 2147483648 Jan 20 09:26 1KB_random_pattern_text_block_2GiB_copy3
-rw-r--r--. 1 root root 2147483648 Jan 20 09:27 1KB_random_pattern_text_block_2GiB_copy4
/mnt/fsxn/vol_sm_se_test_pattern4:
total 10527100
-rw-r--r--. 1 root root 2147483648 Jan 20 09:20 1KB_random_pattern_text_block_2GiB
-rw-r--r--. 1 root root 2147483648 Jan 20 09:27 1KB_random_pattern_text_block_2GiB_copy1
-rw-r--r--. 1 root root 2147483648 Jan 20 09:27 1KB_random_pattern_text_block_2GiB_copy2
-rw-r--r--. 1 root root 2147483648 Jan 20 09:27 1KB_random_pattern_text_block_2GiB_copy3
-rw-r--r--. 1 root root 2147483648 Jan 20 09:27 1KB_random_pattern_text_block_2GiB_copy4
$ df -hT -t nfs4
Filesystem Type Size Used Avail Use% Mounted on
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern1 nfs4 31G 9.7G 21G 32% /mnt/fsxn/vol_sm_se_test_pattern1
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern2 nfs4 31G 9.3G 22G 31% /mnt/fsxn/vol_sm_se_test_pattern2
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern3 nfs4 31G 11G 21G 34% /mnt/fsxn/vol_sm_se_test_pattern3
svm-04f22e3a82142d131.fs-0e64a4f5386f74c87.fsx.us-east-1.amazonaws.com:/vol_sm_se_test_pattern4 nfs4 31G 11G 21G 34% /mnt/fsxn/vol_sm_se_test_pattern4
Let's check how much is being deduplicated.
::*> volume show -volume vol_sm_se_test_pattern* -fields available, filesystem-size, total, user, used, percent-used, size, sis-space-saved, sis-space-saved-percent, type, is-sis-volume, is-sis-logging-enabled, is-sis-state-enabled
vserver volume size user available filesystem-size total used percent-used type is-sis-volume is-sis-logging-enabled is-sis-state-enabled sis-space-saved sis-space-saved-percent
------- ----------------------- ---- ---- --------- --------------- ------- ------ ------------ ---- ------------- ---------------------- -------------------- --------------- -----------------------
svm vol_sm_se_test_pattern1 32GB 0 20.72GB 32GB 30.40GB 9.68GB 31% RW true true true 470.5MB 5%
svm vol_sm_se_test_pattern2 32GB 0 21.14GB 32GB 30.40GB 9.26GB 30% RW true true true 893.1MB 9%
svm vol_sm_se_test_pattern3 32GB 0 20.36GB 32GB 30.40GB 10.04GB
33% RW true false false 0B 0%
svm vol_sm_se_test_pattern4 32GB 0 20.36GB 32GB 30.40GB 10.04GB
33% RW true false false 0B 0%
svm2 vol_sm_se_test_pattern1_dst
32GB 0 32.00GB 32GB 32GB 524KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern2_dst
128.0MB
0 121.3MB 128.0MB 121.6MB 312KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern3_dst
32GB 0 32.00GB 32GB 32GB 524KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern4_dst
128.0MB
0 121.3MB 128.0MB 121.6MB 312KB 0% DP true false false 0B 0%
8 entries were displayed.
::*> volume efficiency show -volume vol_sm_se_test_pattern*
Vserver Volume State Status Progress Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm vol_sm_se_test_pattern1 Active 153072 KB (4%) Done
auto
Enabled
svm vol_sm_se_test_pattern2 Pending Idle for 06:31:39 auto
Enabled
svm vol_sm_se_test_pattern3 Idle Idle for 06:31:18 auto
Disabled
svm vol_sm_se_test_pattern4 Idle Idle for 06:30:34 auto
Disabled
svm2 vol_sm_se_test_pattern1_dst
Idle Idle for 00:00:00 -
Disabled
svm2 vol_sm_se_test_pattern2_dst
Idle Idle for 00:00:00 -
Disabled
svm2 vol_sm_se_test_pattern3_dst
Idle Idle for 00:00:00 -
Disabled
svm2 vol_sm_se_test_pattern4_dst
Idle Idle for 00:00:00 -
Disabled
8 entries were displayed.
It's working.
Storage Efficiency is in the middle of a run, so I will be bold and let it sit overnight.
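If you would rather not wait, you can poll the scanner's state, and a pass over existing data can presumably be kicked off manually with -scan-old-data; a sketch, not what I did here:
volume efficiency show -volume vol_sm_se_test_pattern* -fields state, progress
volume efficiency start -vserver svm -volume vol_sm_se_test_pattern1 -scan-old-data true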
::*> volume show -volume vol_sm_se_test_pattern* -fields available, filesystem-size, total, user, used, percent-used, size, sis-space-saved, sis-space-saved-percent, type, is-sis-volume, is-sis-logging-enabled, is-sis-state-enabled
vserver volume size user available filesystem-size total used percent-used type is-sis-volume is-sis-logging-enabled is-sis-state-enabled sis-space-saved sis-space-saved-percent
------- ----------------------- ---- ---- --------- --------------- ------- ------ ------------ ---- ------------- ---------------------- -------------------- --------------- -----------------------
svm vol_sm_se_test_pattern1 32GB 0 23.96GB 32GB 30.40GB 6.44GB 21% RW true true true 3.72GB 37%
svm vol_sm_se_test_pattern2 32GB 0 30.36GB 32GB 30.40GB 45.06MB
0% RW true true true 10.00GB 100%
svm vol_sm_se_test_pattern3 32GB 0 20.36GB 32GB 30.40GB 10.04GB
33% RW true false false 0B 0%
svm vol_sm_se_test_pattern4 32GB 0 20.36GB 32GB 30.40GB 10.04GB
33% RW true false false 0B 0%
svm2 vol_sm_se_test_pattern1_dst
32GB 0 32.00GB 32GB 32GB 600KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern2_dst
128.0MB
0 121.3MB 128.0MB 121.6MB 320KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern3_dst
32GB 0 32.00GB 32GB 32GB 600KB 0% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern4_dst
128.0MB
0 121.3MB 128.0MB 121.6MB 320KB 0% DP true false false 0B 0%
8 entries were displayed.
::*> volume efficiency show -volume vol_sm_se_test_pattern1, vol_sm_se_test_pattern2 -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size
------- ----------------------- ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm vol_sm_se_test_pattern1 Enabled Idle for 16:54:35 Mon Jan 20 09:24:19 2025 Mon Jan 20 11:24:07 2025 3.37GB 19% 62.77MB 10.16GB
svm vol_sm_se_test_pattern2 Enabled Idle for 11:22:59 Mon Jan 20 11:24:07 2025 Mon Jan 20 16:55:43 2025 9.13GB 0% 4KB 10.04GB
2 entries were displayed.
Deduplication worked quite well. vol_sm_se_test_pattern2 in particular saved a full 10.00GB while it sat overnight.
Incremental SnapMirror transfer
Let's run an incremental SnapMirror transfer.
::*> snapmirror show -destination-path svm2:vol_sm_se_test_pattern* -fields last-transfer-end-timestamp, status, state
source-path destination-path state status last-transfer-end-timestamp
--------------------------- -------------------------------- ------------ ------ ---------------------------
svm:vol_sm_se_test_pattern1 svm2:vol_sm_se_test_pattern1_dst Snapmirrored Idle 01/20 08:59:18
svm:vol_sm_se_test_pattern2 svm2:vol_sm_se_test_pattern2_dst Snapmirrored Idle 01/20 08:56:28
svm:vol_sm_se_test_pattern3 svm2:vol_sm_se_test_pattern3_dst Snapmirrored Idle 01/20 08:59:25
svm:vol_sm_se_test_pattern4 svm2:vol_sm_se_test_pattern4_dst Snapmirrored Idle 01/20 08:56:28
4 entries were displayed.
::*> snapmirror update -destination-path svm2:vol_sm_se_test_pattern*
Operation is queued: snapmirror update of destination "svm2:vol_sm_se_test_pattern1_dst".
Operation is queued: snapmirror update of destination "svm2:vol_sm_se_test_pattern2_dst".
Operation is queued: snapmirror update of destination "svm2:vol_sm_se_test_pattern3_dst".
Operation is queued: snapmirror update of destination "svm2:vol_sm_se_test_pattern4_dst".
4 entries were acted on.
::*> snapmirror show -destination-path svm2:vol_sm_se_test_pattern* -fields last-transfer-end-timestamp, status, state, total-progress
source-path destination-path state status total-progress last-transfer-end-timestamp
--------------------------- -------------------------------- ------------ ------------ -------------- ---------------------------
svm:vol_sm_se_test_pattern1 svm2:vol_sm_se_test_pattern1_dst Snapmirrored Transferring 6.14GB 01/20 08:59:18
svm:vol_sm_se_test_pattern2 svm2:vol_sm_se_test_pattern2_dst Snapmirrored Finalizing 105.5MB 01/20 08:56:28
svm:vol_sm_se_test_pattern3 svm2:vol_sm_se_test_pattern3_dst Snapmirrored Transferring 6.12GB 01/20 08:59:25
svm:vol_sm_se_test_pattern4 svm2:vol_sm_se_test_pattern4_dst Snapmirrored Transferring 6.16GB 01/20 08:56:28
4 entries were displayed.
::*> snapmirror show -destination-path svm2:vol_sm_se_test_pattern* -fields last-transfer-end-timestamp, status, state, total-progress
source-path destination-path state status total-progress last-transfer-end-timestamp
--------------------------- -------------------------------- ------------ ---------- -------------- ---------------------------
svm:vol_sm_se_test_pattern1 svm2:vol_sm_se_test_pattern1_dst Snapmirrored Finalizing 6.46GB 01/20 08:59:18
svm:vol_sm_se_test_pattern2 svm2:vol_sm_se_test_pattern2_dst Snapmirrored Idle - 01/21 04:25:49
svm:vol_sm_se_test_pattern3 svm2:vol_sm_se_test_pattern3_dst Snapmirrored Transferring
9.77GB 01/20 08:59:25
svm:vol_sm_se_test_pattern4 svm2:vol_sm_se_test_pattern4_dst Snapmirrored Transferring
9.82GB 01/20 08:56:28
4 entries were displayed.
::*> snapmirror show -destination-path svm2:vol_sm_se_test_pattern* -fields last-transfer-end-timestamp, status, state, total-progress
source-path destination-path state status total-progress last-transfer-end-timestamp
--------------------------- -------------------------------- ------------ ---------- -------------- ---------------------------
svm:vol_sm_se_test_pattern1 svm2:vol_sm_se_test_pattern1_dst Snapmirrored Finalizing 6.46GB 01/20 08:59:18
svm:vol_sm_se_test_pattern2 svm2:vol_sm_se_test_pattern2_dst Snapmirrored Idle - 01/21 04:25:49
svm:vol_sm_se_test_pattern3 svm2:vol_sm_se_test_pattern3_dst Snapmirrored Finalizing 10.24GB 01/20 08:59:25
svm:vol_sm_se_test_pattern4 svm2:vol_sm_se_test_pattern4_dst Snapmirrored Finalizing 10.24GB 01/20 08:56:28
4 entries were displayed.
::*> snapmirror show -destination-path svm2:vol_sm_se_test_pattern* -fields last-transfer-end-timestamp, last-transfer-duration, last-transfer-size, status, state, total-progress
source-path destination-path state status total-progress last-transfer-size last-transfer-duration last-transfer-end-timestamp
--------------------------- -------------------------------- ---------- ------ -------------- ------------------ ---------------------- ---------------------------
svm:vol_sm_se_test_pattern1 svm2:vol_sm_se_test_pattern1_dst Broken-off Idle - 6.46GB 0:3:6 01/21 04:26:09
svm:vol_sm_se_test_pattern2 svm2:vol_sm_se_test_pattern2_dst Broken-off Idle - 105.5MB 0:2:46 01/21 04:25:49
svm:vol_sm_se_test_pattern3 svm2:vol_sm_se_test_pattern3_dst Broken-off Idle - 10.24GB 0:3:5 01/21 04:26:08
svm:vol_sm_se_test_pattern4 svm2:vol_sm_se_test_pattern4_dst Broken-off Idle - 10.24GB 0:3:4 01/21 04:26:07
4 entries were displayed.
The transfers completed in about 3 minutes.
The following NetApp KB is a helpful reference for calculating SnapMirror throughput.
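As a quick back-of-the-envelope check of my own (treating ONTAP's reported GB as GiB): the pattern 3 relationship moved 10.24GB in 0:3:5, i.e. 185 seconds, which works out to roughly 57MiB/s:
$ echo "scale=1; 10.24 * 1024 / 185" | bc
56.6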
Let's check the current state of the destination volumes.
::*> volume show -volume vol_sm_se_test_pattern*_dst -fields available, filesystem-size, total, user, used, percent-used, size, sis-space-saved, sis-space-saved-percent, type, is-sis-volume, is-sis-logging-enabled, is-sis-state-enabled
vserver volume size user available filesystem-size total used percent-used type is-sis-volume is-sis-logging-enabled is-sis-state-enabled sis-space-saved sis-space-saved-percent
------- --------------------------- ---- ---- --------- --------------- ----- ------ ------------ ---- ------------- ---------------------- -------------------- --------------- -----------------------
svm2 vol_sm_se_test_pattern1_dst 32GB 0 25.65GB 32GB 32GB 6.35GB 19% DP true false false 3.72GB 37%
svm2 vol_sm_se_test_pattern2_dst 128.0MB
0 79.45MB 128.0MB 121.6MB
42.18MB
34% DP true false false 10.00GB 100%
svm2 vol_sm_se_test_pattern3_dst 32GB 0 21.91GB 32GB 32GB 10.09GB
31% DP true false false 0B 0%
svm2 vol_sm_se_test_pattern4_dst 12.66GB
0 1.94GB 12.66GB 12.03GB
10.09GB
83% DP true false false 0B 0%
4 entries were displayed.
::*> volume efficiency show -volume vol_sm_se_test_pattern*_dst
Vserver Volume State Status Progress Policy
---------- ---------------- --------- ----------- ------------------ ----------
svm2 vol_sm_se_test_pattern1_dst
Idle Idle for 00:00:00 -
Disabled
svm2 vol_sm_se_test_pattern2_dst
Idle Idle for 00:00:00 -
Disabled
svm2 vol_sm_se_test_pattern3_dst
Idle Idle for 00:00:00 -
Disabled
svm2 vol_sm_se_test_pattern4_dst
Idle Idle for 00:00:00 -
Disabled
4 entries were displayed.
Storage Efficiency itself is Disabled, but you can see that the data was transferred with its deduplication savings intact.
SnapMirror cutover
Now, the SnapMirror cutover.
::*> snapmirror break -destination-path svm2:vol_sm_se_test_pattern*
Operation succeeded: snapmirror break for destination "svm2:vol_sm_se_test_pattern1_dst".
Operation succeeded: snapmirror break for destination "svm2:vol_sm_se_test_pattern2_dst".
Operation succeeded: snapmirror break for destination "svm2:vol_sm_se_test_pattern3_dst".
Operation succeeded: snapmirror break for destination "svm2:vol_sm_se_test_pattern4_dst".
4 entries were acted on.
::*> snapmirror show -destination-path svm2:vol_sm_se_test_pattern*
Progress
Source Destination Mirror Relationship Total Last
Path Type Path State Status Progress Healthy Updated
----------- ---- ------------ ------- -------------- --------- ------- --------
svm:vol_sm_se_test_pattern1
XDP svm2:vol_sm_se_test_pattern1_dst
Broken-off
Idle - true -
svm:vol_sm_se_test_pattern2
XDP svm2:vol_sm_se_test_pattern2_dst
Broken-off
Idle - true -
svm:vol_sm_se_test_pattern3
XDP svm2:vol_sm_se_test_pattern3_dst
Broken-off
Idle - true -
svm:vol_sm_se_test_pattern4
XDP svm2:vol_sm_se_test_pattern4_dst
Broken-off
Idle - true -
4 entries were displayed.
Let's check the current state of the volumes.
::*> volume show -volume vol_sm_se_test_pattern* -fields available, filesystem-size, total, user, used, percent-used, size, sis-space-saved, sis-space-saved-percent, type, is-sis-volume, is-sis-logging-enabled, is-sis-state-enabled
vserver volume size user available filesystem-size total used percent-used type is-sis-volume is-sis-logging-enabled is-sis-state-enabled sis-space-saved sis-space-saved-percent
------- ----------------------- ---- ---- --------- --------------- ------- ------ ------------ ---- ------------- ---------------------- -------------------- --------------- -----------------------
svm vol_sm_se_test_pattern1 32GB 0 23.96GB 32GB 30.40GB 6.44GB 21% RW true true true 3.72GB 37%
svm vol_sm_se_test_pattern2 32GB 0 30.36GB 32GB 30.40GB 45.45MB
0% RW true true true 10.00GB 100%
svm vol_sm_se_test_pattern3 32GB 0 20.36GB 32GB 30.40GB 10.04GB
33% RW true false false 0B 0%
svm vol_sm_se_test_pattern4 32GB 0 20.36GB 32GB 30.40GB 10.04GB
33% RW true false false 0B 0%
svm2 vol_sm_se_test_pattern1_dst
32GB 0 25.65GB 32GB 32GB 6.35GB 19% RW true true true 3.74GB 37%
svm2 vol_sm_se_test_pattern2_dst
128.0MB
0 80.01MB 128.0MB 121.6MB 41.62MB
34% RW true true true 10.00GB 100%
svm2 vol_sm_se_test_pattern3_dst
32GB 0 21.91GB 32GB 32GB 10.09GB
31% RW true true true 52.73MB 1%
svm2 vol_sm_se_test_pattern4_dst
12.66GB
0 1.98GB 12.66GB 12.03GB 10.05GB
83% RW true true true 19.29MB 0%
8 entries were displayed.
::*> volume efficiency show -volume vol_sm_se_test_pattern* -fields changelog-usage, changelog-size, logical-data-size, state, progress, last-op-size, last-op-begin, last-op-end
vserver volume state progress last-op-begin last-op-end last-op-size changelog-usage changelog-size logical-data-size
------- ----------------------- ------- ----------------- ------------------------ ------------------------ ------------ --------------- -------------- -----------------
svm vol_sm_se_test_pattern1 Enabled Idle for 17:07:15 Mon Jan 20 09:24:19 2025 Mon Jan 20 11:24:07 2025 3.37GB 19% 62.77MB 10.16GB
svm vol_sm_se_test_pattern2 Enabled Idle for 11:35:39 Mon Jan 20 11:24:07 2025 Mon Jan 20 16:55:43 2025 9.13GB 0% 4KB 10.04GB
svm vol_sm_se_test_pattern3 Disabled
Idle for 25:31:02 Mon Jan 20 03:00:20 2025 Mon Jan 20 03:00:20 2025 0B 0% 0B 10.04GB
svm vol_sm_se_test_pattern4 Disabled
Idle for 25:30:18 Mon Jan 20 03:01:04 2025 Mon Jan 20 03:01:04 2025 0B 0% 0B 10.04GB
svm2 vol_sm_se_test_pattern1_dst
Enabled Idle for 00:01:37 Tue Jan 21 04:29:45 2025 Tue Jan 21 04:29:45 2025 0B 0% 0B 10.06GB
svm2 vol_sm_se_test_pattern2_dst
Enabled Idle for 00:01:34 Tue Jan 21 04:29:48 2025 Tue Jan 21 04:29:48 2025 0B 0% 0B 10.04GB
svm2 vol_sm_se_test_pattern3_dst
Enabled Idle for 00:01:31 Tue Jan 21 04:29:51 2025 Tue Jan 21 04:29:51 2025 0B 0% 0B 10.10GB
svm2 vol_sm_se_test_pattern4_dst
Enabled Idle for 00:01:28 Tue Jan 21 04:29:54 2025 Tue Jan 21 04:29:54 2025 0B 0% 0B 10.07GB
8 entries were displayed.
Regardless of the Storage Efficiency state of the source volumes, Storage Efficiency on the destination volumes became enabled at the moment of the SnapMirror cutover.
You can also see that even where Storage Efficiency was disabled on the source volume, a little deduplication took effect on the destination volume after the cutover.
So it appears there is no need to explicitly enable Storage Efficiency before a SnapMirror cutover.
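Conversely, if you do not want Storage Efficiency on the promoted volume, disabling it right after the break would presumably look like the following; a sketch that was not run as part of this verification (volume efficiency undo is an advanced-privilege command and is only needed if you also want to back out savings that have already been applied):
volume efficiency off -vserver svm2 -volume vol_sm_se_test_pattern3_dst
volume efficiency undo -vserver svm2 -volume vol_sm_se_test_pattern3_dst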
Storage Efficiency becomes enabled at SnapMirror cutover even without explicitly enabling it on the destination
I checked whether Storage Efficiency becomes enabled after a SnapMirror cutover on Amazon FSx for NetApp ONTAP.
The conclusion: even without explicitly enabling Storage Efficiency on the destination, it becomes enabled at the moment of the SnapMirror cutover. That saves an extra step after creating the SnapMirror relationship, which is much appreciated.
Conversely, if you do not want Storage Efficiency to take effect, you need to disable it at cutover time.
I hope this article helps someone.
This was のんピ (@non____97) from the Consulting Department, Cloud Business Division!