Why it's hard to enforce IMDSv2-only on EC2 instances running Red Hat Enterprise Linux with multiple ENIs or IP addresses

Until nm-cloud-setup supports IMDSv2, environments that assign multiple IP addresses will likely have to keep using IMDSv1
2023.08.19

What is nm-cloud-setup, anyway?

Hello, this is non-P (@non____97).

Have you ever wanted to restrict instance metadata access to IMDSv2 only? I have.

When enforcing IMDSv2 only, you need to confirm that nothing is still using IMDSv1.

I previously posted an article about the IMDS Packet Analyzer, which can identify the processes that use IMDSv1.

During that verification I found that nm-cloud-setup uses IMDSv1 on Red Hat Enterprise Linux (hereafter RHEL) EC2 instances.

At the time I glossed over what nm-cloud-setup actually is, so this time I investigated it properly and will share the results. If you don't need it, I think it is fine to stop this service.

Addendum 2023/12/8 (start)

By applying the errata released around mid-November 2023, nm-cloud-setup now supports IMDSv2.

Checking the errata for each OS, you can see they state "Add support for IMDSv2 to nm-cloud-setup".

The corresponding Bugzilla links are as follows.

Addendum 2023/12/8 (end)

Straight to the summary

  • nm-cloud-setup is a service that fetches information from the metadata service and updates the VM's network configuration
  • As of 2023/8/19, nm-cloud-setup did not support IMDSv2 (since around mid-November 2023, it does)
    • If only IMDSv2 is enabled, nm-cloud-setup performs no processing
  • When multiple ENIs or IP addresses are assigned, nm-cloud-setup updates the route tables inside the OS so that the added IP addresses can be used for communication
    • If nm-cloud-setup cannot run, you have to perform this configuration manually
    • If nothing is done, communication over the added ENIs or IP addresses does not work
  • If multiple ENIs or IP addresses are not assigned, nm-cloud-setup not running has no impact

What is nm-cloud-setup?

So what exactly is nm-cloud-setup?

The Red Hat documentation describes it as follows.

Typically, a virtual machine (VM) has only one interface that is configurable by DHCP. However, some VMs have multiple network interfaces, IP addresses, and IP subnets on one interface that cannot be configured by DHCP. Additionally, administrators can reconfigure the network while the machine is running. The nm-cloud-setup utility automatically retrieves configuration information from the cloud service provider's metadata server and updates the network configuration of VMs in the public cloud.

Chapter 50. Automatically configuring network interfaces in public clouds using nm-cloud-setup Red Hat Enterprise Linux 9 | Red Hat Customer Portal

So it fetches information from the metadata service and updates the VM's network configuration.

The problem is that the way it fetches this metadata does not support IMDSv2.
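For reference (my own illustration, not from the verification below): on the wire, the only difference between the two versions is the session token. A sketch that only works from inside an EC2 instance:

```shell
# IMDSv1: a plain GET with no token -- this is what nm-cloud-setup sends.
curl -s http://169.254.169.254/latest/meta-data/

# IMDSv2: first obtain a session token with PUT, then pass it as a header.
TOKEN=$(curl -s -X PUT "http://169.254.169.254/latest/api/token" \
  -H "X-aws-ec2-metadata-token-ttl-seconds: 21600")
curl -s -H "X-aws-ec2-metadata-token: ${TOKEN}" \
  http://169.254.169.254/latest/meta-data/
```

With HttpTokens set to required, the first request returns 401 Unauthorized, which is exactly what trips up nm-cloud-setup.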

Let's also check the man page.

$ man nm-cloud-setup | col -bfx > nm-cloud-setup.txt
troff: <standard input>:463: warning [p 4, 0.2i]: can't break line

$ cat nm-cloud-setup.txt
NM-CLOUD-SETUP(8)                                              Automatic Network Configuratio                                              NM-CLOUD-SETUP(8)

NAME
       nm-cloud-setup - Overview of Automatic Network Configuration in Cloud

OVERVIEW
       When running a virtual machine in a public cloud environment, it is desirable to automatically configure the network of that VM. In simple setups,
       the VM only has one network interface and the public cloud supports automatic configuration via DHCP, DHCP6 or IPv6 autoconf. However, the virtual
       machine might have multiple network interfaces, or multiple IP addresses and IP subnets on one interface which cannot be configured via DHCP. Also,
       the administrator may reconfigure the network while the machine is running. NetworkManager's nm-cloud-setup is a tool that automatically picks up
       such configuration in cloud environments and updates the network configuration of the host.

       Multiple cloud providers are supported. See the section called “SUPPORTED CLOUD PROVIDERS”.

USE
       The goal of nm-cloud-setup is to be configuration-less and work automatically. All you need is to opt-in to the desired cloud providers (see the
       section called “ENVIRONMENT VARIABLES”) and run /usr/libexec/nm-cloud-setup.

       Usually this is done by enabling the nm-cloud-setup.service systemd service and let it run periodically. For that there is both a
       nm-cloud-setup.timer systemd timer and a NetworkManager dispatcher script.

DETAILS
       nm-cloud-setup configures the network by fetching the configuration from the well-known meta data server of the cloud provider. That means, it
       already needs the network configured to the point where it can reach the meta data server. Commonly that means, that a simple connection profile is
       activated that possibly uses DHCP to get the primary IP address. NetworkManager will create such a profile for ethernet devices automatically if it
       is not configured otherwise via "no-auto-default" setting in NetworkManager.conf. One possible alternative may be to create such an initial profile
       with nmcli device connect "$DEVICE" or nmcli connection add type ethernet ....

       By setting the user-data org.freedesktop.nm-cloud-setup.skip=yes on the profile, nm-cloud-setup will skip the device.

       nm-cloud-setup modifies the run time configuration akin to nmcli device modify. With this approach, the configuration is not persisted and only
       preserved until the device disconnects.

   /usr/libexec/nm-cloud-setup
       The binary /usr/libexec/nm-cloud-setup does most of the work. It supports no command line arguments but can be configured via environment variables.
       See the section called “ENVIRONMENT VARIABLES” for the supported environment variables.

       By default, all cloud providers are disabled unless you opt-in by enabling one or several providers. If cloud providers are enabled, the program
       tries to fetch the host's configuration from a meta data server of the cloud via HTTP. If the configuration could not be fetched, no cloud provider is
       detected and the program quits. If host configuration is obtained, the corresponding cloud provider is successfully detected. Then the network of the
       host will be configured.

       It is intended to re-run nm-cloud-setup every time when the configuration (maybe) changes. The tool is idempotent, so it should be OK to also run it
       more often than necessary. You could run /usr/libexec/nm-cloud-setup directly. However it may be preferable to restart the nm-cloud-setup systemd
       service instead or use the timer or dispatcher script to run it periodically (see below).

   nm-cloud-setup.service systemd unit
       Usually /usr/libexec/nm-cloud-setup is not run directly, but only by systemctl restart nm-cloud-setup.service. This ensures that the tool only runs
       once at any time. It also allows to integrate with the nm-cloud-setup systemd timer, and to enable/disable the service via systemd.

       As you need to set environment variables to configure the nm-cloud-setup binary, you can do so via systemd override files. Try systemctl edit
       nm-cloud-setup.service.

   nm-cloud-setup.timer systemd timer
       /usr/libexec/nm-cloud-setup is intended to run whenever an update is necessary. For example, during boot or when changing the network configuration
       of the virtual machine via the cloud provider.

       One way to do this, is by enabling the nm-cloud-setup.timer systemd timer with systemctl enable --now nm-cloud-setup.timer.

   /usr/lib/NetworkManager/dispatcher.d/90-nm-cloud-setup.sh
       There is also a NetworkManager dispatcher script that will run for example when an interface is activated by NetworkManager. Together with the
       nm-cloud-setup.timer systemd timer this script is to automatically pick up changes to the network.

       The dispatcher script will do nothing, unless the systemd service is enabled. To use the dispatcher script you should therefore run systemctl enable
       nm-cloud-setup.service once.

ENVIRONMENT VARIABLES
       The following environment variables are used to configure /usr/libexec/nm-cloud-setup. You may want to configure them with a drop-in for the systemd
       service. For example by calling systemctl edit nm-cloud-setup.service and configuring [Service] Environment=, as described in systemd.exec(5) manual.

       •   NM_CLOUD_SETUP_LOG: control the logging verbosity. Set it to one of TRACE, DEBUG, INFO, WARN, ERR or OFF. The program will print message on
           stdout and the default level is WARN.

       •   NM_CLOUD_SETUP_AZURE: boolean, whether Microsoft Azure support is enabled. Defaults to no.

       •   NM_CLOUD_SETUP_EC2: boolean, whether Amazon EC2 (AWS) support is enabled. Defaults to no.

       •   NM_CLOUD_SETUP_GCP: boolean, whether Google GCP support is enabled. Defaults to no.

       •   NM_CLOUD_SETUP_ALIYUN: boolean, whether Alibaba Cloud (Aliyun) support is enabled. Defaults to no.

EXAMPLE SETUP FOR CONFIGURING AND PREDEPLOYING NM-CLOUD-SETUP
       As detailed before, nm-cloud-setup needs to be explicitly enabled. As it runs as a systemd service and timer, that basically means to enable and
       configure those. This can be done by dropping the correct files and symlinks to disk.

       The following example enables nm-cloud-setup for Amazon EC2 cloud:

           dnf install -y NetworkManager-cloud-setup

           mkdir -p /etc/systemd/system/nm-cloud-setup.service.d
           cat > /etc/systemd/system/nm-cloud-setup.service.d/10-enable-ec2.conf << EOF
           [Service]
           Environment=NM_CLOUD_SETUP_EC2=yes
           EOF

           # systemctl enable nm-cloud-setup.service
           mkdir -p /etc/systemd/system/NetworkManager.service.wants/
           ln -s /usr/lib/systemd/system/nm-cloud-setup.service /etc/systemd/system/NetworkManager.service.wants/nm-cloud-setup.service

           # systemctl enable nm-cloud-setup.timer
           mkdir -p /etc/systemd/system/timers.target.wants/
            ln -s /usr/lib/systemd/system/nm-cloud-setup.timer /etc/systemd/system/timers.target.wants/nm-cloud-setup.timer

           # systemctl daemon-reload

SUPPORTED CLOUD PROVIDERS
   Amazon EC2 (AWS)
       For AWS, the tool tries to fetch configuration from http://169.254.169.254/. Currently, it only configures IPv4 and does nothing about IPv6. It will
       do the following.

       •   First fetch http://169.254.169.254/latest/meta-data/ to determine whether the expected API is present. This determines whether EC2 environment is
           detected and whether to proceed to configure the host using EC2 meta data.

        •   Fetch http://169.254.169.254/2018-09-24/meta-data/network/interfaces/macs/ to get the list of available interfaces. Interfaces are identified by
           their MAC address.

       •   Then for each interface fetch http://169.254.169.254/2018-09-24/meta-data/network/interfaces/macs/$MAC/subnet-ipv4-cidr-block and
           http://169.254.169.254/2018-09-24/meta-data/network/interfaces/macs/$MAC/local-ipv4s. Thereby we get a list of local IPv4 addresses and one CIDR
           subnet block.

       •   Then nm-cloud-setup iterates over all interfaces for which it could fetch IP configuration. If no ethernet device for the respective MAC address
           is found, it is skipped. Also, if the device is currently not activated in NetworkManager or if the currently activated profile has a user-data
           org.freedesktop.nm-cloud-setup.skip=yes, it is skipped.

           If only one interface and one address is configured, then the tool does nothing and leaves the automatic configuration that was obtained via
           DHCP.

           Otherwise, the tool will change the runtime configuration of the device.

           •   Add static IPv4 addresses for all the configured addresses from local-ipv4s with prefix length according to subnet-ipv4-cidr-block. For
               example, we might have here 2 IP addresses like "172.16.5.3/24,172.16.5.4/24".

           •   Choose a route table 30400 + the index of the interface and add a default route 0.0.0.0/0. The gateway is the first IP address in the CIDR
               subnet block. For example, we might get a route "0.0.0.0/0 172.16.5.1 10 table=30400".

                Also choose a route table 30200 + the interface index. This contains the direct routes to the subnets of this interface.

           •   Finally, add a policy routing rule for each address. For example "priority 30200 from 172.16.5.3/32 table 30200, priority 30200 from
                172.16.5.4/32 table 30200" and "priority 30400 from 172.16.5.3/32 table 30400, priority 30400 from 172.16.5.4/32 table 30400". The 30200+
               rules select the table to reach the subnet directly, while the 30400+ rules use the default route. Also add a rule "priority 30350 table main
               suppress_prefixlength 0". This has a priority between the two previous rules and causes a lookup of routes in the main table while ignoring
               the default route. The purpose of this is so that other specific routes in the main table are honored over the default route in table 30400+.

           With above example, this roughly corresponds for interface eth0 to nmcli device modify "eth0" ipv4.addresses "172.16.5.3/24,172.16.5.4/24"
           ipv4.routes "172.16.5.0/24 0.0.0.0 10 table=30200, 0.0.0.0/0 172.16.5.1 10 table=30400" ipv4.routing-rules "priority 30200 from 172.16.5.3/32
           table 30200, priority 30200 from 172.16.5.4/32 table 30200, priority 20350 table main suppress_prefixlength 0, priority 30400 from 172.16.5.3/32
           table 30400, priority 30400 from 172.16.5.4/32 table 30400". Note that this replaces the previous addresses, routes and rules with the new
           information. But also note that this only changes the run time configuration of the device. The connection profile on disk is not affected.

   Google Cloud Platform (GCP)
       For GCP, the meta data is fetched from URIs starting with http://metadata.google.internal/computeMetadata/v1/ with a HTTP header "Metadata-Flavor:
       Google". Currently, the tool only configures IPv4 and does nothing about IPv6. It will do the following.

       •   First fetch http://metadata.google.internal/computeMetadata/v1/instance/id to detect whether the tool runs on Google Cloud Platform. Only if the
           platform is detected, it will continue fetching the configuration.

       •   Fetch http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/ to get the list of available interface indexes. These
           indexes can be used for further lookups.

       •   Then, for each interface fetch http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/$IFACE_INDEX/mac to get the
           corresponding MAC address of the found interfaces. The MAC address is used to identify the device later on.

       •   Then, for each interface with a MAC address fetch
           http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/$IFACE_INDEX/forwarded-ips/ and then all the found IP addresses at
           http://metadata.google.internal/computeMetadata/v1/instance/network-interfaces/$IFACE_INDEX/forwarded-ips/$FIPS_INDEX.

       •   At this point, we have a list of all interfaces (by MAC address) and their configured IPv4 addresses.

           For each device, we lookup the currently applied connection in NetworkManager. That implies, that the device is currently activated in
           NetworkManager. If no such device was in NetworkManager, or if the profile has user-data org.freedesktop.nm-cloud-setup.skip=yes, we skip the
           device. Now for each found IP address we add a static route "$FIPS_ADDR/32 0.0.0.0 100 type=local" and reapply the change.

           The effect is not unlike calling nmcli device modify "$DEVICE" ipv4.routes "$FIPS_ADDR/32 0.0.0.0 100 type=local [,...]"  for all relevant
           devices and all found addresses.

   Microsoft Azure
       For Azure, the meta data is fetched from URIs starting with http://169.254.169.254/metadata/instance with a URL parameter
       "?format=text&api-version=2017-04-02" and a HTTP header "Metadata:true". Currently, the tool only configures IPv4 and does nothing about IPv6. It
       will do the following.

       •   First fetch http://169.254.169.254/metadata/instance?format=text&api-version=2017-04-02 to detect whether the tool runs on Azure Cloud. Only if
           the platform is detected, it will continue fetching the configuration.

       •   Fetch http://169.254.169.254/metadata/instance/network/interface/?format=text&api-version=2017-04-02 to get the list of available interface
           indexes. These indexes can be used for further lookups.

       •   Then, for each interface fetch
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/macAddress?format=text&api-version=2017-04-02 to get the corresponding
           MAC address of the found interfaces. The MAC address is used to identify the device later on.

       •   Then, for each interface with a MAC address fetch
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/ipv4/ipAddress/?format=text&api-version=2017-04-02 to get the list of
           (indexes of) IP addresses on that interface.

       •   Then, for each IP address index fetch the address at
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/ipv4/ipAddress/$ADDR_INDEX/privateIpAddress?format=text&api-version=2017-04-02.
           Also fetch the size of the subnet and prefix for the interface from
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/ipv4/subnet/0/address/?format=text&api-version=2017-04-02. and
           http://169.254.169.254/metadata/instance/network/interface/$IFACE_INDEX/ipv4/subnet/0/prefix/?format=text&api-version=2017-04-02.

       •   At this point, we have a list of all interfaces (by MAC address) and their configured IPv4 addresses.

           Then the tool configures the system like doing for AWS environment. That is, using source based policy routing with the tables/rules 30200/30400.

   Alibaba Cloud (Aliyun)
       For Aliyun, the tool tries to fetch configuration from http://100.100.100.200/. Currently, it only configures IPv4 and does nothing about IPv6. It
       will do the following.

       •   First fetch http://100.100.100.200/2016-01-01/meta-data/ to determine whether the expected API is present. This determines whether Aliyun
           environment is detected and whether to proceed to configure the host using Aliyun meta data.

        •   Fetch http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/ to get the list of available interfaces. Interfaces are identified by
           their MAC address.

       •   Then for each interface fetch http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/$MAC/vpc-cidr-block,
           http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/$MAC/private-ipv4s,
           http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/$MAC/netmask and
            http://100.100.100.200/2016-01-01/meta-data/network/interfaces/macs/$MAC/gateway. Thereby we get a list of private IPv4 addresses, one CIDR
            subnet block, and the prefix and gateway for the private IPv4 addresses.

       •   Then nm-cloud-setup iterates over all interfaces for which it could fetch IP configuration. If no ethernet device for the respective MAC address
           is found, it is skipped. Also, if the device is currently not activated in NetworkManager or if the currently activated profile has a user-data
            org.freedesktop.nm-cloud-setup.skip=yes, it is skipped. Also, if there is only one interface and one IP address, the tool does nothing.

           Then the tool configures the system like doing for AWS environment. That is, using source based policy routing with the tables/rules 30200/30400.
           One difference to AWS is that the gateway is also fetched via metadata instead of using the first IP address in the subnet.

SEE ALSO
       NetworkManager(8) nmcli(1)

NetworkManager 1.42.2                                                                                                                      NM-CLOUD-SETUP(8)

So it fetches the metadata and changes the configuration in the manner of nmcli device modify. Note that this configuration is not persisted; it is only kept until the device disconnects.

There is also a statement that the program exits if it cannot fetch the metadata. And it seems you can prevent nm-cloud-setup from changing the configuration in the first place by setting org.freedesktop.nm-cloud-setup.skip=yes on the connection profile.
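As a sketch of that opt-out (my assumption: nmcli exposes connection user-data through its `user.` property namespace; "System eth0" is the profile name on this instance):

```shell
# Skip this profile (the man page's org.freedesktop.nm-cloud-setup.skip=yes).
nmcli connection modify "System eth0" user.org.freedesktop.nm-cloud-setup.skip yes

# Or, if the service is not needed at all, stop it entirely.
systemctl disable --now nm-cloud-setup.service nm-cloud-setup.timer
```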

On EC2 instances, the following processing takes place.

  1. Access `http://169.254.169.254/latest/meta-data/` to determine whether the expected API is present, i.e. whether this is an EC2 environment
  2. Access `http://169.254.169.254/2018-09-24/meta-data/network/interfaces/macs/` to get the list of available interfaces
  3. For each interface, access `http://169.254.169.254/2018-09-24/meta-data/network/interfaces/macs/$MAC/subnet-ipv4-cidr-block` and `http://169.254.169.254/2018-09-24/meta-data/network/interfaces/macs/$MAC/local-ipv4s` to get the list of IPv4 addresses and the subnet CIDR
  4. Repeat the following for every interface whose IP configuration could be fetched
    • Skip it if no ethernet device matching the MAC address is found
    • Skip it if the device is not currently activated in NetworkManager, or if the currently activated profile has org.freedesktop.nm-cloud-setup.skip=yes set
    • If only one interface and one IP address are configured, do nothing and leave the automatic configuration obtained via DHCP as is
    • If none of the above applies, perform the following
    • Add static IPv4 addresses for all configured addresses from local-ipv4s, with the prefix length from subnet-ipv4-cidr-block
    • Choose route table 30400 + the interface index and add the default route 0.0.0.0/0 to it (route table 30200 + the interface index holds the direct routes to the interface's subnets)
    • Add a policy routing rule for each address
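The table numbering in step 4 can be sketched as follows (my own illustration; `iface_index` is a hypothetical name):

```shell
# Per the man page: nm-cloud-setup derives its route table numbers from the
# interface index -- 30400+index for the default route, 30200+index for the
# direct subnet routes.
iface_index=1                           # e.g. the second ENI
default_table=$((30400 + iface_index))
subnet_table=$((30200 + iface_index))
echo "table ${default_table} (default route), table ${subnet_table} (subnet routes)"
```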

If the instance is restricted to IMDSv2 only, the first step fails, so nm-cloud-setup presumably performs no processing at all.

Also, since the processing is skipped when only one ENI with a single IP address is configured, restricting to IMDSv2 only should be fine as long as multiple ENIs or multiple IP addresses are not in use.

Verification environment

The verification environment is as follows.

[Diagram: verification environment for enforcing IMDSv2-only on a RHEL EC2 instance with multiple ENIs and IP addresses]

The environment is deployed with AWS CDK. The code used is stored in the following repository.

After deploying, I connect to the EC2 instance, run the IMDS Packet Analyzer, and leave it alone for about four minutes.

$ sudo python3 /aws-imds-packet-analyzer/src/imds_snoop.py
Setting log folder to root RW access only, permission was: 0o755
Logging to /var/log/imds/imds-trace.log
Starting ImdsPacketAnalyzer...
Output format: Info Level:[INFO/ERROR...] IMDS version:[IMDSV1/2?] (pid:[pid]:[process name]:argv:[argv]) -> repeats 3 times for parent process
[WARNING] IMDSv1(!) (pid:1599:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1599:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1599:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/0e:cb:24:2f:a1:f3/subnet-ipv4-cidr-block HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[INFO] IMDSv2 (pid:1319:ssm-agent-worke argv:/usr/bin/ssm-agent-worker) called by -> (pid:923:amazon-ssm-agen argv:/usr/bin/amazon-ssm-agent) -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: PUT /latest/api/token HTTP/1.1, Host: 169.254.169.254, User-Agent: aws-sdk-go/1.44.260 (go1.19.10; linux; amd64), Content-Length: 0, X-Aws-Ec2-Metadata-Token-Ttl-Seconds: 21600, Accept-Encoding: gzip,
Possibly lost 1 samples
[INFO] IMDSv2 (pid:1319:ssm-agent-worke argv:/usr/bin/ssm-agent-worker) called by -> (pid:923:amazon-ssm-agen argv:/usr/bin/amazon-ssm-agent) -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/dynamic/instance-identity/document HTTP/1.1, Host: 169.254.169.254, User-Agent: aws-sdk-go/1.44.260 (go1.19.10;linux; amd64), X-Aws-Ec2-Metadata-Token: AQAEAH5ohA2kh_PLUAa3ZYkUC0M-68qKvnF5aCRlvLJEa3ytaAfmow==, Accept-Encoding: gzip,
Possibly lost 1 samples
[INFO] IMDSv2 (pid:1319:ssm-agent-worke argv:/usr/bin/ssm-agent-worker) called by -> (pid:923:amazon-ssm-agen argv:/usr/bin/amazon-ssm-agent) -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/instance-id HTTP/1.1, Host: 169.254.169.254, User-Agent: aws-sdk-go/1.44.260 (go1.19.10; linux; amd64), X-Aws-Ec2-Metadata-Token: AQAEAH5ohA2kh_PLUAa3ZYkUC0M-68qKvnF5aCRlvLJEa3ytaAfmow==, Accept-Encoding: gzip,
[INFO] IMDSv2 (pid:1319:ssm-agent-worke argv:/usr/bin/ssm-agent-worker) called by -> (pid:923:amazon-ssm-agen argv:/usr/bin/amazon-ssm-agent) -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/placement/availability-zone HTTP/1.1, Host: 169.254.169.254, User-Agent: aws-sdk-go/1.44.260 (go1.19.10; linux; amd64), X-Aws-Ec2-Metadata-Token: AQAEAH5ohA2kh_PLUAa3ZYkUC0M-68qKvnF5aCRlvLJEa3ytaAfmow==, Accept-Encoding: gzip,
[INFO] IMDSv2 (pid:1319:ssm-agent-worke argv:/usr/bin/ssm-agent-worker) called by -> (pid:923:amazon-ssm-agen argv:/usr/bin/amazon-ssm-agent) -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/placement/availability-zone-id HTTP/1.1, Host: 169.254.169.254, User-Agent: aws-sdk-go/1.44.260 (go1.19.10; linux; amd64), X-Aws-Ec2-Metadata-Token: AQAEAH5ohA2kh_PLUAa3ZYkUC0M-68qKvnF5aCRlvLJEa3ytaAfmow==, Accept-Encoding: gzip,

You can see that nm-cloud-setup is communicating over IMDSv1.

Inspecting nm-cloud-setup

Let's check nm-cloud-setup's service, configuration files, and related information.

The service looks like this.

$ systemctl status nm-cloud-setup
○ nm-cloud-setup.service - Automatically configure NetworkManager in cloud
     Loaded: loaded (/usr/lib/systemd/system/nm-cloud-setup.service; enabled; preset: disabled)
    Drop-In: /usr/lib/systemd/system/nm-cloud-setup.service.d
             └─10-rh-enable-for-ec2.conf
     Active: inactive (dead) since Fri 2023-08-18 22:57:31 UTC; 4min 25s ago
TriggeredBy: ● nm-cloud-setup.timer
       Docs: man:nm-cloud-setup(8)
    Process: 683 ExecStart=/usr/libexec/nm-cloud-setup (code=exited, status=0/SUCCESS)
   Main PID: 683 (code=exited, status=0/SUCCESS)
        CPU: 99ms

Aug 18 22:57:31 ip-10-10-10-10.ec2.internal systemd[1]: Starting Automatically configure NetworkManager in cloud...
Aug 18 22:57:31 ip-10-10-10-10.ec2.internal systemd[1]: nm-cloud-setup.service: Deactivated successfully.
Aug 18 22:57:31 ip-10-10-10-10.ec2.internal systemd[1]: Finished Automatically configure NetworkManager in cloud.

The service's unit file is as follows.

$ cat /usr/lib/systemd/system/nm-cloud-setup.service
[Unit]
Description=Automatically configure NetworkManager in cloud
Documentation=man:nm-cloud-setup(8)
After=NetworkManager.service

[Service]
Type=oneshot
ExecStart=/usr/libexec/nm-cloud-setup

#Environment=NM_CLOUD_SETUP_LOG=TRACE

# Cloud providers are disabled by default. You need to
# Opt-in by setting the right environment variable for
# the provider.
#
# Create a drop-in file to overwrite these variables or
# use systemctl edit.
#Environment=NM_CLOUD_SETUP_EC2=yes
#Environment=NM_CLOUD_SETUP_GCP=yes
#Environment=NM_CLOUD_SETUP_AZURE=yes
#Environment=NM_CLOUD_SETUP_ALIYUN=yes

CapabilityBoundingSet=
LockPersonality=yes
MemoryDenyWriteExecute=yes
NoNewPrivileges=yes
PrivateDevices=yes
PrivateTmp=yes
ProtectControlGroups=yes
ProtectHome=yes
ProtectHostname=yes
ProtectKernelLogs=yes
ProtectKernelModules=yes
ProtectKernelTunables=yes
ProtectSystem=strict
RestrictAddressFamilies=AF_UNIX AF_NETLINK AF_INET AF_INET6
RestrictNamespaces=yes
RestrictRealtime=yes
RestrictSUIDSGID=yes
SystemCallFilter=@system-service

[Install]
WantedBy=NetworkManager.service

$ cat /usr/lib/systemd/system/nm-cloud-setup.service.d/10-rh-enable-for-ec2.conf
[Service]
Environment=NM_CLOUD_SETUP_EC2=yes

So various parameters can be configured here.
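For example, a drop-in that turns on verbose logging with the NM_CLOUD_SETUP_LOG variable from the man page might look like this (the file name is hypothetical; create it via systemctl edit nm-cloud-setup.service):

```ini
# /etc/systemd/system/nm-cloud-setup.service.d/20-trace-log.conf (hypothetical)
[Service]
Environment=NM_CLOUD_SETUP_LOG=TRACE
```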

Default device information

Next, let's check the default device information.

$ nmcli connection show
NAME         UUID                                  TYPE      DEVICE
System eth0  5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  ethernet  eth0
lo           8032a1af-6a34-4366-92c2-28cda9cd63b7  loopback  lo

$ nmcli device status
DEVICE  TYPE      STATE                   CONNECTION
eth0    ethernet  connected               System eth0
lo      loopback  connected (externally)  lo

$ nmcli
eth0: connected to System eth0
        "Amazon.com Elastic"
        ethernet (ena), 0E:CB:24:2F:A1:F3, hw, mtu 9001
        ip4 default
        inet4 10.10.10.10/27
        route4 10.10.10.0/27 metric 100
        route4 default via 10.10.10.1 metric 100
        inet6 fe80::ccb:24ff:fe2f:a1f3/64
        route6 fe80::/64 metric 256

lo: connected (externally) to lo
        "lo"
        loopback (unknown), 00:00:00:00:00:00, sw, mtu 65536
        inet4 127.0.0.1/8
        inet6 ::1/128
        route6 ::1/128 metric 256

DNS configuration:
        servers: 10.10.10.2
        domains: ec2.internal
        interface: eth0

Use "nmcli device show" to get complete information about known devices and
"nmcli connection show" to get an overview on active connection profiles.

Consult nmcli(1) and nmcli-examples(7) manual pages for complete usage details.

$ nmcli device show
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         0E:CB:24:2F:A1:F3
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System eth0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.10.10.10/27
IP4.GATEWAY:                            10.10.10.1
IP4.ROUTE[1]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 100
IP4.DNS[1]:                             10.10.10.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::ccb:24ff:fe2f:a1f3/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 256

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100 (connected (externally))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ::1/128, nh = ::, mt = 256

Nothing particularly interesting here.

Let's also check the policy-based routing rules and routes.

$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

$ ip route show table main
default via 10.10.10.1 dev eth0 proto dhcp src 10.10.10.10 metric 100
10.10.10.0/27 dev eth0 proto kernel scope link src 10.10.10.10 metric 100

$ ip route show table local
local 10.10.10.10 dev eth0 proto kernel scope host src 10.10.10.10
broadcast 10.10.10.31 dev eth0 proto kernel scope link src 10.10.10.10
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

Nothing worth calling out here either.

Device information after adding an ENI

IMDSv1 and IMDSv2

Now let's check what the device information looks like after adding an ENI.

First, the case where both IMDSv1 and IMDSv2 are enabled.

$ aws ec2 describe-instances \
    --instance-id i-087cfc06bf0259dd5 \
    --query "Reservations[].Instances[].MetadataOptions"
[
    {
        "State": "applied",
        "HttpTokens": "optional",
        "HttpPutResponseHopLimit": 1,
        "HttpEndpoint": "enabled",
        "HttpProtocolIpv6": "disabled",
        "InstanceMetadataTags": "disabled"
    }
]
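HttpTokens is still optional here, i.e. IMDSv1 is allowed. As a side note (not part of this verification): when the time comes to enforce IMDSv2, the same metadata options can be switched with the following command.

```shell
# Require IMDSv2 on the instance (HttpTokens: required).
aws ec2 modify-instance-metadata-options \
    --instance-id i-087cfc06bf0259dd5 \
    --http-tokens required
```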

I create an ENI in a subnet different from the EC2 instance's.

[Screenshot: Create network interface | EC2 Management Console]

I attach the created ENI to the EC2 instance.

[Screenshot: Attach network interface | EC2 Management Console]

I associate an Elastic IP address with the created ENI.

[Screenshot: Associate Elastic IP address | EC2 Management Console]

Checking the IMDS Packet Analyzer at this point, IMDSv1 traffic occurred right at the moment the ENI was attached.

$ sudo python3 /aws-imds-packet-analyzer/src/imds_snoop.py
Logging to /var/log/imds/imds-trace.log
Starting ImdsPacketAnalyzer...
Output format: Info Level:[INFO/ERROR...] IMDS version:[IMDSV1/2?] (pid:[pid]:[process name]:argv:[argv]) -> repeats 3 times for parent process
[WARNING] IMDSv1(!) (pid:1685:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1685:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1685:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/0e:cb:24:2f:a1:f3/subnet-ipv4-cidr-block HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1685:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/0e:cb:24:2f:a1:f3/local-ipv4s HTTP/1.1, Host: 169.254.169.254, Accept: */*,
Possibly lost 2 samples
[WARNING] IMDSv1(!) (pid:1715:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1715:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1715:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/0e:02:a6:fb:cb:ab/subnet-ipv4-cidr-block HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1738:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
Possibly lost 3 samples
[WARNING] IMDSv1(!) (pid:1738:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,

Let's also check the device information.

$ nmcli connection show
NAME                UUID                                  TYPE      DEVICE
System eth0         5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03  ethernet  eth0
lo                  8032a1af-6a34-4366-92c2-28cda9cd63b7  loopback  lo
Wired connection 1  b330912a-d05c-3e82-ba5a-f39ac6128784  ethernet  eth1

$ nmcli device status
DEVICE  TYPE      STATE                   CONNECTION
eth0    ethernet  connected               System eth0
eth1    ethernet  connected               Wired connection 1
lo      loopback  connected (externally)  lo

$ nmcli device show
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         0E:CB:24:2F:A1:F3
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System eth0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.10.10.10/27
IP4.GATEWAY:                            10.10.10.1
IP4.ROUTE[1]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 100
IP4.ROUTE[3]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 10, table=30201
IP4.ROUTE[4]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 10, table=30401
IP4.DNS[1]:                             10.10.10.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::ccb:24ff:fe2f:a1f3/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 256

GENERAL.DEVICE:                         eth1
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         0E:02:A6:FB:CB:AB
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     Wired connection 1
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/3
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.10.10.50/27
IP4.GATEWAY:                            10.10.10.33
IP4.ROUTE[1]:                           dst = 10.10.10.32/27, nh = 0.0.0.0, mt = 10, table=30200
IP4.ROUTE[2]:                           dst = 10.10.10.32/27, nh = 0.0.0.0, mt = 101
IP4.ROUTE[3]:                           dst = 0.0.0.0/0, nh = 10.10.10.33, mt = 10, table=30400
IP4.ROUTE[4]:                           dst = 0.0.0.0/0, nh = 10.10.10.33, mt = 101
IP4.DNS[1]:                             10.10.10.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::ec59:9c5f:16e9:f50d/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 1024

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100 (connected (externally))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ::1/128, nh = ::, mt = 256

eth1 has been added. Routes have also been configured in route tables 30200, 30201, 30400, and 30401.

Let's check the rules.

$ ip rule show
0:      from all lookup local
30200:  from 10.10.10.50 lookup 30200 proto static
30201:  from 10.10.10.10 lookup 30201 proto static
30350:  from all lookup main suppress_prefixlength 0 proto static
30400:  from 10.10.10.50 lookup 30400 proto static
30401:  from 10.10.10.10 lookup 30401 proto static
32766:  from all lookup main
32767:  from all lookup default

$ ip route show table main
default via 10.10.10.1 dev eth0 proto dhcp src 10.10.10.10 metric 100
default via 10.10.10.33 dev eth1 proto dhcp src 10.10.10.50 metric 101
10.10.10.0/27 dev eth0 proto kernel scope link src 10.10.10.10 metric 100
10.10.10.32/27 dev eth1 proto kernel scope link src 10.10.10.50 metric 101

$ ip route show table local
local 10.10.10.10 dev eth0 proto kernel scope host src 10.10.10.10
broadcast 10.10.10.31 dev eth0 proto kernel scope link src 10.10.10.10
local 10.10.10.50 dev eth1 proto kernel scope host src 10.10.10.50
broadcast 10.10.10.63 dev eth1 proto kernel scope link src 10.10.10.50
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

$ ip route show table 30200
10.10.10.32/27 dev eth1 proto static scope link metric 10

$ ip route show table 30201
10.10.10.0/27 dev eth0 proto static scope link metric 10

$ ip route show table 30400
default via 10.10.10.33 dev eth1 proto static metric 10

$ ip route show table 30401
default via 10.10.10.1 dev eth0 proto static metric 10

We can see that the routing information described in the nm-cloud-setup manual has been configured.

Let's verify that we can communicate from each interface.

$ ping -c 1 -I eth0 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 10.10.10.10 eth0: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=107 time=1.25 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.250/1.250/1.250/0.000 ms

$ ping -c 1 -I eth1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 10.10.10.50 eth1: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=50 time=1.30 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.301/1.301/1.301/0.000 ms

Both can communicate.

Let's also verify that each interface is reachable from outside.

# Reachability check to eth0's public IP address
$ ping -c 1 3.84.174.175
PING 3.84.174.175 (3.84.174.175): 56 data bytes
64 bytes from 3.84.174.175: icmp_seq=0 ttl=46 time=187.033 ms

--- 3.84.174.175 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 187.033/187.033/187.033/nan ms

# Reachability check to eth1's public IP address
$ ping -c 1 35.172.124.44
PING 35.172.124.44 (35.172.124.44): 56 data bytes
64 bytes from 35.172.124.44: icmp_seq=0 ttl=51 time=186.436 ms

--- 35.172.124.44 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 186.436/186.436/186.436/nan ms

Both were reachable.

Having confirmed what we wanted, let's detach the ENI.

The IMDS packet analyzer displayed nothing in particular during the detach.

$ sudo python3 aws-imds-packet-analyzer/src/imds_snoop.py
Logging to /var/log/imds/imds-trace.log
Starting ImdsPacketAnalyzer...
Output format: Info Level:[INFO/ERROR...] IMDS version:[IMDSV1/2?] (pid:[pid]:[process name]:argv:[argv]) -> repeats 3 times for parent process

Check the device information.

$ nmcli device show
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         0E:CB:24:2F:A1:F3
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System eth0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.10.10.10/27
IP4.GATEWAY:                            10.10.10.1
IP4.ROUTE[1]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 100
IP4.ROUTE[3]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 10, table=30201
IP4.ROUTE[4]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 10, table=30401
IP4.DNS[1]:                             10.10.10.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::ccb:24ff:fe2f:a1f3/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 256

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100 (connected (externally))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ::1/128, nh = ::, mt = 256

$ ip rule show
0:      from all lookup local
30201:  from 10.10.10.10 lookup 30201 proto static
30350:  from all lookup main suppress_prefixlength 0 proto static
30401:  from 10.10.10.10 lookup 30401 proto static
32766:  from all lookup main
32767:  from all lookup default

The route information created by nm-cloud-setup still remains.
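If you want to clear these stale entries without waiting for a reboot, deleting the rules by the priorities shown above should work. A sketch (requires root; the priorities match this verification environment):

```shell
# Delete the leftover nm-cloud-setup policy rules by priority
sudo ip rule del priority 30201
sudo ip rule del priority 30350
sudo ip rule del priority 30401
```

Deleting a rule does not delete the routes in the referenced tables, but tables that no rule points at are no longer consulted, so leaving them is harmless.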

IMDSv2 only

Next, let's perform the same operations with only IMDSv2 enabled.

$ aws ec2 modify-instance-metadata-options \
    --instance-id i-087cfc06bf0259dd5 \
    --http-tokens required
{
    "InstanceId": "i-087cfc06bf0259dd5",
    "InstanceMetadataOptions": {
        "State": "pending",
        "HttpTokens": "required",
        "HttpPutResponseHopLimit": 1,
        "HttpEndpoint": "enabled",
        "HttpProtocolIpv6": "disabled",
        "InstanceMetadataTags": "disabled"
    }
}

After restricting the instance to IMDSv2 only, attach the ENI.

When the ENI was attached, the IMDS packet analyzer log recorded the following IMDSv1 traffic.

$ sudo python3 /aws-imds-packet-analyzer/src/imds_snoop.py
Logging to /var/log/imds/imds-trace.log
Starting ImdsPacketAnalyzer...
Output format: Info Level:[INFO/ERROR...] IMDS version:[IMDSV1/2?] (pid:[pid]:[process name]:argv:[argv]) -> repeats 3 times for parent process
[WARNING] IMDSv1(!) (pid:2615:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:2615:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:2615:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:2637:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:2637:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:2637:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:2637:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:2637:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:2637:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:2637:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,

Check the device information.

$ nmcli device show
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         0E:CB:24:2F:A1:F3
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System eth0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.10.10.10/27
IP4.GATEWAY:                            10.10.10.1
IP4.ROUTE[1]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 100
IP4.ROUTE[3]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 10, table=30201
IP4.ROUTE[4]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 10, table=30401
IP4.DNS[1]:                             10.10.10.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::ccb:24ff:fe2f:a1f3/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 256

GENERAL.DEVICE:                         eth1
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         0E:02:A6:FB:CB:AB
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     Wired connection 1
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/6
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.10.10.50/27
IP4.GATEWAY:                            10.10.10.33
IP4.ROUTE[1]:                           dst = 10.10.10.32/27, nh = 0.0.0.0, mt = 101
IP4.ROUTE[2]:                           dst = 0.0.0.0/0, nh = 10.10.10.33, mt = 101
IP4.DNS[1]:                             10.10.10.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::a6c:3e67:e755:b3ce/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 1024

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100 (connected (externally))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ::1/128, nh = ::, mt = 256

$ ip rule show
0:      from all lookup local
30201:  from 10.10.10.10 lookup 30201 proto static
30350:  from all lookup main suppress_prefixlength 0 proto static
30401:  from 10.10.10.10 lookup 30401 proto static
32766:  from all lookup main
32767:  from all lookup default

$ ip route show table main
default via 10.10.10.1 dev eth0 proto dhcp src 10.10.10.10 metric 100
default via 10.10.10.33 dev eth1 proto dhcp src 10.10.10.50 metric 101
10.10.10.0/27 dev eth0 proto kernel scope link src 10.10.10.10 metric 100
10.10.10.32/27 dev eth1 proto kernel scope link src 10.10.10.50 metric 101

$ ip route show table local
local 10.10.10.10 dev eth0 proto kernel scope host src 10.10.10.10
broadcast 10.10.10.31 dev eth0 proto kernel scope link src 10.10.10.10
local 10.10.10.50 dev eth1 proto kernel scope host src 10.10.10.50
broadcast 10.10.10.63 dev eth1 proto kernel scope link src 10.10.10.50
local 127.0.0.0/8 dev lo proto kernel scope host src 127.0.0.1
local 127.0.0.1 dev lo proto kernel scope host src 127.0.0.1
broadcast 127.255.255.255 dev lo proto kernel scope link src 127.0.0.1

eth1 has been added, but route tables such as 30200 and 30400 have not been created for eth1.

Verify communication from each interface.

$ ping -c 1 -I eth0 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 10.10.10.10 eth0: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=107 time=1.22 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.219/1.219/1.219/0.000 ms

$ ping -c 1 -I eth1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 10.10.10.50 eth1: 56(84) bytes of data.

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 0 received, 100% packet loss, time 0ms

eth1 can no longer communicate.

Let's also check reachability to each interface.

# Reachability check to eth0's public IP address
$ ping -c 1 3.84.174.175
PING 3.84.174.175 (3.84.174.175): 56 data bytes
64 bytes from 3.84.174.175: icmp_seq=0 ttl=46 time=182.889 ms

--- 3.84.174.175 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 182.889/182.889/182.889/0.000 ms

# Reachability check to eth1's public IP address
$ ping -c 1 35.172.124.44
PING 35.172.124.44 (35.172.124.44): 56 data bytes

--- 35.172.124.44 ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

eth1 is no longer reachable.

This behavior shows that an instance with multiple ENIs cannot benefit from nm-cloud-setup under these conditions. If you want to disable IMDSv1 but still attach multiple ENIs, you will likely have to configure policy-based routing by hand.
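As a rough sketch of that manual policy-based routing, the rules and tables nm-cloud-setup created earlier for eth1 could be reproduced like this (the addresses, interface names, and table numbers are taken from this verification environment; the commands are not persistent across reboots):

```shell
# Table 30200: the eth1 subnet is reached on-link
sudo ip route add 10.10.10.32/27 dev eth1 metric 10 table 30200
# Table 30400: default route via the eth1 subnet's gateway
sudo ip route add default via 10.10.10.33 dev eth1 metric 10 table 30400

# Source-based rules: traffic sourced from eth1's address uses those tables
sudo ip rule add from 10.10.10.50 lookup 30200 priority 30200
sudo ip rule add from 10.10.10.50 lookup 30400 priority 30400
# Prefer more specific main-table routes before falling through to 30400
sudo ip rule add from all lookup main suppress_prefixlength 0 priority 30350
```

For a permanent setup, the same routes and rules would need to go into the connection profile (e.g. via nmcli) rather than ad-hoc ip commands.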

Let's also check the device state after detaching the ENI and rebooting.

$ nmcli device show
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         0E:CB:24:2F:A1:F3
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System eth0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.10.10.10/27
IP4.GATEWAY:                            10.10.10.1
IP4.ROUTE[1]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 100
IP4.DNS[1]:                             10.10.10.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::ccb:24ff:fe2f:a1f3/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 256

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100 (connected (externally))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ::1/128, nh = ::, mt = 256

$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

The leftover routes in tables 30201 and 30401 have been removed.

Device information when a secondary IP address is added to the same ENI

IMDSv1 and IMDSv2

Next, let's check the device information when a secondary IP address is added to the same ENI.

First, revert the instance so that IMDSv2 is optional again.

$ aws ec2 modify-instance-metadata-options \
    --instance-id i-087cfc06bf0259dd5 \
    --http-tokens optional
{
    "InstanceId": "i-087cfc06bf0259dd5",
    "InstanceMetadataOptions": {
        "State": "pending",
        "HttpTokens": "optional",
        "HttpPutResponseHopLimit": 1,
        "HttpEndpoint": "enabled",
        "HttpProtocolIpv6": "disabled",
        "InstanceMetadataTags": "disabled"
    }
}

Assign a secondary IP address.

Manage_IP_addresses___EC2_Management_Console

After adding the secondary IP address, associate an Elastic IP address with it.

Allow this Elastic IP address to be reassociated

Checking the IMDS packet analyzer log, nothing had been recorded, and the device information had not been updated either.

$ sudo python3 /aws-imds-packet-analyzer/src/imds_snoop.py
Logging to /var/log/imds/imds-trace.log
Starting ImdsPacketAnalyzer...
Output format: Info Level:[INFO/ERROR...] IMDS version:[IMDSV1/2?] (pid:[pid]:[process name]:argv:[argv]) -> repeats 3 times for parent process

Restart NetworkManager so that it fetches the IP addresses via DHCP again.

$ sudo systemctl restart NetworkManager.service

Then the IMDS packet analyzer log recorded IMDSv1 traffic.

[WARNING] IMDSv1(!) (pid:1600:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1600:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
Possibly lost 1 samples
[WARNING] IMDSv1(!) (pid:1600:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/0e:cb:24:2f:a1:f3/local-ipv4s HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1647:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /latest/meta-data/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1647:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/ HTTP/1.1, Host: 169.254.169.254, Accept: */*,
[WARNING] IMDSv1(!) (pid:1647:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/0e:cb:24:2f:a1:f3/subnet-ipv4-cidr-block HTTP/1.1, Host: 169.254.169.254, Accept: */*,

Check the device information.

$ nmcli device show
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         0E:CB:24:2F:A1:F3
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System eth0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.10.10.19/27
IP4.ADDRESS[2]:                         10.10.10.10/27
IP4.GATEWAY:                            10.10.10.1
IP4.ROUTE[1]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 100
IP4.ROUTE[2]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 100
IP4.ROUTE[3]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 10, table=30200
IP4.ROUTE[4]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 100
IP4.ROUTE[5]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 10, table=30400
IP4.DNS[1]:                             10.10.10.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::ccb:24ff:fe2f:a1f3/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 256

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100 (connected (externally))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ::1/128, nh = ::, mt = 256

$ ip rule show
0:      from all lookup local
30200:  from 10.10.10.10 lookup 30200 proto static
30200:  from 10.10.10.19 lookup 30200 proto static
30350:  from all lookup main suppress_prefixlength 0 proto static
30400:  from 10.10.10.10 lookup 30400 proto static
30400:  from 10.10.10.19 lookup 30400 proto static
32766:  from all lookup main
32767:  from all lookup default

Rules 30200, 30350, and 30400 have been added.

Verify that we can communicate from each IP address.

$ ping -c 1 -I 10.10.10.10 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 10.10.10.10 : 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=108 time=1.44 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.439/1.439/1.439/0.000 ms
[ec2-user@ip-10-10-10-10 ~]$
[ec2-user@ip-10-10-10-10 ~]$ ping -c 1 -I 10.10.10.19 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 10.10.10.19 : 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=46 time=1.52 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.519/1.519/1.519/0.000 ms

Both can communicate.

Let's also check reachability to each IP address.

# Reachability check to the public IP of the primary IP address
$ ping -c 1 35.168.16.54
PING 35.168.16.54 (35.168.16.54): 56 data bytes
64 bytes from 35.168.16.54: icmp_seq=0 ttl=46 time=183.928 ms

--- 35.168.16.54 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 183.928/183.928/183.928/0.000 ms

# Reachability check to the public IP of the secondary IP address
$ ping -c 1 35.172.124.44
PING 35.172.124.44 (35.172.124.44): 56 data bytes
64 bytes from 35.172.124.44: icmp_seq=0 ttl=52 time=177.948 ms

--- 35.172.124.44 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 177.948/177.948/177.948/0.000 ms

Both were reachable.

For now, remove the secondary IP address.

IMDSv2 only

Next, restrict the instance to IMDSv2 only and assign a secondary IP address again.

$ sudo systemctl restart NetworkManager.service

$ nmcli device show
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         0E:CB:24:2F:A1:F3
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System eth0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         10.10.10.10/27
IP4.GATEWAY:                            10.10.10.1
IP4.ROUTE[1]:                           dst = 0.0.0.0/0, nh = 10.10.10.1, mt = 100
IP4.ROUTE[2]:                           dst = 10.10.10.0/27, nh = 0.0.0.0, mt = 100
IP4.DNS[1]:                             10.10.10.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::ccb:24ff:fe2f:a1f3/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 256

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100 (connected (externally))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ::1/128, nh = ::, mt = 256

$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

However, no matter how long we wait, the secondary IP address never shows up in the OS.

Checking from the AWS CLI, the secondary IP address certainly appears to be assigned.

$ aws ec2 describe-instances \
    --instance-ids i-087cfc06bf0259dd5 \
    --query 'Reservations[].Instances[].NetworkInterfaces[].PrivateIpAddresses[].[Association.PublicIp, PrivateIpAddress]'
[
    [
        "35.168.16.54",
        "10.10.10.10"
    ],
    [
        "35.172.124.44",
        "10.10.10.12"
    ]
]

It is bound to fail, but let's verify communication from each IP address just in case.

$ ping -c 1 -I 10.10.10.10 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 10.10.10.10 : 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=108 time=1.42 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.423/1.423/1.423/0.000 ms
[ec2-user@ip-10-10-10-10 ~]$ ping -c 1 -I 10.10.10.12 8.8.8.8
ping: bind: Cannot assign requested address

Since the OS does not recognize the secondary IP address in the first place, the communication failed.

Let's also check reachability to each IP address.

# Reachability check to the public IP of the primary IP address
$ ping -c 1 35.168.16.54
PING 35.168.16.54 (35.168.16.54): 56 data bytes
64 bytes from 35.168.16.54: icmp_seq=0 ttl=46 time=186.580 ms

--- 35.168.16.54 ping statistics ---
1 packets transmitted, 1 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 186.580/186.580/186.580/nan ms

# Connectivity check to the secondary IP address's public IP
$ ping -c 1 35.172.124.44
PING 35.172.124.44 (35.172.124.44): 56 data bytes

--- 35.172.124.44 ping statistics ---
1 packets transmitted, 0 packets received, 100.0% packet loss

Now only the traffic to the secondary IP address's public IP fails.

From the above, when an instance is restricted to IMDSv2 only, nm-cloud-setup cannot do its work, so additional IP addresses, although assigned on the AWS side, cannot actually be used from the OS.
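If you want to know up front which mode an instance is in, the metadata options can be queried from the AWS CLI. This is a hypothetical check (the instance ID below is a placeholder):

```shell
# "required" means the instance is IMDSv2-only;
# "optional" means IMDSv1 requests are still accepted.
aws ec2 describe-instances \
    --instance-ids i-xxxxxxxxxxxxxxxxx \
    --query 'Reservations[].Instances[].MetadataOptions.HttpTokens' \
    --output text
```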

2023/12/8 addendum: Checking whether nm-cloud-setup really supports IMDSv2 now

As added at the top of this article, nm-cloud-setup appears to have gained IMDSv2 support around mid-November 2023.

Let's verify that it really does support IMDSv2.

The verification uses the following RHEL 9.2 EC2 instance.

  • AMI name : RHEL-9.2.0_HVM-20230503-x86_64-41-Hourly2-GP2
  • AMI ID : ami-026ebd4cfe2c043b2

First, update NetworkManager so that nm-cloud-setup can handle IMDSv2.

$ sudo dnf info NetworkManager.x86_64
Last metadata expiration check: 21:24:17 ago on Thu 07 Dec 2023 04:22:10 AM UTC.
Installed Packages
Name         : NetworkManager
Epoch        : 1
Version      : 1.42.2
Release      : 1.el9
Architecture : x86_64
Size         : 6.0 M
Source       : NetworkManager-1.42.2-1.el9.src.rpm
Repository   : @System
Summary      : Network connection manager and user applications
URL          : https://networkmanager.dev/
License      : GPLv2+ and LGPLv2+
Description  : NetworkManager is a system service that manages network interfaces and
             : connections based on user or automatic configuration. It supports
             : Ethernet, Bridge, Bond, VLAN, Team, InfiniBand, Wi-Fi, mobile broadband
             : (WWAN), PPPoE and other devices, and supports a variety of different VPN
             : services.

Available Packages
Name         : NetworkManager
Epoch        : 1
Version      : 1.44.0
Release      : 3.el9
Architecture : x86_64
Size         : 2.3 M
Source       : NetworkManager-1.44.0-3.el9.src.rpm
Repository   : rhel-9-baseos-rhui-rpms
Summary      : Network connection manager and user applications
URL          : https://networkmanager.dev/
License      : GPLv2+ and LGPLv2+
Description  : NetworkManager is a system service that manages network interfaces and
             : connections based on user or automatic configuration. It supports
             : Ethernet, Bridge, Bond, VLAN, Team, InfiniBand, Wi-Fi, mobile broadband
             : (WWAN), PPPoE and other devices, and supports a variety of different VPN
             : services.

$ sudo dnf install NetworkManager.x86_64
Last metadata expiration check: 21:24:21 ago on Thu 07 Dec 2023 04:22:10 AM UTC.
Package NetworkManager-1:1.42.2-1.el9.x86_64 is already installed.
Dependencies resolved.
================================================================================================================
 Package                           Architecture  Version                Repository                         Size
================================================================================================================
Upgrading:
 NetworkManager                    x86_64        1:1.44.0-3.el9         rhel-9-baseos-rhui-rpms           2.3 M
 NetworkManager-cloud-setup        x86_64        1:1.44.0-3.el9         rhel-9-appstream-rhui-rpms         77 k
 NetworkManager-libnm              x86_64        1:1.44.0-3.el9         rhel-9-baseos-rhui-rpms           1.8 M
 NetworkManager-team               x86_64        1:1.44.0-3.el9         rhel-9-baseos-rhui-rpms            43 k
 NetworkManager-tui                x86_64        1:1.44.0-3.el9         rhel-9-baseos-rhui-rpms           249 k

Transaction Summary
================================================================================================================
Upgrade  5 Packages

Total download size: 4.4 M
Is this ok [y/N]: y
Downloading Packages:
(1/5): NetworkManager-cloud-setup-1.44.0-3.el9.x86_64.rpm                       1.7 MB/s |  77 kB     00:00
(2/5): NetworkManager-team-1.44.0-3.el9.x86_64.rpm                              3.5 MB/s |  43 kB     00:00
(3/5): NetworkManager-libnm-1.44.0-3.el9.x86_64.rpm                              22 MB/s | 1.8 MB     00:00
(4/5): NetworkManager-tui-1.44.0-3.el9.x86_64.rpm                               7.6 MB/s | 249 kB     00:00
(5/5): NetworkManager-1.44.0-3.el9.x86_64.rpm                                    20 MB/s | 2.3 MB     00:00
----------------------------------------------------------------------------------------------------------------
Total                                                                            22 MB/s | 4.4 MB     00:00
Running transaction check
Transaction check succeeded.
.
.
(output omitted)
.
.
Upgraded:
  NetworkManager-1:1.44.0-3.el9.x86_64                NetworkManager-cloud-setup-1:1.44.0-3.el9.x86_64
  NetworkManager-libnm-1:1.44.0-3.el9.x86_64          NetworkManager-team-1:1.44.0-3.el9.x86_64
  NetworkManager-tui-1:1.44.0-3.el9.x86_64

Complete!

NetworkManager-1:1.44.0-3.el9.x86_64 is now installed. According to the Bugzilla entry, the fix landed in NetworkManager-1.43.3-1.el9, so this version should be fine.
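As a quick sanity check, the installed version can be compared against that threshold with a version-aware sort. The 1.43.3 cutoff comes from the Bugzilla entry; the `installed` value below is hard-coded for illustration and would normally come from `rpm -q`:

```shell
# Compare the installed NetworkManager version against 1.43.3, the first
# build with IMDSv2 support according to the Bugzilla entry.
required="1.43.3"
installed="1.44.0"   # e.g. installed=$(rpm -q --qf '%{VERSION}' NetworkManager)

# sort -V orders version strings numerically; if "required" sorts first
# (or equal), the installed version is new enough.
lowest=$(printf '%s\n%s\n' "$required" "$installed" | sort -V | head -n1)
if [ "$lowest" = "$required" ]; then
  echo "nm-cloud-setup should support IMDSv2"
fi
```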

Before making any changes, check the default device and routing information.

$ nmcli device show
Warning: nmcli (1.44.0) and NetworkManager (1.42.2) versions don't match. Restarting NetworkManager is advised.
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         12:F3:65:29:13:23
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System eth0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         172.31.86.227/20
IP4.GATEWAY:                            172.31.80.1
IP4.ROUTE[1]:                           dst = 172.31.80.0/20, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]:                           dst = 0.0.0.0/0, nh = 172.31.80.1, mt = 100
IP4.DNS[1]:                             172.31.0.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::10f3:65ff:fe29:1323/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 256

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100 (connected (externally))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ::1/128, nh = ::, mt = 256

$ ip rule show
0:      from all lookup local
32766:  from all lookup main
32767:  from all lookup default

Start the IMDS packet analyzer.

With the analyzer running, attach an additional ENI to the EC2 instance.

The IMDS packet analyzer then logged that nm-cloud-setup communicated over IMDSv2. It does indeed appear to support IMDSv2 now.

$ sudo python3 aws-imds-packet-analyzer/src/imds_snoop.py
Logging to /var/log/imds/imds-trace.log
cannot attach kprobe, probe entry may not exist
Starting ImdsPacketAnalyzer...
Output format: Info Level:[INFO/ERROR...] IMDS version:[IMDSV1/2?] (pid:[pid]:[process name]:argv:[argv]) -> repeats 3 times for parent process
[INFO] IMDSv2 (pid:34702:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: PUT /latest/api/token HTTP/1.1, Host: 169.254.169.254, Accept: */*, X-aws-ec2-metadata-token-ttl-seconds: 180,
[INFO] IMDSv2 (pid:34702:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/ HTTP/1.1, Host: 169.254.169.254, Accept: */*, X-aws-ec2-metadata-token: AQAEANaFJkn8iZ_qC8BXbRnPLXblZ46YpsW6QLJp2bO47Oe7xSsdIQ==,
[INFO] IMDSv2 (pid:34702:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/12:86:0f:8c:8a:9b/subnet-ipv4-cidr-block HTTP/1.1, Host: 169.254.169.254, Accept: */*, X-aws-ec2-metadata-token: AQAEANaFJkn8iZ_qC8BXbRnPLXblZ46YpsW6QLJp2bO47Oe7xSsdIQ==,
[INFO] IMDSv2 (pid:34733:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: PUT /latest/api/token HTTP/1.1, Host: 169.254.169.254, Accept: */*, X-aws-ec2-metadata-token-ttl-seconds: 180,
[INFO] IMDSv2 (pid:34733:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/ HTTP/1.1, Host: 169.254.169.254, Accept: */*, X-aws-ec2-metadata-token: AQAEANaFJkm758HVQO7EnxSfan6PNh8U-MWyCO5Lh_Q_MOV3a4e6WA==,
[INFO] IMDSv2 (pid:34733:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/12:f3:65:29:13:23/subnet-ipv4-cidr-block HTTP/1.1, Host: 169.254.169.254, Accept: */*, X-aws-ec2-metadata-token: AQAEANaFJkm758HVQO7EnxSfan6PNh8U-MWyCO5Lh_Q_MOV3a4e6WA==,
Possibly lost 3 samples
[INFO] IMDSv2 (pid:34760:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: PUT /latest/api/token HTTP/1.1, Host: 169.254.169.254, Accept: */*, X-aws-ec2-metadata-token-ttl-seconds: 180,
[INFO] IMDSv2 (pid:34760:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/ HTTP/1.1, Host: 169.254.169.254, Accept: */*, X-aws-ec2-metadata-token: AQAEANaFJkmtJhc4xXsvI6IX1yQe-yBRnOKt8acmiBreMJFk07PLIg==,
Possibly lost 3 samples
[INFO] IMDSv2 (pid:34760:nm-cloud-setup argv:/usr/libexec/nm-cloud-setup) called by -> (pid:1:systemd argv:/usr/lib/systemd/systemd --switched-root --system --deserialize 31) Req details: GET /2018-09-24/meta-data/network/interfaces/macs/12:86:0f:8c:8a:9b/subnet-ipv4-cidr-block HTTP/1.1, Host: 169.254.169.254, Accept: */*, X-aws-ec2-metadata-token: AQAEANaFJkmtJhc4xXsvI6IX1yQe-yBRnOKt8acmiBreMJFk07PLIg==,

Checking the device and routing information again shows that routing rules have been added.

$ nmcli device show
Warning: nmcli (1.44.0) and NetworkManager (1.42.2) versions don't match. Restarting NetworkManager is advised.
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         12:F3:65:29:13:23
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     System eth0
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/2
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         172.31.86.227/20
IP4.GATEWAY:                            172.31.80.1
IP4.ROUTE[1]:                           dst = 172.31.80.0/20, nh = 0.0.0.0, mt = 100
IP4.ROUTE[2]:                           dst = 0.0.0.0/0, nh = 172.31.80.1, mt = 100
IP4.ROUTE[3]:                           dst = 172.31.80.0/20, nh = 0.0.0.0, mt = 10, table=30201
IP4.ROUTE[4]:                           dst = 0.0.0.0/0, nh = 172.31.80.1, mt = 10, table=30401
IP4.DNS[1]:                             172.31.0.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::10f3:65ff:fe29:1323/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 256

GENERAL.DEVICE:                         eth1
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         12:86:0F:8C:8A:9B
GENERAL.MTU:                            9001
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     Wired connection 1
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/3
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         172.31.89.246/20
IP4.GATEWAY:                            172.31.80.1
IP4.ROUTE[1]:                           dst = 172.31.80.0/20, nh = 0.0.0.0, mt = 10, table=30200
IP4.ROUTE[2]:                           dst = 172.31.80.0/20, nh = 0.0.0.0, mt = 101
IP4.ROUTE[3]:                           dst = 0.0.0.0/0, nh = 172.31.80.1, mt = 10, table=30400
IP4.ROUTE[4]:                           dst = 0.0.0.0/0, nh = 172.31.80.1, mt = 101
IP4.DNS[1]:                             172.31.0.2
IP4.DOMAIN[1]:                          ec2.internal
IP6.ADDRESS[1]:                         fe80::5ea6:92b6:9a60:ef3d/64
IP6.GATEWAY:                            --

GENERAL.DEVICE:                         lo
GENERAL.TYPE:                           loopback
GENERAL.HWADDR:                         00:00:00:00:00:00
GENERAL.MTU:                            65536
GENERAL.STATE:                          100 (connected (externally))
GENERAL.CONNECTION:                     lo
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/1
IP4.ADDRESS[1]:                         127.0.0.1/8
IP4.GATEWAY:                            --
IP6.ADDRESS[1]:                         ::1/128
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = ::1/128, nh = ::, mt = 256

$ ip rule show
0:      from all lookup local
30200:  from 172.31.89.246 lookup 30200 proto static
30201:  from 172.31.86.227 lookup 30201 proto static
30350:  from all lookup main suppress_prefixlength 0 proto static
30400:  from 172.31.89.246 lookup 30400 proto static
30401:  from 172.31.86.227 lookup 30401 proto static
32766:  from all lookup main
32767:  from all lookup default
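These rules implement policy-based routing: packets sourced from each address are looked up in a dedicated table. The tables can be listed directly; note that the table numbers (30200-30401) were generated in this particular run and will differ between instances:

```shell
# Show the dedicated default-route table that nm-cloud-setup created for
# traffic sourced from eth1's address (the table number is instance-specific).
ip route show table 30400

# The "from <address> lookup <table>" rules decide which table applies.
ip rule show
```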

Checking communication from the added ENI confirms that it now works without any problems.

$ ping -c 1 -I eth1 8.8.8.8
PING 8.8.8.8 (8.8.8.8) from 172.31.89.246 eth1: 56(84) bytes of data.
64 bytes from 8.8.8.8: icmp_seq=1 ttl=58 time=1.70 ms

--- 8.8.8.8 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.703/1.703/1.703/0.000 ms

Until nm-cloud-setup supports IMDSv2, environments that assign multiple IP addresses will likely have to keep using IMDSv1

This article showed that it is hard to go IMDSv2-only on Red Hat Enterprise Linux EC2 instances that have multiple ENIs or IP addresses assigned.

Until nm-cloud-setup supports IMDSv2, environments that assign multiple IP addresses will likely keep using IMDSv1.

Manually configuring policy-based routing and device settings just to restrict an instance to IMDSv2 is rather tedious.

Incidentally, enabling IMDSv1 first, letting nm-cloud-setup apply its configuration, and then restricting the instance to IMDSv2 is not a good plan either: nm-cloud-setup does not persist its settings, so they are cleared by an OS reboot or similar.

If you had to do it, I believe you would need something like cloud-init or /etc/rc.local to fetch the metadata using IMDSv2-style requests and then run ip rule add from and ip route add table yourself.

I hope this article helps someone.

That's all from non-P (@non____97), Consulting Department, AWS Business Division!