LXD
LXD is a manager/hypervisor for containers (via LXC) and virtual machines (via QEMU).
Setup
Required software
Install the lxd package and enable lxd.service.
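For example, with pacman and systemctl:
# pacman -S lxd
# systemctl enable --now lxd.service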
Alternative installation methods
LXD can also be installed via snapd: install the snapdAUR package and run:
# snap install lxd
Setup for unprivileged containers
It is recommended to use unprivileged containers (see Linux_Containers#Privileged_containers_or_unprivileged_containers for an explanation of the difference).
In order to use them, you need to enable support to run unprivileged containers.
Once enabled, every container will be started unprivileged by default.
For the alternative, see how to set up privileged containers.
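A minimal sketch of enabling that support, assuming the commonly used range of 65536 subordinate ids starting at 1000000 for root:
# echo "root:1000000:65536" | tee -a /etc/subuid /etc/subgid
Restart lxd.service afterwards so the daemon picks up the new id ranges.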
Configuring LXD
LXD needs a storage pool and (if you want internet access) a network to be configured. To do so, run the following command as root:
# lxd init
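If you prefer a non-interactive setup with default answers, lxd init also accepts the --auto flag:
# lxd init --auto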
Accessing LXD as an unprivileged user
By default, the LXD daemon allows access to users in the lxd group, so add your user to that group:
# usermod -a -G lxd <user>
Usage
LXD consists of two parts:
- the daemon (the lxd binary)
- the client (the lxc binary)
The client is used to control one or more daemons, and can also control remote LXD servers.
Overview of commands
You can get an overview of all available commands by typing:
$ lxc
Create a container
You can create a container with lxc launch, for example:
$ lxc launch ubuntu:20.04
Containers are based on images, which are downloaded from image servers or remote LXD servers.
You can see the list of already added servers with:
$ lxc remote list
You can list all images on a server with lxc image list, for example:
$ lxc image list images:
This will show you all images on one of the default servers: images.linuxcontainers.org
You can also search for images by adding terms like the distribution name:
$ lxc image list images:debian
Launch a container with an image from a specific server with:
$ lxc launch servername:imagename
For example:
$ lxc launch images:centos/8/amd64 centos
To create an amd64 Arch container:
$ lxc launch images:archlinux/current/amd64 arch
Create a virtual machine
Just add the flag --vm to lxc launch:
$ lxc launch ubuntu:20.04 --vm
Use and manage a container or VM
See Instance management in the official Getting Started Guide of LXD.
Container/VM configuration (optional)
You can add various options to instances (containers and VMs).
See Configuration of instances in the official Advanced Guide of LXD for details.
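For example, to cap an instance's CPU and memory usage (the instance name mycontainer is hypothetical; the limits.* keys are standard LXD configuration options):
$ lxc config set mycontainer limits.cpu 1
$ lxc config set mycontainer limits.memory 2GiB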
Tips and tricks
Access the containers by name on the host
This assumes that you are using the default bridge, that it is named lxdbr0 and that you are using systemd-resolved.
# systemd-resolve --interface lxdbr0 --set-domain '~lxd' --set-dns $(lxc network get lxdbr0 ipv4.address | cut -d / -f 1)
You can now access the containers by name:
$ ping containername.lxd
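On newer systemd releases, where the systemd-resolve command is deprecated in favor of resolvectl, the equivalent invocation would be:
# resolvectl domain lxdbr0 '~lxd'
# resolvectl dns lxdbr0 $(lxc network get lxdbr0 ipv4.address | cut -d / -f 1)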
Other solution
It seems that the systemd-resolve solution stops working after some time.
Another solution is to create a /etc/systemd/network/lxd.network that contains the following (replace x and y to match your bridge IP):
[Match]
Name=lxdbr0

[Network]
DNS=10.x.y.1
Domains=~lxd
IgnoreCarrierLoss=yes

[Address]
Address=10.x.y.1/24
Gateway=10.x.y.1
Then enable and start systemd-networkd.service.
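For example:
# systemctl enable --now systemd-networkd.service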
Use Wayland and Xorg applications
There are multiple methods to use GUI applications inside containers.
You can find an overview in the official LXD forum: https://discuss.linuxcontainers.org/t/overview-gui-inside-containers/8767
Method 1: Use the host's Wayland or Xorg Server
Summary: In this method we grant containers access to the host's sockets of Wayland (+XWayland) or Xorg.
1. Add the following devices to a container's profile.
See also: LXD documentation regarding devices
General device for the GPU:
mygpu:
  type: gpu
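Instead of editing the profile by hand, a device of type gpu can also be added on the command line; for example, for the default profile:
$ lxc profile device add default mygpu gpu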
Device for the Wayland Socket:
Notes:
- Adjust the Display (wayland-0) accordingly.
- Add the folders in /mnt and /tmp inside the container, if they don't already exist.
Waylandsocket:
  bind: container
  connect: unix:/run/user/1000/wayland-0
  listen: unix:/mnt/wayland1/wayland-0
  uid: "1000"
  gid: "1000"
  security.gid: "1000"
  security.uid: "1000"
  mode: "0777"
  type: proxy
Device for the Xorg (or XWayland) Socket:
Note: Adjust the Display Number accordingly (for example X1 instead of X0).
Xsocket:
  bind: container
  connect: unix:/tmp/.X11-unix/X0
  listen: unix:/mnt/xorg1/X0
  uid: "1000"
  gid: "1000"
  security.gid: "1000"
  security.uid: "1000"
  mode: "0777"
  type: proxy
2. Link the sockets to the right location inside the container.
Note: These scripts need to be run after each start of the container; you can automate this with systemd, for example (see the unit sketch after the scripts below).
Shell-Script to link the Wayland socket:
#!/bin/sh
mkdir /run/user/1000
ln -s /mnt/wayland1/wayland-0 /run/user/1000/wayland-0
Link the Xorg (or XWayland) socket:
#!/bin/sh
ln -s /mnt/xorg1/X0 /tmp/.X11-unix/X0
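A minimal sketch of such a unit inside the container, assuming the scripts above were saved as /usr/local/bin/link-wayland.sh and /usr/local/bin/link-xorg.sh (hypothetical paths):
/etc/systemd/system/link-gui-sockets.service
[Unit]
Description=Link Wayland/Xorg sockets from the LXD proxy mounts

[Service]
Type=oneshot
ExecStart=/usr/local/bin/link-wayland.sh
ExecStart=/usr/local/bin/link-xorg.sh

[Install]
WantedBy=multi-user.target
Enable the unit with systemctl enable link-gui-sockets.service so the links are recreated on every boot of the container.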
3. Add environment variables to the user's config inside the container.
Note: Adjust the Display Numbers and/or the filename (.profile) accordingly.
For Wayland:
$ echo "export XDG_RUNTIME_DIR=/run/user/1000" >> ~/.profile $ echo "export WAYLAND_DISPLAY=wayland-0" >> ~/.profile $ echo "export QT_QPA_PLATFORM=wayland" >> ~/.profile
For Xorg (or XWayland):
$ echo "export DISPLAY=:0" >> .profile
Reload the .profile:
$ . .profile
Troubleshooting
lxd-agent inside a virtual machine
Inside some virtual machine images the lxd-agent is not enabled by default.
In this case you have to enable it manually, for example by mounting a 9p network share. This requires console access with a valid user.
1. Login with lxc console (replace virtualmachine-name accordingly):
$ lxc console virtualmachine-name
Login as root:
$ su root
Mount the network share:
$ mount -t 9p config /mnt/
Go into the folder and run the install script (this will enable the lxd-agent inside the VM):
$ cd /mnt/
$ ./install.sh
After a successful install, reboot with:
$ reboot
Afterwards the lxd-agent is available and lxc exec should work.
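For example, to get a shell inside the virtual machine:
$ lxc exec virtualmachine-name bash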
Check kernel configuration
By default, the Arch Linux kernel is compiled with support for Linux Containers and their frontend LXD. If you are using a custom kernel or have changed kernel options, LXD might not work. Verify that the kernel is configured correctly to run containers:
$ lxc-checkconfig
Resource limits are not applied when viewed from inside a container
Install lxcfs and start lxcfs.service.
lxd will need to be restarted. Enable lxcfs.service for the service to be started at boot time.
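For example:
# systemctl enable --now lxcfs.service
# systemctl restart lxd.service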
Starting a virtual machine fails
If you see the error Error: Required EFI firmware settings file missing: /usr/share/ovmf/x64/OVMF_VARS.ms.fd:
Arch Linux does not distribute secure-boot-signed OVMF firmware; to boot virtual machines, you need to disable secure boot for the time being.
$ lxc launch ubuntu:18.04 test-vm --vm -c security.secureboot=false
This can also be added to the default profile by doing:
$ lxc profile set default security.secureboot=false
No IPv4 with systemd-networkd
Starting with version 244.1, systemd detects if /sys is writable by containers. If it is, udev is automatically started and breaks IPv4 in unprivileged containers. See commit bf331d8 and the discussion on linuxcontainers.org.
On containers created after 2020, there should already be a systemd-networkd.service override to work around this issue; create it if it is missing:
/etc/systemd/system/systemd-networkd.service.d/lxc.conf
[Service]
BindReadOnlyPaths=/sys
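After creating the override inside the container, reload the unit files and restart the service:
# systemctl daemon-reload
# systemctl restart systemd-networkd.service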
You could also work around this issue by setting raw.lxc: lxc.mount.auto = proc:rw sys:ro in the profile of the container to ensure /sys is read-only for the entire container, although this may be problematic, as per the linked discussion above.
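A sketch of applying that setting to the default profile:
$ lxc profile set default raw.lxc "lxc.mount.auto = proc:rw sys:ro"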