Difference between revisions of "LXD"

From ArchWiki
Revision as of 07:52, 8 October 2023

Related articles

LXD is a manager/hypervisor for containers (via LXC) and virtual machines (via QEMU).

Required software

Install the lxd package. Alternatively, you can also enable lxd.service directly, for example if instances should be started automatically.
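A minimal sketch of these steps on the command line (standard Arch tooling; run as root):

```shell
# Install LXD from the official repositories
pacman -S lxd

# Enable and start the daemon (e.g. when instances should autostart)
systemctl enable --now lxd.service
```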

Setup

Unprivileged container configuration

It is recommended to use unprivileged containers (see Linux_Containers#Privileged_containers_or_unprivileged_containers for the difference).

To do this, modify both /etc/subuid and /etc/subgid (create these files if they do not exist). You need to configure a uid/gid mapping for each user who will use containers. The following example is simply for the root user (and systemd system units):

You can use the usermod command as follows:

usermod -v 1000000-1000999999 -w 1000000-1000999999 root

Alternatively, you can modify the above files directly as follows:

/etc/subuid
root:1000000:1000000000
/etc/subgid
root:1000000:1000000000
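Each line in these files has the form name:first_subordinate_id:count, i.e. the mapping above gives root a range of one billion ids starting at host id 1000000. A small shell sketch illustrating how such a line is read (the example line is the one from the files above):

```shell
# Parse a subuid/subgid mapping line of the form name:first_id:count
line="root:1000000:1000000000"
first=$(echo "$line" | cut -d: -f2)   # first host id in the range
count=$(echo "$line" | cut -d: -f3)   # number of ids in the range
echo "container id 0 maps to host id $first ($count ids available)"
```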

Now all containers will be started as unprivileged by default.

For the alternative method, see how to configure privileged containers.

Configure LXD

To use LXD you need to configure a storage pool and (if you want internet access) a network. Run the following command as root:

# lxd init
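lxd init asks its configuration questions interactively. If you just want sensible defaults, a non-interactive mode also exists:

```shell
# Answer all questions with their defaults
# (creates a default storage pool and the lxdbr0 bridge)
lxd init --auto
```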

Accessing LXD as an unprivileged user

By default the LXD daemon allows access from users in the lxd group, so add your user to the group:

# usermod -a -G lxd <user>
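Group membership is only re-read at login. To use the new group in the current shell without logging out and back in:

```shell
# Start a new shell with lxd as an additional group
newgrp lxd
```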

Usage

LXD consists of two parts:

  • the daemon (the lxd binary)
  • the client (the lxc binary)
Note: lxc is not LXC; the naming is a bit confusing. You can read the forum post comparing LXD vs. LXC regarding the difference.

The client is used to control one or multiple daemon(s).

The client can also be used to control remote LXD servers.

Overview of commands

You can get an overview of all available commands by typing:

$ lxc

Create a container

You can create a container with lxc launch, for example:

$ lxc launch ubuntu:20.04

Containers are based on images, which are downloaded from image servers or remote LXD servers.
You can see the list of already added servers with:

$ lxc remote list

You can list all images on a server with lxc image list, for example:

$ lxc image list images:

This will show you all images on one of the default servers: images.linuxcontainers.org

You can also search for images by adding terms like the distribution name:

$ lxc image list images:debian

Launch a container with an image from a specific server with:

$ lxc launch servername:imagename

For example:

$ lxc launch images:centos/8/amd64 centos

To create an amd64 Arch container:

$ lxc launch images:archlinux/current/amd64 arch

Create a virtual machine

Just add the flag --vm to lxc launch:

$ lxc launch ubuntu:20.04 --vm
Note:
  • For now virtual machines support fewer features than containers (see Difference between containers and virtual machines for example).
  • Only cloud variants of the official images enable the lxd-agent out-of-the-box (which is needed for the usual lxc commands like lxc exec).
    You can search for cloud images with lxc image list images: cloud or lxc image list images: distribution-name cloud.
    If you use other images or encounter problems take a look at #lxd-agent_inside_a_virtual_machine.

Use and manage a container or VM

See Instance management in the official Getting Started Guide of LXD.

Container/VM configuration (optional)

You can add various options to instances (containers and VMs).
See Configuration of instances in the official Advanced Guide of LXD for details.

Tips and tricks

Access the containers by name on the host

This assumes that you are using the default bridge, that it is named lxdbr0 and that you are using systemd-resolved.

# systemd-resolve --interface lxdbr0 --set-domain '~lxd' --set-dns $(lxc network get lxdbr0 ipv4.address | cut -d / -f 1)

You can now access the containers by name:

$ ping containername.lxd
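The systemd-resolve setting above does not survive a reboot. One way to reapply it automatically is a small oneshot unit bound to the bridge device; the unit name below is a hypothetical example, and the command is the one from above (note the `$$`, which systemd unescapes to a literal `$` before handing the string to the shell):

```ini
; /etc/systemd/system/lxd-dns-lxdbr0.service (hypothetical name)
[Unit]
Description=Per-link DNS configuration for the lxdbr0 bridge
BindsTo=sys-subsystem-net-devices-lxdbr0.device
After=sys-subsystem-net-devices-lxdbr0.device

[Service]
Type=oneshot
ExecStart=/bin/sh -c "systemd-resolve --interface lxdbr0 --set-domain '~lxd' --set-dns $$(lxc network get lxdbr0 ipv4.address | cut -d / -f 1)"

[Install]
WantedBy=sys-subsystem-net-devices-lxdbr0.device
```

Enable the unit so it runs whenever the bridge appears.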

Other solution

The factual accuracy of this article or section is disputed.
Reason: When does systemd-resolve stop working? Is there a bug report or some other reference? (Discuss: Talk:LXD#)

It seems that the systemd-resolve solution stops working after some time.

Another solution is to use systemd-networkd with the following lxd.network (replace x and y to match your bridge IP):

/etc/systemd/network/lxd.network
[Match]
Name=lxdbr0

[Network]
DNS=10.x.y.1
Domains=~lxd
IgnoreCarrierLoss=yes

[Address]
Address=10.x.y.1/24
Gateway=10.x.y.1

Then enable and start systemd-networkd.service.

Use Wayland and Xorg applications

Note: Always consider security implications, as some of the described methods may weaken the separation between container and host.

There are multiple methods to use GUI applications inside containers; you can find an overview in the official LXD forum.

The following method grants containers access to the host's sockets of Wayland (+Xwayland) or Xorg.

Note: Using Xorg might weaken the separation between container and host, because Xorg allows applications to access other applications' windows, so container applications might have access to host applications' windows. Use Wayland instead (but be aware that Xorg's downsides also apply to Xwayland).

Add the following devices to a container's profile

See also LXD documentation regarding devices.

General device for the GPU:

mygpu:
   type: gpu
Note: The path under "listen" is different, because the /run and /tmp directories might be overridden, see https://github.com/lxc/lxd/issues/4540 [dead link 2023-09-16].

Device for the Wayland socket:

Note:
  • Adjust the Display (wayland-0) accordingly.
  • Add the directories in /mnt and /tmp inside the container, if they do not already exist.
Waylandsocket:
    bind: container
    connect: unix:/run/user/1000/wayland-0
    listen: unix:/mnt/wayland1/wayland-0
    uid: "1000"
    gid: "1000"
    security.gid: "1000"
    security.uid: "1000"
    mode: "0777"
    type: proxy
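If you prefer the command line over editing the profile YAML, such a proxy device can also be added with lxc profile device add; the profile name default is just an example:

```shell
# Add the Wayland socket proxy device to the default profile
lxc profile device add default Waylandsocket proxy \
    connect=unix:/run/user/1000/wayland-0 \
    listen=unix:/mnt/wayland1/wayland-0 \
    bind=container uid=1000 gid=1000 \
    security.uid=1000 security.gid=1000 mode=0777
```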

Device for the Xorg (or Xwayland) socket:

Note: Adjust the display number accordingly (for example X1 instead of X0).
Xsocket:
    bind: container
    connect: unix:/tmp/.X11-unix/X0
    listen: unix:/mnt/xorg1/X0
    uid: "1000"
    gid: "1000"
    security.gid: "1000"
    security.uid: "1000"
    mode: "0777"
    type: proxy

Link the sockets to the right location inside the container

Note: These scripts need to be run after each start of the container; you can automate this with systemd, for example.

Shell script to link the Wayland socket:

#!/bin/sh
mkdir /run/user/1000
ln -s /mnt/wayland1/wayland-0 /run/user/1000/wayland-0

Link the Xorg (or Xwayland) socket:

#!/bin/sh
ln -s /mnt/xorg1/X0 /tmp/.X11-unix/X0
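As the note above suggests, these link scripts can be automated with systemd inside the container. A sketch of such a unit (the unit name and the script path /usr/local/bin/link-sockets.sh are assumptions; the script would contain the ln commands above):

```ini
; /etc/systemd/system/link-sockets.service (hypothetical, inside the container)
[Unit]
Description=Link the proxied Wayland/Xorg sockets into place
After=multi-user.target

[Service]
Type=oneshot
ExecStart=/usr/local/bin/link-sockets.sh

[Install]
WantedBy=multi-user.target
```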

Add environment variables to the user's config inside the container

Note: Adjust the display numbers and/or the filename (.profile) accordingly.

For Wayland:

$ echo "export XDG_RUNTIME_DIR=/run/user/1000" >> ~/.profile
$ echo "export WAYLAND_DISPLAY=wayland-0" >> ~/.profile
$ echo "export QT_QPA_PLATFORM=wayland" >> ~/.profile

For Xorg (or Xwayland):

$ echo "export DISPLAY=:0" >> ~/.profile

Reload the ~/.profile:

$ source ~/.profile

Install necessary software in the container

The applications you want to run must be installed in the container. For now, you can install an example GUI application; this will probably pull in all other necessary packages as well.

Start GUI applications

Now you should be able to start GUI applications inside the container (via a terminal, for example) and make them appear as windows on your host's display.

You can try out glxgears for example.

Privileged containers

Note:
  • Privileged containers are not isolated from the host!
  • The root user in the container is the root user on the host.
  • Use unprivileged containers instead whenever possible.

If you want to set up a privileged container, you must provide the config key security.privileged=true.

Either during container creation:

$ lxc launch ubuntu:20.04 ubuntu -c security.privileged=true

Or for an already existing container, you may edit the configuration:

$ lxc config edit ubuntu
name: ubuntu
profiles:
- default
config:
  ...
  security.privileged: "true"
  ...

Add a disk device

Read-Only

If you want to share a disk device from the host to a container, all you need to do is add a disk device to your container. The virtual disk device needs a name (only used internally in the LXC configuration file), a path on the host's filesystem pointing to the disk you want to mount, as well as a desired mountpoint on the container's filesystem.

$ lxc config device add containername virtualdiskname disk source=/path/to/host/disk/ path=/path/to/mountpoint/on/container
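For example, to share a host directory and explicitly enforce read-only access, you can additionally set the disk device's readonly option (container name, device name and paths below are placeholders):

```shell
# Share /srv/data from the host read-only as /data in the container
lxc config device add mycontainer shareddata disk \
    source=/srv/data path=/data readonly=true
```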

Read-Write (unprivileged container)

The preferred method for read/write access is to use the "shift" method included in LXD.

shift is based on Linux kernel functionality and is available in two different versions:

  • the most recent version is called "idmapped mounts" and is included in all upstream kernels >5.12 by default. So it is also included in the regular Arch Linux kernel (linux).
  • the old version is called "shiftfs" and needs to be added manually to most kernels as a kernel module. It is available as a legacy version to support older kernels. You can take a look at this GitHub repo that uses the shiftfs kernel module from Ubuntu kernels: https://github.com/toby63/shiftfs-dkms

Shift should be available and activated by default on Arch with the regular Arch Linux kernel (linux) and the lxd package.

1. To check whether shift is available on your system, run lxc info.

The first part of the output shows you:

 kernel_features:
    idmapped_mounts: "true"
    shiftfs: "false"

If either idmapped_mounts or shiftfs is true, then your kernel includes it already and you can use shift. If it is not true, you should check your kernel version and might try the "shiftfs" legacy version mentioned above.

The second part of the output shows you either:

  lxc_features:
    idmapped_mounts_v2: "true"

or:

  lxc_features:
    shiftfs: "true"

If either idmapped_mounts or shiftfs is true, then LXD has already enabled it. If it is not enabled, you must enable it first.

2. Usage

Then you can simply set the "shift" config key to "true" in the disk device options. See the LXD documentation on disk devices.
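Combined with the disk device command from the read-only section above, this looks, for example, like (names and paths are placeholders):

```shell
# Same placeholder names as before, with id shifting enabled
# so the unprivileged container gets read/write access
lxc config device add containername virtualdiskname disk \
    source=/path/to/host/disk/ path=/path/to/mountpoint/on/container \
    shift=true
```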

See also: tutorial in the LXD forums

Bash completion doesn't work

This workaround may fix the issue:

# ln -s /usr/share/bash-completion/completions/lxd /usr/share/bash-completion/completions/lxc

Troubleshooting

lxd-agent inside a virtual machine

Inside some virtual machine images the lxd-agent is not enabled by default.
In this case you have to enable it manually, for example by mounting a 9p network share. This requires console access with a valid user.

1. Log in with lxc console (replace virtualmachine-name accordingly):

$ lxc console virtualmachine-name

Log in as root:

Note: On some systems you have to set up a root password first to be able to log in as root. You can use cloud-init for this, for example.
$ su root

Mount the network share:

$ mount -t 9p config /mnt/

Go into the directory and run the install script (this will enable the lxd-agent inside the VM):

$ cd /mnt/
$ ./install.sh 

After a successful install, reboot with:

$ reboot

Afterwards the lxd-agent is available and lxc exec should work.

Check kernel configuration

By default, the Arch Linux kernel is compiled to support Linux Containers and their frontend LXD. If you are using a custom kernel or have changed kernel options, LXD might not work. Verify that your kernel is configured to run containers:

$ lxc-checkconfig

Resource limits are not applied when viewed from inside a container

Install lxcfs and start lxcfs.service.

lxd will need to be restarted. Enable lxcfs.service for the service to be started at boot time.

Starting a virtual machine fails

If you see the error: Error: Required EFI firmware settings file missing: /usr/share/ovmf/x64/OVMF_VARS.ms.fd

Arch Linux does not distribute secure-boot-signed ovmf firmware; to boot virtual machines you need to disable secure boot for the time being.

$ lxc launch ubuntu:18.04 test-vm --vm -c security.secureboot=false

This can also be added to the default profile by doing:

$ lxc profile set default security.secureboot=false

No IPv4 with systemd-networkd

Starting with version 244.1, systemd detects if /sys is writable by containers. If it is, udev is automatically started and breaks IPv4 in unprivileged containers. See commit bf331d8 and the discussion on linuxcontainers.

On containers created past 2020, there should already be a systemd-networkd.service override to work around this issue; create it if it is not present:

/etc/systemd/system/systemd-networkd.service.d/lxc.conf
[Service]
BindReadOnlyPaths=/sys

You could also work around this issue by setting raw.lxc: lxc.mount.auto = proc:rw sys:ro in the profile of the container to ensure /sys is read-only for the entire container, although this may be problematic, as per the linked discussion above.
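If you choose the raw.lxc workaround, it can be applied to the default profile like this:

```shell
# Make /sys read-only for all containers using the default profile
lxc profile set default raw.lxc 'lxc.mount.auto = proc:rw sys:ro'
```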

Uninstall

Stop and disable lxd.service and lxd.socket. Then uninstall the lxd package.
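A sketch of these uninstall steps (run as root):

```shell
# Stop and disable the daemon units
systemctl disable --now lxd.service lxd.socket

# Remove the package together with its now-unneeded dependencies
pacman -Rs lxd
```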

If you uninstalled the package without disabling the service, you might have a lingering broken symlink at /etc/systemd/system/multi-user.target.wants/lxd.service.

If you want to remove all data:

# rm -r /var/lib/lxd

If you used any of the example networking configuration, you should remove those as well.

See also