<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="ja">
	<id>https://wiki.archlinux.jp/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pedgin</id>
	<title>ArchWiki - 利用者の投稿記録 [ja]</title>
	<link rel="self" type="application/atom+xml" href="https://wiki.archlinux.jp/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Pedgin"/>
	<link rel="alternate" type="text/html" href="https://wiki.archlinux.jp/index.php/%E7%89%B9%E5%88%A5:%E6%8A%95%E7%A8%BF%E8%A8%98%E9%8C%B2/Pedgin"/>
	<updated>2026-04-13T18:45:00Z</updated>
	<subtitle>利用者の投稿記録</subtitle>
	<generator>MediaWiki 1.44.3</generator>
	<entry>
		<id>https://wiki.archlinux.jp/index.php?title=LXD&amp;diff=28929</id>
		<title>LXD</title>
		<link rel="alternate" type="text/html" href="https://wiki.archlinux.jp/index.php?title=LXD&amp;diff=28929"/>
		<updated>2022-12-21T00:29:14Z</updated>

		<summary type="html">&lt;p&gt;Pedgin: /* Setup for unpriviledged containers */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:仮想化]]&lt;br /&gt;
[[en:LXD]]&lt;br /&gt;
{{Related articles start}}&lt;br /&gt;
{{Related|Linux Containers}}&lt;br /&gt;
{{Related articles end}}&lt;br /&gt;
&#039;&#039;&#039;[https://linuxcontainers.org/lxd/ LXD]&#039;&#039;&#039; はコンテナ(LXC経由)および仮想マシン([[QEMU]]経由)のマネージャー/ハイパーバイザーです。&lt;br /&gt;
&lt;br /&gt;
== セットアップ ==&lt;br /&gt;
=== 必要なソフトウェア ===&lt;br /&gt;
{{Pkg|lxd}} パッケージをインストールして、{{ic|lxd.service}} を[[systemd#ユニットを使う|有効]]にします。&lt;br /&gt;
&lt;br /&gt;
==== 他のインストール方法 ====&lt;br /&gt;
&lt;br /&gt;
{{AUR|snapd}} パッケージをインストールしてから以下のコマンドを実行することで [[snapd]] を使って LXD をインストールできます:&lt;br /&gt;
&lt;br /&gt;
 # snap install lxd&lt;br /&gt;
&lt;br /&gt;
=== Setup for unprivileged containers ===&lt;br /&gt;
It is recommended to use unprivileged containers (see [[Linux_Containers#Privileged_containers_or_unprivileged_containers]] for an explanation of the difference).&lt;br /&gt;
&lt;br /&gt;
For this, modify both {{ic|/etc/subuid}} and {{ic|/etc/subgid}} (if these files are not present, create them) to contain the mapping to the containerized uid/gid pairs for each user who should be able to run containers. The example below is for the root user (and the systemd system unit):&lt;br /&gt;
&lt;br /&gt;
You can either use {{ic|usermod}} as follows:&lt;br /&gt;
&lt;br /&gt;
{{ic|usermod -v 1000000-1000999999 -w 1000000-1000999999 root}}&lt;br /&gt;
&lt;br /&gt;
Or modify the above-mentioned files directly as follows:&lt;br /&gt;
&lt;br /&gt;
{{hc|/etc/subuid|&lt;br /&gt;
root:1000000:1000000000&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{hc|/etc/subgid|&lt;br /&gt;
root:1000000:1000000000&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Now, every container will be started {{ic|unprivileged}} by default.&lt;br /&gt;
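As a sanity check, the {{ic|/etc/subuid}} entry above can be decoded with a short shell sketch (the entry string is the example from this section); the third field is a count, so the covered host uid range is {{ic|first}} through {{ic|first + count - 1}}:&lt;br /&gt;

```shell
#!/bin/sh
# Decode a /etc/subuid entry of the form user:first_subuid:count.
# Using the example entry from this section.
entry="root:1000000:1000000000"

first=$(echo "$entry" | cut -d: -f2)   # first host uid in the range
count=$(echo "$entry" | cut -d: -f3)   # how many uids are delegated
last=$((first + count - 1))            # last host uid in the range

echo "container uid 0 maps to host uid $first"
echo "delegated host uid range: $first-$last"
```

The computed range, 1000000-1000999999, is exactly the range passed to {{ic|usermod -v}} and {{ic|-w}} above.&lt;br /&gt;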
&lt;br /&gt;
For the alternative, see [[#Privileged containers|how to set up privileged containers]].&lt;br /&gt;
&lt;br /&gt;
=== LXD の設定 ===&lt;br /&gt;
LXD を使うにはストレージプールと (インターネットを使いたい場合) ネットワークを設定する必要があります。以下のコマンドを root で実行してください:&lt;br /&gt;
 # lxd init&lt;br /&gt;
&lt;br /&gt;
=== 非特権ユーザーとして LXD にアクセス ===&lt;br /&gt;
&lt;br /&gt;
デフォルトでは LXD デーモンは {{ic|lxd}} グループのユーザーからのアクセスを許可するので、ユーザーをグループに追加してください:&lt;br /&gt;
&lt;br /&gt;
 # usermod -a -G lxd &amp;lt;user&amp;gt;&lt;br /&gt;
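To verify the change took effect, a small sketch can check group membership (the {{ic|in_group}} helper and the {{ic|youruser}} name are illustrative, not LXD tooling):&lt;br /&gt;

```shell
#!/bin/sh
# Check whether a user belongs to a group by scanning `id -nG` output.
in_group() {
    # $1 = user, $2 = group; succeeds (exit 0) if $1 is a member of $2
    id -nG "$1" 2>/dev/null | tr ' ' '\n' | grep -qx "$2"
}

# "youruser" is a placeholder for the account added above.
if in_group youruser lxd; then
    echo "youruser can talk to the LXD daemon"
else
    echo "youruser is not in the lxd group (log out and back in after usermod)"
fi
```

Note that group membership only becomes visible to a session after logging out and back in.&lt;br /&gt;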
&lt;br /&gt;
== 使用方法 ==&lt;br /&gt;
&lt;br /&gt;
LXD consists of two parts:&lt;br /&gt;
* the daemon (the &#039;&#039;lxd&#039;&#039; binary)&lt;br /&gt;
* the client (the &#039;&#039;lxc&#039;&#039; binary)&lt;br /&gt;
&lt;br /&gt;
{{Note | lxc is not LXC; the naming is a bit confusing. See the [https://discuss.linuxcontainers.org/t/comparing-lxd-vs-lxc/24 forum post comparing LXD vs LXC] regarding the difference.}}&lt;br /&gt;
&lt;br /&gt;
The client is used to control one or multiple daemon(s).&lt;br /&gt;
&lt;br /&gt;
The client can also be used to control remote LXD servers.&lt;br /&gt;
&lt;br /&gt;
=== Overview of commands ===&lt;br /&gt;
You can get an overview of all available commands by typing:&lt;br /&gt;
 &lt;br /&gt;
 $ lxc&lt;br /&gt;
&lt;br /&gt;
=== Create a container ===&lt;br /&gt;
You can create a container with {{ic| lxc launch}}, for example:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:20.04&lt;br /&gt;
&lt;br /&gt;
Containers are based on images, which are downloaded from image servers or remote LXD servers. &amp;lt;br&amp;gt;&lt;br /&gt;
You can see the list of already added servers with:&lt;br /&gt;
&lt;br /&gt;
 $ lxc remote list&lt;br /&gt;
&lt;br /&gt;
You can list all images on a server with {{ic| lxc image list}}, for example:&lt;br /&gt;
&lt;br /&gt;
 $ lxc image list images:&lt;br /&gt;
&lt;br /&gt;
This will show you all images on one of the default servers: [https://images.linuxcontainers.org images.linuxcontainers.org]&lt;br /&gt;
&lt;br /&gt;
You can also search for images by adding terms like the distribution name:&lt;br /&gt;
&lt;br /&gt;
 $ lxc image list images:debian&lt;br /&gt;
&lt;br /&gt;
Launch a container with an image from a specific server with:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch servername:imagename&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch images:centos/8/amd64 centos&lt;br /&gt;
&lt;br /&gt;
To create an amd64 Arch container:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch images:archlinux/current/amd64 arch&lt;br /&gt;
&lt;br /&gt;
=== Create a virtual machine ===&lt;br /&gt;
Just add the flag {{ic|--vm}} to {{ic|lxc launch}}:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:20.04 --vm&lt;br /&gt;
&lt;br /&gt;
{{Note|&lt;br /&gt;
* For now, virtual machines support fewer features than containers (see [https://linuxcontainers.org/lxd/advanced-guide/#difference-between-containers-and-virtual-machines Difference between containers and virtual machines] for example). &lt;br /&gt;
* Only {{ic|cloud}} variants of the official images enable the lxd-agent out-of-the-box (which is needed for the usual lxc commands like {{ic|lxc exec}}). &amp;lt;br&amp;gt; You can search for cloud images with {{ic|lxc image list images: cloud}} or {{ic|lxc image list images: distribution-name cloud}}. &amp;lt;br&amp;gt; If you use other images or encounter problems take a look at [[#lxd-agent_inside_a_virtual_machine]].&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
=== Use and manage a container or VM ===&lt;br /&gt;
&lt;br /&gt;
See [https://linuxcontainers.org/lxd/getting-started-cli/#instance-management Instance management in the official Getting Started Guide of LXD].&lt;br /&gt;
&lt;br /&gt;
=== Container/VM configuration (optional) ===&lt;br /&gt;
&lt;br /&gt;
You can add various options to instances (containers and VMs). &amp;lt;br&amp;gt;&lt;br /&gt;
See [https://linuxcontainers.org/lxd/advanced-guide/#configuration-of-instances Configuration of instances in the official Advanced Guide of LXD] for details.&lt;br /&gt;
&lt;br /&gt;
== ヒントとテクニック ==&lt;br /&gt;
&lt;br /&gt;
=== Access the containers by name on the host ===&lt;br /&gt;
&lt;br /&gt;
This assumes that you are using the default bridge, that it is named lxdbr0 and that you are using [[systemd-resolved]].&lt;br /&gt;
&lt;br /&gt;
  # systemd-resolve --interface lxdbr0 --set-domain &#039;~lxd&#039; --set-dns $(lxc network get lxdbr0 ipv4.address | cut -d / -f 1)&lt;br /&gt;
  &lt;br /&gt;
You can now access the containers by name:&lt;br /&gt;
&lt;br /&gt;
  $ ping &#039;&#039;containername&#039;&#039;.lxd&lt;br /&gt;
&lt;br /&gt;
==== Other solution ====&lt;br /&gt;
&lt;br /&gt;
It seems that the systemd-resolve solution stops working after some time.&lt;br /&gt;
&lt;br /&gt;
Another solution is to create a {{ic|/etc/systemd/network/lxd.network}} that contains (replace x and y to match your bridge IP):&lt;br /&gt;
&lt;br /&gt;
  [Match]&lt;br /&gt;
  Name=lxdbr0&lt;br /&gt;
  [Network]&lt;br /&gt;
  DNS=10.x.y.1&lt;br /&gt;
  Domains=~lxd&lt;br /&gt;
  IgnoreCarrierLoss=yes&lt;br /&gt;
  [Address]&lt;br /&gt;
  Address=10.x.y.1/24&lt;br /&gt;
  Gateway=10.x.y.1&lt;br /&gt;
&lt;br /&gt;
And then [[enable]] and [[start]] {{ic|systemd-networkd.service}}.&lt;br /&gt;
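The file can also be generated from the bridge address; the sketch below uses {{ic|10.0.3.1/24}} as a stand-in for your actual bridge IP (on a real host, take it from {{ic|lxc network get lxdbr0 ipv4.address}}) and writes to the current directory instead of {{ic|/etc/systemd/network/}}:&lt;br /&gt;

```shell
#!/bin/sh
# Generate an lxd.network unit from the bridge address.
# BRIDGE_ADDR is an assumed example value; substitute your lxdbr0 address.
BRIDGE_ADDR="10.0.3.1/24"
DNS_IP="${BRIDGE_ADDR%/*}"   # strip the /24 prefix length -> 10.0.3.1

cat > lxd.network <<EOF
[Match]
Name=lxdbr0

[Network]
DNS=$DNS_IP
Domains=~lxd
IgnoreCarrierLoss=yes

[Address]
Address=$BRIDGE_ADDR
Gateway=$DNS_IP
EOF
```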
&lt;br /&gt;
=== Use Wayland and Xorg applications ===&lt;br /&gt;
&lt;br /&gt;
{{Note| Always consider security implications, as some of the described methods may weaken the separation between container and host. }}&lt;br /&gt;
&lt;br /&gt;
There are multiple methods to use GUI applications inside containers. &lt;br /&gt;
  &lt;br /&gt;
You can find an overview in the official Forum of LXD: https://discuss.linuxcontainers.org/t/overview-gui-inside-containers/8767&lt;br /&gt;
&lt;br /&gt;
==== Method 1: Use the host&#039;s Wayland or Xorg Server ====&lt;br /&gt;
{{Note| Using Xorg might weaken the separation between container and host, because Xorg allows applications to access other applications&#039; windows. So container applications might have access to host applications&#039; windows. &amp;lt;br&amp;gt;&lt;br /&gt;
Use Wayland instead (but be aware that Xorg&#039;s downsides also apply to XWayland).}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Summary:&#039;&#039;&#039; In this method we grant containers access to the host&#039;s Wayland (+XWayland) or Xorg sockets.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. Add the following devices to a container&#039;s profile.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
See also: [https://linuxcontainers.org/lxd/docs/master/instances#device-types LXD-Documentation regarding Devices]&lt;br /&gt;
&lt;br /&gt;
General device for the GPU:&lt;br /&gt;
&lt;br /&gt;
 mygpu:&lt;br /&gt;
    type: gpu&lt;br /&gt;
&lt;br /&gt;
{{Note| The path under &amp;quot;listen&amp;quot; is different, because /run and /tmp folders might be overridden, see: https://github.com/lxc/lxd/issues/4540 }}&lt;br /&gt;
&lt;br /&gt;
Device for the Wayland Socket: &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
* Adjust the Display (wayland-0) accordingly.&lt;br /&gt;
* Add the folders in /mnt and /tmp inside the container, if they don&#039;t already exist.&lt;br /&gt;
&lt;br /&gt;
 Waylandsocket:&lt;br /&gt;
     bind: container&lt;br /&gt;
     connect: unix:/run/user/1000/wayland-0&lt;br /&gt;
     listen: unix:/mnt/wayland1/wayland-0&lt;br /&gt;
     uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     mode: &amp;quot;0777&amp;quot;&lt;br /&gt;
     type: proxy&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
Device for the Xorg (or XWayland) Socket: &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Adjust the Display Number accordingly (for example X1 instead of X0).&lt;br /&gt;
&lt;br /&gt;
 Xsocket:&lt;br /&gt;
     bind: container&lt;br /&gt;
     connect: unix:/tmp/.X11-unix/X0&lt;br /&gt;
     listen: unix:/mnt/xorg1/X0&lt;br /&gt;
     uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     mode: &amp;quot;0777&amp;quot;&lt;br /&gt;
     type: proxy&lt;br /&gt;
     &lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. Link the sockets to the right location inside the container.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; These scripts need to be run after each start of the container; you can automate this with systemd, for example.&lt;br /&gt;
&lt;br /&gt;
Shell-Script to link the Wayland socket:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 mkdir /run/user/1000&lt;br /&gt;
 ln -s /mnt/wayland1/wayland-0 /run/user/1000/wayland-0&lt;br /&gt;
&lt;br /&gt;
Link the Xorg (or XWayland) socket:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 ln -s /mnt/xorg1/X0 /tmp/.X11-unix/X0&lt;br /&gt;
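The note above suggests automating the linking with systemd; one possible sketch (the unit name {{ic|link-wayland.service}} and all paths are assumptions) writes a oneshot unit that recreates the Wayland link at each boot of the container:&lt;br /&gt;

```shell
#!/bin/sh
# Write a oneshot systemd unit that links the proxied Wayland socket at boot.
# Inside the container, install as /etc/systemd/system/link-wayland.service
# and enable it; here it is written to the current directory for illustration.
cat > link-wayland.service <<'EOF'
[Unit]
Description=Link proxied Wayland socket into XDG_RUNTIME_DIR

[Service]
Type=oneshot
ExecStartPre=/usr/bin/mkdir -p /run/user/1000
ExecStart=/usr/bin/ln -sf /mnt/wayland1/wayland-0 /run/user/1000/wayland-0

[Install]
WantedBy=multi-user.target
EOF
```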
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. Add environment variables to the user&#039;s configuration inside the container.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Adjust the Display Numbers and/or the filename (.profile) accordingly.&lt;br /&gt;
&lt;br /&gt;
For Wayland:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;export XDG_RUNTIME_DIR=/run/user/1000&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
 $ echo &amp;quot;export WAYLAND_DISPLAY=wayland-0&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
 $ echo &amp;quot;export QT_QPA_PLATFORM=wayland&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
&lt;br /&gt;
For Xorg (or XWayland):&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;export DISPLAY=:0&amp;quot; &amp;gt;&amp;gt; .profile&lt;br /&gt;
&lt;br /&gt;
Reload the .profile:&lt;br /&gt;
&lt;br /&gt;
 $ . .profile&lt;br /&gt;
&lt;br /&gt;
=== Privileged containers ===&lt;br /&gt;
&lt;br /&gt;
{{Note | Privileged containers are not isolated from the host! &amp;lt;br&amp;gt;&lt;br /&gt;
The root user in the container is the root user on the host. &amp;lt;br&amp;gt;&lt;br /&gt;
Use unprivileged containers instead whenever possible. }}&lt;br /&gt;
&lt;br /&gt;
If you want to set up a privileged container, you must provide the config key {{ic|1=security.privileged=true}}.&lt;br /&gt;
&lt;br /&gt;
Either during container creation:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:20.04 ubuntu -c security.privileged=true&lt;br /&gt;
&lt;br /&gt;
Or for an already existing container, you may edit the configuration:&lt;br /&gt;
&lt;br /&gt;
{{hc|$ lxc config edit ubuntu|&lt;br /&gt;
name: ubuntu&lt;br /&gt;
profiles:&lt;br /&gt;
- default&lt;br /&gt;
config:&lt;br /&gt;
  ...&lt;br /&gt;
  security.privileged: &amp;quot;true&amp;quot;&lt;br /&gt;
  ...&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== トラブルシューティング ==&lt;br /&gt;
&lt;br /&gt;
=== lxd-agent inside a virtual machine ===&lt;br /&gt;
&lt;br /&gt;
Inside some virtual machine images the {{ic|lxd-agent}} is not enabled by default. &amp;lt;br&amp;gt;&lt;br /&gt;
In this case you have to enable it manually, for example by mounting a {{ic|9p}} network share. This requires console access with a valid user.&lt;br /&gt;
&lt;br /&gt;
1. Log in with {{ic|lxc console}}: &amp;lt;br&amp;gt;&lt;br /&gt;
Replace {{ic|virtualmachine-name}} accordingly.&lt;br /&gt;
&lt;br /&gt;
 $ lxc console virtualmachine-name&lt;br /&gt;
&lt;br /&gt;
Log in as root:&lt;br /&gt;
{{Note | On some systems you have to set up a root password first to be able to log in as root. &amp;lt;br&amp;gt; You can use [https://linuxcontainers.org/lxd/advanced-guide/#cloud-init cloud-init] for this, for example.}}&lt;br /&gt;
&lt;br /&gt;
 $ su root&lt;br /&gt;
&lt;br /&gt;
Mount the network share:&lt;br /&gt;
&lt;br /&gt;
 $ mount -t 9p config /mnt/&lt;br /&gt;
&lt;br /&gt;
Go into the folder and run the install script (this will enable the lxd-agent inside the VM):&lt;br /&gt;
&lt;br /&gt;
 $ cd /mnt/&lt;br /&gt;
 $ ./install.sh &lt;br /&gt;
&lt;br /&gt;
After a successful install, reboot with:&lt;br /&gt;
&lt;br /&gt;
 $ reboot&lt;br /&gt;
&lt;br /&gt;
Afterwards the {{ic|lxd-agent}} is available and {{ic|lxc exec}} should work.&lt;br /&gt;
&lt;br /&gt;
=== カーネルコンフィグの確認 ===&lt;br /&gt;
デフォルトで Arch Linux のカーネルは Linux Containers とフロントエンドの LXD が動くようにコンパイルされています。カスタムカーネルを使っている場合やカーネルオプションを変更している場合、LXD が動かない場合があります。コンテナを実行できるようにカーネルが設定されているか確認してください:&lt;br /&gt;
 $ lxc-checkconfig&lt;br /&gt;
&lt;br /&gt;
=== Resource limits are not applied when viewed from inside a container ===&lt;br /&gt;
&lt;br /&gt;
Install {{Pkg|lxcfs}} and [[start]] {{ic|lxcfs.service}}.&lt;br /&gt;
&lt;br /&gt;
lxd will need to be restarted. [[Enable]] {{ic|lxcfs.service}} for the service to be started at boot time.&lt;br /&gt;
&lt;br /&gt;
=== Starting a virtual machine fails ===&lt;br /&gt;
&lt;br /&gt;
If you see the error:  {{ic|Error: Required EFI firmware settings file missing: /usr/share/ovmf/x64/OVMF_VARS.ms.fd}}&lt;br /&gt;
&lt;br /&gt;
Arch Linux does not distribute Secure Boot signed OVMF firmware; to boot virtual machines you need to disable Secure Boot for the time being.&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:18.04 test-vm --vm -c security.secureboot=false&lt;br /&gt;
&lt;br /&gt;
This can also be added to the default profile by doing:&lt;br /&gt;
&lt;br /&gt;
 $ lxc profile set default security.secureboot=false&lt;br /&gt;
&lt;br /&gt;
=== No IPv4 with systemd-networkd ===&lt;br /&gt;
&lt;br /&gt;
Starting with version 244.1, systemd detects if {{ic|/sys}} is writable by containers. If it is, udev is automatically started, which breaks IPv4 in unprivileged containers. See [https://github.com/systemd/systemd-stable/commit/96d7083c5499b264ecebd6a30a92e0e8fda14cd5 commit bf331d8] and [https://discuss.linuxcontainers.org/t/no-ipv4-on-arch-linux-containers/6395 the discussion on linuxcontainers.org].&lt;br /&gt;
&lt;br /&gt;
On containers created after 2020, there should already be a {{ic|systemd-networkd.service}} override to work around this issue; create it if it is not present:&lt;br /&gt;
&lt;br /&gt;
{{hc|1=/etc/systemd/system/systemd-networkd.service.d/lxc.conf|2=&lt;br /&gt;
[Service]&lt;br /&gt;
BindReadOnlyPaths=/sys&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
You could also work around this issue by setting {{ic|1=raw.lxc: lxc.mount.auto = proc:rw sys:ro}} in the profile of the container to ensure {{ic|/sys}} is read-only for the entire container, although this may be problematic, as per the linked discussion above.&lt;br /&gt;
&lt;br /&gt;
== アンインストール ==&lt;br /&gt;
&lt;br /&gt;
[[Stop]] and disable {{ic|lxd.service}} and {{ic|lxd.socket}}. Then [[uninstall]] the {{Pkg|lxd}} package.&lt;br /&gt;
&lt;br /&gt;
If you uninstalled the package without disabling the service, you might have a lingering broken symlink at {{ic|/etc/systemd/system/multi-user.target.wants/lxd.service}}.&lt;br /&gt;
&lt;br /&gt;
If you want to remove all data:&lt;br /&gt;
&lt;br /&gt;
 # rm -r /var/lib/lxd&lt;br /&gt;
&lt;br /&gt;
If you used any of the example networking configuration, you should remove those as well.&lt;br /&gt;
&lt;br /&gt;
== 参照 ==&lt;br /&gt;
&lt;br /&gt;
* [https://lxd.readthedocs.io 公式ドキュメント]&lt;br /&gt;
* [https://linuxcontainers.org/lxd/ LXD 公式ホームページ]&lt;br /&gt;
* [https://github.com/lxc/lxd LXD の GitHub ページ]&lt;/div&gt;</summary>
		<author><name>Pedgin</name></author>
	</entry>
	<entry>
		<id>https://wiki.archlinux.jp/index.php?title=LXD&amp;diff=28926</id>
		<title>LXD</title>
		<link rel="alternate" type="text/html" href="https://wiki.archlinux.jp/index.php?title=LXD&amp;diff=28926"/>
		<updated>2022-12-21T00:22:17Z</updated>

		<summary type="html">&lt;p&gt;Pedgin: /* ヒントとテクニック */ insert === Privileged containers ===&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:仮想化]]&lt;br /&gt;
[[en:LXD]]&lt;br /&gt;
{{Related articles start}}&lt;br /&gt;
{{Related|Linux Containers}}&lt;br /&gt;
{{Related articles end}}&lt;br /&gt;
&#039;&#039;&#039;[https://linuxcontainers.org/lxd/ LXD]&#039;&#039;&#039; はコンテナ(LXC経由)および仮想マシン([[QEMU]]経由)のマネージャー/ハイパーバイザーです。&lt;br /&gt;
&lt;br /&gt;
== セットアップ ==&lt;br /&gt;
=== 必要なソフトウェア ===&lt;br /&gt;
{{Pkg|lxd}} パッケージをインストールして、{{ic|lxd.service}} を[[systemd#ユニットを使う|有効]]にします。&lt;br /&gt;
&lt;br /&gt;
==== 他のインストール方法 ====&lt;br /&gt;
&lt;br /&gt;
{{AUR|snapd}} パッケージをインストールしてから以下のコマンドを実行することで [[snapd]] を使って LXD をインストールできます:&lt;br /&gt;
&lt;br /&gt;
 # snap install lxd&lt;br /&gt;
&lt;br /&gt;
=== Setup for unpriviledged containers ===&lt;br /&gt;
It is recommended to use unpriviledged containers (See [[Linux_Containers#Privileged_containers_or_unprivileged_containers]] for an explanation of the difference).&lt;br /&gt;
&lt;br /&gt;
For this, modify both {{ic|/etc/subuid}} and {{ic|/etc/subgid}} (if these files are not present, create them) to contain the mapping to the containerized uid/gid pairs for each user who shall be able to run the containers. The example below is simply for the root user (and systemd system unit):&lt;br /&gt;
&lt;br /&gt;
You can either use {{ic|usermod}} as follows:&lt;br /&gt;
&lt;br /&gt;
{{ic|usermod -v 1000000-1000999999 -w 1000000-1000999999 root}}&lt;br /&gt;
&lt;br /&gt;
Or modify the above mentioned files directly as follows:&lt;br /&gt;
&lt;br /&gt;
{{hc|/etc/subuid|&lt;br /&gt;
root:1000000:1000000000&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{hc|/etc/subgid|&lt;br /&gt;
root:1000000:1000000000&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Now, every container will be started {{ic|unprivileged}} by default.&lt;br /&gt;
&lt;br /&gt;
For the alternative see [[#Priviledged_containers | howto set up priviledged containers]].&lt;br /&gt;
&lt;br /&gt;
=== LXD の設定 ===&lt;br /&gt;
LXD を使うにはストレージプールと (インターネットを使いたい場合) ネットワークを設定する必要があります。以下のコマンドを root で実行してください:&lt;br /&gt;
 # lxd init&lt;br /&gt;
&lt;br /&gt;
=== 非特権ユーザーとして LXD にアクセス ===&lt;br /&gt;
&lt;br /&gt;
デフォルトでは LXD デーモンは {{ic|lxd}} グループのユーザーからのアクセスを許可するので、ユーザーをグループに追加してください:&lt;br /&gt;
&lt;br /&gt;
 # usermod -a -G lxd &amp;lt;user&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== 使用方法 ==&lt;br /&gt;
&lt;br /&gt;
LXD consists of two parts:&lt;br /&gt;
* the daemon (the &#039;&#039;lxd&#039;&#039; binary)&lt;br /&gt;
* the client (the &#039;&#039;lxc&#039;&#039; binary)&lt;br /&gt;
&lt;br /&gt;
{{Note | lxc is not LXC; the naming is a bit confusing, you can read the [https://discuss.linuxcontainers.org/t/comparing-lxd-vs-lxc/24 forum post on comparing LXD vs LXC] regarding the difference.}}&lt;br /&gt;
&lt;br /&gt;
The client is used to control one or multiple daemon(s).&lt;br /&gt;
&lt;br /&gt;
The client can also be used to control remote LXD servers.&lt;br /&gt;
&lt;br /&gt;
=== Overview of commands ===&lt;br /&gt;
You can get an overview of all available commands by typing:&lt;br /&gt;
 &lt;br /&gt;
 $ lxc&lt;br /&gt;
&lt;br /&gt;
=== Create a container ===&lt;br /&gt;
You can create a container with {{ic| lxc launch}}, for example:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:20.04&lt;br /&gt;
&lt;br /&gt;
Container are based on images, that are downloaded from image servers or remote LXD servers. &amp;lt;br&amp;gt;&lt;br /&gt;
You can see the list of already added servers with:&lt;br /&gt;
&lt;br /&gt;
 $ lxc remote list&lt;br /&gt;
&lt;br /&gt;
You can list all images on a server with {{ic| lxc image list}}, for example:&lt;br /&gt;
&lt;br /&gt;
 $ lxc image list images:&lt;br /&gt;
&lt;br /&gt;
This will show you all images on one of the default servers: [https://images.linuxcontainers.org images.linuxcontainers.org]&lt;br /&gt;
&lt;br /&gt;
You can also search for images by adding terms like the distribution name:&lt;br /&gt;
&lt;br /&gt;
 $ lxc image list images:debian&lt;br /&gt;
&lt;br /&gt;
Launch a container with an image from a specific server with:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch servername:imagename&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch images:centos/8/amd64 centos&lt;br /&gt;
&lt;br /&gt;
To create an amd64 Arch container:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch images:archlinux/current/amd64 arch&lt;br /&gt;
&lt;br /&gt;
=== Create a virtual machine ===&lt;br /&gt;
Just add the flag {{ic|--vm}} to {{ic|lxc launch}}:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:20.04 --vm&lt;br /&gt;
&lt;br /&gt;
{{Note|&lt;br /&gt;
* For now virtual machines support less features than containers (see [https://linuxcontainers.org/lxd/advanced-guide/#difference-between-containers-and-virtual-machines Difference between containers and virtual machines] for example). &lt;br /&gt;
* Only {{ic|cloud}} variants of the official images enable the lxd-agent out-of-the-box (which is needed for the usual lxc commands like {{ic|lxc exec}}). &amp;lt;br&amp;gt; You can search for cloud images with {{ic|lxc image list images: cloud}} or {{ic|lxc image list images: distribution-name cloud}}. &amp;lt;br&amp;gt; If you use other images or encounter problems take a look at [[#lxd-agent_inside_a_virtual_machine]].&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
=== Use and manage a container or VM ===&lt;br /&gt;
&lt;br /&gt;
See [https://linuxcontainers.org/lxd/getting-started-cli/#instance-management Instance managament in the official Getting Started Guide of LXD].&lt;br /&gt;
&lt;br /&gt;
=== Container/VM configuration (optional) ===&lt;br /&gt;
&lt;br /&gt;
You can add various options to instances (containers and VMs). &amp;lt;br&amp;gt;&lt;br /&gt;
See [https://linuxcontainers.org/lxd/advanced-guide/#configuration-of-instances Configuration of instances in the official Advanced Guide of LXD] for details.&lt;br /&gt;
&lt;br /&gt;
== ヒントとテクニック ==&lt;br /&gt;
&lt;br /&gt;
=== Access the containers by name on the host ===&lt;br /&gt;
&lt;br /&gt;
This assumes that you are using the default bridge, that it is named lxdbr0 and that you are using [[systemd-resolved]].&lt;br /&gt;
&lt;br /&gt;
  # systemd-resolve --interface lxdbr0 --set-domain &#039;~lxd&#039; --set-dns $(lxc network get lxdbr0 ipv4.address | cut -d / -f 1)&lt;br /&gt;
  &lt;br /&gt;
You can now access the containers by name:&lt;br /&gt;
&lt;br /&gt;
  $ ping &#039;&#039;containername&#039;&#039;.lxd&lt;br /&gt;
&lt;br /&gt;
==== Other solution ====&lt;br /&gt;
&lt;br /&gt;
It seems that the systemd-resolve solution stops working after some time.&lt;br /&gt;
&lt;br /&gt;
Another solution is to create a {{ic|/etc/systemd/network/lxd.network}} that contains (replace x and y to match your bridge IP):&lt;br /&gt;
&lt;br /&gt;
  [Match]&lt;br /&gt;
  Name=lxdbr0&lt;br /&gt;
  [Network]&lt;br /&gt;
  DNS=10.x.y.1&lt;br /&gt;
  Domains=~lxd&lt;br /&gt;
  IgnoreCarrierLoss=yes&lt;br /&gt;
  [Address]&lt;br /&gt;
  Address=10.x.y.1/24&lt;br /&gt;
  Gateway=10.x.y.1&lt;br /&gt;
&lt;br /&gt;
And then [[enable]] and [[start]] {{ic|systemd-networkd.service}}.&lt;br /&gt;
&lt;br /&gt;
=== Use Wayland and Xorg applications ===&lt;br /&gt;
&lt;br /&gt;
{{Note| Always consider security implications, as some of the described methods may weaken the seperation between container and host. }}&lt;br /&gt;
&lt;br /&gt;
There are multiple methods to use GUI applications inside containers. &lt;br /&gt;
  &lt;br /&gt;
You can find an overview in the official Forum of LXD: https://discuss.linuxcontainers.org/t/overview-gui-inside-containers/8767&lt;br /&gt;
&lt;br /&gt;
==== Method 1: Use the host&#039;s Wayland or Xorg Server ====&lt;br /&gt;
{{Note| Using Xorg might weaken the seperation between container and host, because Xorg allows applications to access other applications windows. So container applications might have access to host applications windows. &amp;lt;br&amp;gt;&lt;br /&gt;
Use Wayland instead (but be aware that Xorgs downsides also apply to XWayland).}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Summary:&#039;&#039;&#039; In this method we grant containers access to the host&#039;s sockets of Wayland (+XWayland) or Xorg.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. Add the following devices to a containers profile.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
See also: [https://linuxcontainers.org/lxd/docs/master/instances#device-types LXD-Documentation regarding Devices]&lt;br /&gt;
&lt;br /&gt;
General device for the GPU:&lt;br /&gt;
&lt;br /&gt;
 mygpu:&lt;br /&gt;
    type: gpu&lt;br /&gt;
&lt;br /&gt;
{{Note| The path under &amp;quot;listen&amp;quot; is different, because /run and /tmp folders might be overridden, see: https://github.com/lxc/lxd/issues/4540 }}&lt;br /&gt;
&lt;br /&gt;
Device for the Wayland Socket: &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
* Adjust the Display (wayland-0) accordingly.&lt;br /&gt;
* Add the folders in /mnt and /tmp inside the container, if they don&#039;t already exist.&lt;br /&gt;
&lt;br /&gt;
 Waylandsocket:&lt;br /&gt;
     bind: container&lt;br /&gt;
     connect: unix:/run/user/1000/wayland-0&lt;br /&gt;
     listen: unix:/mnt/wayland1/wayland-0&lt;br /&gt;
     uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     mode: &amp;quot;0777&amp;quot;&lt;br /&gt;
     type: proxy&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
Device for the Xorg (or XWayland) Socket: &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Adjust the Display Number accordingly (for example X1 instead of X0).&lt;br /&gt;
&lt;br /&gt;
 Xsocket:&lt;br /&gt;
     bind: container&lt;br /&gt;
     connect: unix:/tmp/.X11-unix/X0&lt;br /&gt;
     listen: unix:/mnt/xorg1/X0&lt;br /&gt;
     uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     mode: &amp;quot;0777&amp;quot;&lt;br /&gt;
     type: proxy&lt;br /&gt;
     &lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. Link the sockets to the right location inside the container.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; These Scripts need to be run after each start of the container; you can automate this with systemd for example.&lt;br /&gt;
&lt;br /&gt;
Shell-Script to link the Wayland socket:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 mkdir /run/user/1000&lt;br /&gt;
 ln -s /mnt/wayland1/wayland-0 /run/user/1000/wayland-0&lt;br /&gt;
&lt;br /&gt;
Link the Xorg (or XWayland) socket:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 ln -s /mnt/xorg1/X0 /tmp/.X11-unix/X0&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;3. Add Environment variables to the users config inside the container.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Adjust the Display Numbers and/or the filename (.profile) accordingly.&lt;br /&gt;
&lt;br /&gt;
For Wayland:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;export XDG_RUNTIME_DIR=/run/user/1000&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
 $ echo &amp;quot;export WAYLAND_DISPLAY=wayland-0&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
 $ echo &amp;quot;export QT_QPA_PLATFORM=wayland&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
&lt;br /&gt;
For Xorg (or XWayland):&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;export DISPLAY=:0&amp;quot; &amp;gt;&amp;gt; .profile&lt;br /&gt;
&lt;br /&gt;
Reload the .profile:&lt;br /&gt;
&lt;br /&gt;
 $ . .profile&lt;br /&gt;
&lt;br /&gt;
=== Privileged containers ===&lt;br /&gt;
&lt;br /&gt;
{{Note | Privileged containers are not isolated from the host! &amp;lt;br&amp;gt;&lt;br /&gt;
The root user in the container is the root user on the host. &amp;lt;br&amp;gt;&lt;br /&gt;
Use unprivileged containers instead whenever possible. }}&lt;br /&gt;
&lt;br /&gt;
If you want to set up a privileged container, you must provide the config key {{ic|1=security.privileged=true}}.&lt;br /&gt;
&lt;br /&gt;
Either during container creation:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:20.04 ubuntu -c security.privileged=true&lt;br /&gt;
&lt;br /&gt;
Or for an already existing container, you may edit the configuration:&lt;br /&gt;
&lt;br /&gt;
{{hc|$ lxc config edit ubuntu|&lt;br /&gt;
name: ubuntu&lt;br /&gt;
profiles:&lt;br /&gt;
- default&lt;br /&gt;
config:&lt;br /&gt;
  ...&lt;br /&gt;
  security.privileged: &amp;quot;true&amp;quot;&lt;br /&gt;
  ...&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
== トラブルシューティング ==&lt;br /&gt;
&lt;br /&gt;
=== lxd-agent inside a virtual machine ===&lt;br /&gt;
&lt;br /&gt;
Inside some virtual machine images the {{ic|lxd-agent}} is not enabled by default. &amp;lt;br&amp;gt;&lt;br /&gt;
In this case you have to enable it manually, for example by mounting a {{ic|9p}} network share. This requires console access with a valid user.&lt;br /&gt;
&lt;br /&gt;
1. Login with {{ic|lxc console}}: &amp;lt;br&amp;gt;&lt;br /&gt;
Replace {{ic|virtualmachine-name}} accordingly.&lt;br /&gt;
&lt;br /&gt;
 $ lxc console virtualmachine-name&lt;br /&gt;
&lt;br /&gt;
Login as root:&lt;br /&gt;
{{Note | On some systems you have to setup a root password first to be able to login as root. &amp;lt;br&amp;gt; You can use [https://linuxcontainers.org/lxd/advanced-guide/#cloud-init cloud-init] for this for example.}}&lt;br /&gt;
&lt;br /&gt;
 $ su root&lt;br /&gt;
&lt;br /&gt;
Mount the network share:&lt;br /&gt;
&lt;br /&gt;
 # mount -t 9p config /mnt/&lt;br /&gt;
&lt;br /&gt;
Go into the folder and run the install script (this will enable the lxd-agent inside the VM):&lt;br /&gt;
&lt;br /&gt;
 # cd /mnt/&lt;br /&gt;
 # ./install.sh&lt;br /&gt;
&lt;br /&gt;
After a successful install, reboot with:&lt;br /&gt;
&lt;br /&gt;
 # reboot&lt;br /&gt;
&lt;br /&gt;
Afterwards the {{ic|lxd-agent}} is available and {{ic|lxc exec}} should work.&lt;br /&gt;
&lt;br /&gt;
=== Checking the kernel configuration ===&lt;br /&gt;
By default the Arch Linux kernel is compiled to support Linux Containers and their frontend LXD. If you are using a custom kernel or have changed kernel options, LXD may not work. Verify that your kernel is configured to run containers:&lt;br /&gt;
 $ lxc-checkconfig&lt;br /&gt;
&lt;br /&gt;
=== Resource limits are not applied when viewed from inside a container ===&lt;br /&gt;
&lt;br /&gt;
Install {{Pkg|lxcfs}} and [[start]] {{ic|lxcfs.service}}.&lt;br /&gt;
&lt;br /&gt;
lxd will need to be restarted. [[Enable]] {{ic|lxcfs.service}} for the service to be started at boot time.&lt;br /&gt;
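The install/start/enable steps above can be sketched as the following command sequence; this is a minimal sketch assuming the {{Pkg|lxcfs}} and {{Pkg|lxd}} packages from the official repositories and their default unit names:

```shell
# Install lxcfs, start it immediately, and enable it at boot.
pacman -S --needed lxcfs
systemctl enable --now lxcfs.service

# Restart LXD so that it picks up the lxcfs FUSE mounts.
systemctl restart lxd.service
```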
&lt;br /&gt;
=== Starting a virtual machine fails ===&lt;br /&gt;
&lt;br /&gt;
If you see the error {{ic|Error: Required EFI firmware settings file missing: /usr/share/ovmf/x64/OVMF_VARS.ms.fd}}:&lt;br /&gt;
&lt;br /&gt;
Arch Linux does not distribute secure boot signed OVMF firmware; to boot virtual machines, you need to disable secure boot for the time being.&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:18.04 test-vm --vm -c security.secureboot=false&lt;br /&gt;
&lt;br /&gt;
This can also be added to the default profile by doing:&lt;br /&gt;
&lt;br /&gt;
 $ lxc profile set default security.secureboot=false&lt;br /&gt;
&lt;br /&gt;
=== No IPv4 with systemd-networkd ===&lt;br /&gt;
&lt;br /&gt;
Starting with version 244.1, systemd detects if {{ic|/sys}} is writable by containers. If it is, udev is automatically started and breaks IPv4 in unprivileged containers. See [https://github.com/systemd/systemd-stable/commit/96d7083c5499b264ecebd6a30a92e0e8fda14cd5 commit bf331d8] and [https://discuss.linuxcontainers.org/t/no-ipv4-on-arch-linux-containers/6395 discussion on linuxcontainers].&lt;br /&gt;
&lt;br /&gt;
On containers created past 2020, there should already be a {{ic|systemd-networkd.service}} override to work around this issue; create it if it is not present:&lt;br /&gt;
&lt;br /&gt;
{{hc|1=/etc/systemd/system/systemd-networkd.service.d/lxc.conf|2=&lt;br /&gt;
[Service]&lt;br /&gt;
BindReadOnlyPaths=/sys&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
You could also work around this issue by setting {{ic|1=raw.lxc: lxc.mount.auto = proc:rw sys:ro}} in the profile of the container to ensure {{ic|/sys}} is read-only for the entire container, although this may be problematic, as per the linked discussion above.&lt;br /&gt;
&lt;br /&gt;
== Uninstallation ==&lt;br /&gt;
&lt;br /&gt;
[[Stop]] and disable {{ic|lxd.service}} and {{ic|lxd.socket}}. Then [[uninstall]] the {{Pkg|lxd}} package.&lt;br /&gt;
&lt;br /&gt;
If you uninstalled the package without disabling the service, you might have a lingering broken symlink at {{ic|/etc/systemd/system/multi-user.target.wants/lxd.service}}.&lt;br /&gt;
&lt;br /&gt;
If you want to remove all data:&lt;br /&gt;
&lt;br /&gt;
 # rm -r /var/lib/lxd&lt;br /&gt;
&lt;br /&gt;
If you used any of the example networking configurations, you should remove those as well.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [https://lxd.readthedocs.io Official documentation]&lt;br /&gt;
* [https://linuxcontainers.org/lxd/ LXD official homepage]&lt;br /&gt;
* [https://github.com/lxc/lxd LXD GitHub page]&lt;/div&gt;</summary>
		<author><name>Pedgin</name></author>
	</entry>
	<entry>
		<id>https://wiki.archlinux.jp/index.php?title=LXD&amp;diff=28924</id>
		<title>LXD</title>
		<link rel="alternate" type="text/html" href="https://wiki.archlinux.jp/index.php?title=LXD&amp;diff=28924"/>
		<updated>2022-12-21T00:11:20Z</updated>

		<summary type="html">&lt;p&gt;Pedgin: /* Setup for unprivileged containers */ copied from the English version (the article was outdated)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:仮想化]]&lt;br /&gt;
[[en:LXD]]&lt;br /&gt;
{{Related articles start}}&lt;br /&gt;
{{Related|Linux Containers}}&lt;br /&gt;
{{Related articles end}}&lt;br /&gt;
&#039;&#039;&#039;[https://linuxcontainers.org/lxd/ LXD]&#039;&#039;&#039; is a manager/hypervisor for containers (via LXC) and virtual machines (via [[QEMU]]).&lt;br /&gt;
&lt;br /&gt;
== Setup ==&lt;br /&gt;
=== Required software ===&lt;br /&gt;
Install the {{Pkg|lxd}} package and [[systemd#ユニットを使う|enable]] {{ic|lxd.service}}.&lt;br /&gt;
&lt;br /&gt;
==== Alternative installation methods ====&lt;br /&gt;
&lt;br /&gt;
You can install LXD with [[snapd]] by installing the {{AUR|snapd}} package and then running:&lt;br /&gt;
&lt;br /&gt;
 # snap install lxd&lt;br /&gt;
&lt;br /&gt;
=== Setup for unprivileged containers ===&lt;br /&gt;
It is recommended to use unprivileged containers (see [[Linux_Containers#Privileged_containers_or_unprivileged_containers]] for an explanation of the difference).&lt;br /&gt;
&lt;br /&gt;
For this, modify both {{ic|/etc/subuid}} and {{ic|/etc/subgid}} (if these files are not present, create them) to contain the mapping to the containerized uid/gid pairs for each user who shall be able to run the containers. The example below is simply for the root user (and systemd system unit):&lt;br /&gt;
&lt;br /&gt;
You can either use {{ic|usermod}} as follows:&lt;br /&gt;
&lt;br /&gt;
{{ic|usermod -v 1000000-1000999999 -w 1000000-1000999999 root}}&lt;br /&gt;
&lt;br /&gt;
Or modify the above mentioned files directly as follows:&lt;br /&gt;
&lt;br /&gt;
{{hc|/etc/subuid|&lt;br /&gt;
root:1000000:1000000000&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{hc|/etc/subgid|&lt;br /&gt;
root:1000000:1000000000&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Now, every container will be started {{ic|unprivileged}} by default.&lt;br /&gt;
&lt;br /&gt;
For the alternative, see [[#Privileged_containers|how to set up privileged containers]].&lt;br /&gt;
&lt;br /&gt;
=== Configuring LXD ===&lt;br /&gt;
To use LXD, you need to configure a storage pool and (if you want Internet access) networking. To do so, run the following command as root:&lt;br /&gt;
 # lxd init&lt;br /&gt;
&lt;br /&gt;
=== Accessing LXD as an unprivileged user ===&lt;br /&gt;
&lt;br /&gt;
By default the LXD daemon allows access to users in the {{ic|lxd}} group, so add your user to the group:&lt;br /&gt;
&lt;br /&gt;
 # usermod -a -G lxd &amp;lt;user&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Usage ==&lt;br /&gt;
&lt;br /&gt;
LXD consists of two parts:&lt;br /&gt;
* the daemon (the &#039;&#039;lxd&#039;&#039; binary)&lt;br /&gt;
* the client (the &#039;&#039;lxc&#039;&#039; binary)&lt;br /&gt;
&lt;br /&gt;
{{Note | lxc is not LXC; the naming is a bit confusing, you can read the [https://discuss.linuxcontainers.org/t/comparing-lxd-vs-lxc/24 forum post on comparing LXD vs LXC] regarding the difference.}}&lt;br /&gt;
&lt;br /&gt;
The client is used to control one or multiple daemon(s).&lt;br /&gt;
&lt;br /&gt;
The client can also be used to control remote LXD servers.&lt;br /&gt;
&lt;br /&gt;
=== Overview of commands ===&lt;br /&gt;
You can get an overview of all available commands by typing:&lt;br /&gt;
 &lt;br /&gt;
 $ lxc&lt;br /&gt;
&lt;br /&gt;
=== Create a container ===&lt;br /&gt;
You can create a container with {{ic| lxc launch}}, for example:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:20.04&lt;br /&gt;
&lt;br /&gt;
Containers are based on images, which are downloaded from image servers or remote LXD servers. &amp;lt;br&amp;gt;&lt;br /&gt;
You can see the list of already added servers with:&lt;br /&gt;
&lt;br /&gt;
 $ lxc remote list&lt;br /&gt;
&lt;br /&gt;
You can list all images on a server with {{ic| lxc image list}}, for example:&lt;br /&gt;
&lt;br /&gt;
 $ lxc image list images:&lt;br /&gt;
&lt;br /&gt;
This will show you all images on one of the default servers: [https://images.linuxcontainers.org images.linuxcontainers.org]&lt;br /&gt;
&lt;br /&gt;
You can also search for images by adding terms like the distribution name:&lt;br /&gt;
&lt;br /&gt;
 $ lxc image list images:debian&lt;br /&gt;
&lt;br /&gt;
Launch a container with an image from a specific server with:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch servername:imagename&lt;br /&gt;
&lt;br /&gt;
For example:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch images:centos/8/amd64 centos&lt;br /&gt;
&lt;br /&gt;
To create an amd64 Arch container:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch images:archlinux/current/amd64 arch&lt;br /&gt;
&lt;br /&gt;
=== Create a virtual machine ===&lt;br /&gt;
Just add the flag {{ic|--vm}} to {{ic|lxc launch}}:&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:20.04 --vm&lt;br /&gt;
&lt;br /&gt;
{{Note|&lt;br /&gt;
* For now virtual machines support fewer features than containers (see [https://linuxcontainers.org/lxd/advanced-guide/#difference-between-containers-and-virtual-machines Difference between containers and virtual machines] for example). &lt;br /&gt;
* Only {{ic|cloud}} variants of the official images enable the lxd-agent out-of-the-box (which is needed for the usual lxc commands like {{ic|lxc exec}}). &amp;lt;br&amp;gt; You can search for cloud images with {{ic|lxc image list images: cloud}} or {{ic|lxc image list images: distribution-name cloud}}. &amp;lt;br&amp;gt; If you use other images or encounter problems take a look at [[#lxd-agent_inside_a_virtual_machine]].&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
=== Use and manage a container or VM ===&lt;br /&gt;
&lt;br /&gt;
See [https://linuxcontainers.org/lxd/getting-started-cli/#instance-management Instance management in the official Getting Started Guide of LXD].&lt;br /&gt;
&lt;br /&gt;
=== Container/VM configuration (optional) ===&lt;br /&gt;
&lt;br /&gt;
You can add various options to instances (containers and VMs). &amp;lt;br&amp;gt;&lt;br /&gt;
See [https://linuxcontainers.org/lxd/advanced-guide/#configuration-of-instances Configuration of instances in the official Advanced Guide of LXD] for details.&lt;br /&gt;
&lt;br /&gt;
== Tips and tricks ==&lt;br /&gt;
&lt;br /&gt;
=== Access the containers by name on the host ===&lt;br /&gt;
&lt;br /&gt;
This assumes that you are using the default bridge, that it is named lxdbr0 and that you are using [[systemd-resolved]].&lt;br /&gt;
&lt;br /&gt;
  # systemd-resolve --interface lxdbr0 --set-domain &#039;~lxd&#039; --set-dns $(lxc network get lxdbr0 ipv4.address | cut -d / -f 1)&lt;br /&gt;
  &lt;br /&gt;
You can now access the containers by name:&lt;br /&gt;
&lt;br /&gt;
  $ ping &#039;&#039;containername&#039;&#039;.lxd&lt;br /&gt;
&lt;br /&gt;
==== Other solution ====&lt;br /&gt;
&lt;br /&gt;
It seems that the systemd-resolve solution stops working after some time.&lt;br /&gt;
&lt;br /&gt;
Another solution is to create a {{ic|/etc/systemd/network/lxd.network}} that contains (replace x and y to match your bridge IP):&lt;br /&gt;
&lt;br /&gt;
  [Match]&lt;br /&gt;
  Name=lxdbr0&lt;br /&gt;
  [Network]&lt;br /&gt;
  DNS=10.x.y.1&lt;br /&gt;
  Domains=~lxd&lt;br /&gt;
  IgnoreCarrierLoss=yes&lt;br /&gt;
  [Address]&lt;br /&gt;
  Address=10.x.y.1/24&lt;br /&gt;
  Gateway=10.x.y.1&lt;br /&gt;
&lt;br /&gt;
And then [[enable]] and [[start]] {{ic|systemd-networkd.service}}.&lt;br /&gt;
&lt;br /&gt;
=== Use Wayland and Xorg applications ===&lt;br /&gt;
&lt;br /&gt;
{{Note| Always consider security implications, as some of the described methods may weaken the separation between container and host. }}&lt;br /&gt;
&lt;br /&gt;
There are multiple methods to use GUI applications inside containers. &lt;br /&gt;
  &lt;br /&gt;
You can find an overview in the official Forum of LXD: https://discuss.linuxcontainers.org/t/overview-gui-inside-containers/8767&lt;br /&gt;
&lt;br /&gt;
==== Method 1: Use the host&#039;s Wayland or Xorg Server ====&lt;br /&gt;
{{Note| Using Xorg might weaken the separation between container and host, because Xorg allows applications to access other applications&#039; windows. So container applications might have access to host applications&#039; windows. &amp;lt;br&amp;gt;&lt;br /&gt;
Use Wayland instead (but be aware that Xorg&#039;s downsides also apply to XWayland).}}&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Summary:&#039;&#039;&#039; In this method we grant containers access to the host&#039;s sockets of Wayland (+XWayland) or Xorg.&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;1. Add the following devices to a container&#039;s profile.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
See also: [https://linuxcontainers.org/lxd/docs/master/instances#device-types LXD-Documentation regarding Devices]&lt;br /&gt;
&lt;br /&gt;
General device for the GPU:&lt;br /&gt;
&lt;br /&gt;
 mygpu:&lt;br /&gt;
    type: gpu&lt;br /&gt;
&lt;br /&gt;
{{Note| The path under &amp;quot;listen&amp;quot; is different, because /run and /tmp folders might be overridden, see: https://github.com/lxc/lxd/issues/4540 }}&lt;br /&gt;
&lt;br /&gt;
Device for the Wayland Socket: &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Notes:&#039;&#039;&#039; &amp;lt;br&amp;gt;&lt;br /&gt;
* Adjust the Display (wayland-0) accordingly.&lt;br /&gt;
* Add the folders in /mnt and /tmp inside the container, if they don&#039;t already exist.&lt;br /&gt;
&lt;br /&gt;
 Waylandsocket:&lt;br /&gt;
     bind: container&lt;br /&gt;
     connect: unix:/run/user/1000/wayland-0&lt;br /&gt;
     listen: unix:/mnt/wayland1/wayland-0&lt;br /&gt;
     uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     mode: &amp;quot;0777&amp;quot;&lt;br /&gt;
     type: proxy&lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
Device for the Xorg (or XWayland) Socket: &amp;lt;br&amp;gt;&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Adjust the Display Number accordingly (for example X1 instead of X0).&lt;br /&gt;
&lt;br /&gt;
 Xsocket:&lt;br /&gt;
     bind: container&lt;br /&gt;
     connect: unix:/tmp/.X11-unix/X0&lt;br /&gt;
     listen: unix:/mnt/xorg1/X0&lt;br /&gt;
     uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.gid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     security.uid: &amp;quot;1000&amp;quot;&lt;br /&gt;
     mode: &amp;quot;0777&amp;quot;&lt;br /&gt;
     type: proxy&lt;br /&gt;
     &lt;br /&gt;
     &lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;2. Link the sockets to the right location inside the container.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; These scripts need to be run after each start of the container; you can automate this with systemd, for example.&lt;br /&gt;
&lt;br /&gt;
Shell-Script to link the Wayland socket:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 mkdir /run/user/1000&lt;br /&gt;
 ln -s /mnt/wayland1/wayland-0 /run/user/1000/wayland-0&lt;br /&gt;
&lt;br /&gt;
Link the Xorg (or XWayland) socket:&lt;br /&gt;
&lt;br /&gt;
 #!/bin/sh&lt;br /&gt;
 ln -s /mnt/xorg1/X0 /tmp/.X11-unix/X0&lt;br /&gt;
&lt;br /&gt;
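The note above suggests automating these socket links with systemd. Below is a minimal sketch of a container-side unit; the unit name and the script path {{ic|/usr/local/bin/link-gui-sockets.sh}} are hypothetical (save the scripts above under whatever path you choose), and uid 1000 is assumed as in the rest of this section:

```ini
# /etc/systemd/system/link-gui-sockets.service (hypothetical unit name)
[Unit]
Description=Link proxied Wayland/X11 sockets into place
After=local-fs.target

[Service]
Type=oneshot
# Path to the linking script shown above; adjust as needed.
ExecStart=/usr/local/bin/link-gui-sockets.sh

[Install]
WantedBy=multi-user.target
```

Enable it inside the container with {{ic|systemctl enable link-gui-sockets.service}} so the links are recreated on every container start.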
&lt;br /&gt;
&#039;&#039;&#039;3. Add Environment variables to the users config inside the container.&#039;&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
&#039;&#039;&#039;Note:&#039;&#039;&#039; Adjust the Display Numbers and/or the filename (.profile) accordingly.&lt;br /&gt;
&lt;br /&gt;
For Wayland:&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;export XDG_RUNTIME_DIR=/run/user/1000&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
 $ echo &amp;quot;export WAYLAND_DISPLAY=wayland-0&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
 $ echo &amp;quot;export QT_QPA_PLATFORM=wayland&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
&lt;br /&gt;
For Xorg (or XWayland):&lt;br /&gt;
&lt;br /&gt;
 $ echo &amp;quot;export DISPLAY=:0&amp;quot; &amp;gt;&amp;gt; ~/.profile&lt;br /&gt;
&lt;br /&gt;
Reload the .profile:&lt;br /&gt;
&lt;br /&gt;
 $ . ~/.profile&lt;br /&gt;
&lt;br /&gt;
== Troubleshooting ==&lt;br /&gt;
&lt;br /&gt;
=== lxd-agent inside a virtual machine ===&lt;br /&gt;
&lt;br /&gt;
Inside some virtual machine images the {{ic|lxd-agent}} is not enabled by default. &amp;lt;br&amp;gt;&lt;br /&gt;
In this case you have to enable it manually, for example by mounting a {{ic|9p}} network share. This requires console access with a valid user.&lt;br /&gt;
&lt;br /&gt;
1. Log in with {{ic|lxc console}}: &amp;lt;br&amp;gt;&lt;br /&gt;
Replace {{ic|virtualmachine-name}} accordingly.&lt;br /&gt;
&lt;br /&gt;
 $ lxc console virtualmachine-name&lt;br /&gt;
&lt;br /&gt;
Log in as root:&lt;br /&gt;
{{Note | On some systems you have to set up a root password first to be able to log in as root. &amp;lt;br&amp;gt; You can use [https://linuxcontainers.org/lxd/advanced-guide/#cloud-init cloud-init] for this, for example.}}&lt;br /&gt;
&lt;br /&gt;
 $ su root&lt;br /&gt;
&lt;br /&gt;
Mount the network share:&lt;br /&gt;
&lt;br /&gt;
 # mount -t 9p config /mnt/&lt;br /&gt;
&lt;br /&gt;
Go into the folder and run the install script (this will enable the lxd-agent inside the VM):&lt;br /&gt;
&lt;br /&gt;
 # cd /mnt/&lt;br /&gt;
 # ./install.sh&lt;br /&gt;
&lt;br /&gt;
After a successful install, reboot with:&lt;br /&gt;
&lt;br /&gt;
 # reboot&lt;br /&gt;
&lt;br /&gt;
Afterwards the {{ic|lxd-agent}} is available and {{ic|lxc exec}} should work.&lt;br /&gt;
&lt;br /&gt;
=== Checking the kernel configuration ===&lt;br /&gt;
By default the Arch Linux kernel is compiled to support Linux Containers and their frontend LXD. If you are using a custom kernel or have changed kernel options, LXD may not work. Verify that your kernel is configured to run containers:&lt;br /&gt;
 $ lxc-checkconfig&lt;br /&gt;
&lt;br /&gt;
=== Resource limits are not applied when viewed from inside a container ===&lt;br /&gt;
&lt;br /&gt;
Install {{Pkg|lxcfs}} and [[start]] {{ic|lxcfs.service}}.&lt;br /&gt;
&lt;br /&gt;
lxd will need to be restarted. [[Enable]] {{ic|lxcfs.service}} for the service to be started at boot time.&lt;br /&gt;
&lt;br /&gt;
=== Starting a virtual machine fails ===&lt;br /&gt;
&lt;br /&gt;
If you see the error {{ic|Error: Required EFI firmware settings file missing: /usr/share/ovmf/x64/OVMF_VARS.ms.fd}}:&lt;br /&gt;
&lt;br /&gt;
Arch Linux does not distribute secure boot signed OVMF firmware; to boot virtual machines, you need to disable secure boot for the time being.&lt;br /&gt;
&lt;br /&gt;
 $ lxc launch ubuntu:18.04 test-vm --vm -c security.secureboot=false&lt;br /&gt;
&lt;br /&gt;
This can also be added to the default profile by doing:&lt;br /&gt;
&lt;br /&gt;
 $ lxc profile set default security.secureboot=false&lt;br /&gt;
&lt;br /&gt;
=== No IPv4 with systemd-networkd ===&lt;br /&gt;
&lt;br /&gt;
Starting with version 244.1, systemd detects if {{ic|/sys}} is writable by containers. If it is, udev is automatically started and breaks IPv4 in unprivileged containers. See [https://github.com/systemd/systemd-stable/commit/96d7083c5499b264ecebd6a30a92e0e8fda14cd5 commit bf331d8] and [https://discuss.linuxcontainers.org/t/no-ipv4-on-arch-linux-containers/6395 discussion on linuxcontainers].&lt;br /&gt;
&lt;br /&gt;
On containers created past 2020, there should already be a {{ic|systemd-networkd.service}} override to work around this issue; create it if it is not present:&lt;br /&gt;
&lt;br /&gt;
{{hc|1=/etc/systemd/system/systemd-networkd.service.d/lxc.conf|2=&lt;br /&gt;
[Service]&lt;br /&gt;
BindReadOnlyPaths=/sys&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
You could also work around this issue by setting {{ic|1=raw.lxc: lxc.mount.auto = proc:rw sys:ro}} in the profile of the container to ensure {{ic|/sys}} is read-only for the entire container, although this may be problematic, as per the linked discussion above.&lt;br /&gt;
&lt;br /&gt;
== Uninstallation ==&lt;br /&gt;
&lt;br /&gt;
[[Stop]] and disable {{ic|lxd.service}} and {{ic|lxd.socket}}. Then [[uninstall]] the {{Pkg|lxd}} package.&lt;br /&gt;
&lt;br /&gt;
If you uninstalled the package without disabling the service, you might have a lingering broken symlink at {{ic|/etc/systemd/system/multi-user.target.wants/lxd.service}}.&lt;br /&gt;
&lt;br /&gt;
If you want to remove all data:&lt;br /&gt;
&lt;br /&gt;
 # rm -r /var/lib/lxd&lt;br /&gt;
&lt;br /&gt;
If you used any of the example networking configurations, you should remove those as well.&lt;br /&gt;
&lt;br /&gt;
== See also ==&lt;br /&gt;
&lt;br /&gt;
* [https://lxd.readthedocs.io Official documentation]&lt;br /&gt;
* [https://linuxcontainers.org/lxd/ LXD official homepage]&lt;br /&gt;
* [https://github.com/lxc/lxd LXD GitHub page]&lt;/div&gt;</summary>
		<author><name>Pedgin</name></author>
	</entry>
	<entry>
		<id>https://wiki.archlinux.jp/index.php?title=%E3%82%AB%E3%83%BC%E3%83%8D%E3%83%AB/Arch_build_system&amp;diff=28921</id>
		<title>カーネル/Arch build system</title>
		<link rel="alternate" type="text/html" href="https://wiki.archlinux.jp/index.php?title=%E3%82%AB%E3%83%BC%E3%83%8D%E3%83%AB/Arch_build_system&amp;diff=28921"/>
		<updated>2022-12-20T04:44:43Z</updated>

		<summary type="html">&lt;p&gt;Pedgin: Changed the PKGBUILD patch to match the English wiki text as of 25 August 2022.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;[[Category:カーネル]]&lt;br /&gt;
[[de:Eigenen Kernel erstellen]]&lt;br /&gt;
[[en:Kernels/Arch Build System]]&lt;br /&gt;
[[it:Kernels/Arch Build System]]&lt;br /&gt;
[[ru:Kernels/Arch Build System]]&lt;br /&gt;
[[zh-hans:Kernels/Arch Build System]]&lt;br /&gt;
Using the [[Arch Build System]], you can build a custom kernel based on the official {{Pkg|linux}} package. This compilation method can automate the whole process and is based on a well-tested package. By editing the PKGBUILD you can configure the custom kernel and add patches.&lt;br /&gt;
&lt;br /&gt;
==Getting the ingredients==&lt;br /&gt;
&lt;br /&gt;
Since [[makepkg]] is used, follow its best practices; for example, makepkg cannot be run as root or with sudo. So, first create a {{ic|build}} directory in your home directory:&lt;br /&gt;
 $ mkdir build&lt;br /&gt;
 $ cd build/&lt;br /&gt;
&lt;br /&gt;
[[インストール|Install]] the {{Pkg|asp}} package and the {{Grp|base-devel}} package group.&lt;br /&gt;
&lt;br /&gt;
To start customizing, you need a clean kernel. [[Arch Build System#Git を使って PKGBUILD ソースを取得|Retrieve the PKGBUILD source using Git]] and copy the files into the build directory:&lt;br /&gt;
&lt;br /&gt;
 $ asp update linux&lt;br /&gt;
 $ asp export linux&lt;br /&gt;
&lt;br /&gt;
At this point the directory tree should look like this (there may be some other files as well):&lt;br /&gt;
&lt;br /&gt;
{{bc|~/build/linux/-+&lt;br /&gt;
               +--config&lt;br /&gt;
               \__PKGBUILD&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
Then get the files you need (e.g. custom configuration files, patches, etc.) from their respective sources.&lt;br /&gt;
&lt;br /&gt;
==Modifying the PKGBUILD==&lt;br /&gt;
Change {{ic|pkgbase}} in the PKGBUILD to the name of your custom package, for example:&lt;br /&gt;
 pkgbase=linux-custom&lt;br /&gt;
&lt;br /&gt;
Depending on the PKGBUILD, you may also need to rename {{ic|linux.install}} to match {{ic|pkgbase}} (e.g. {{Pkg|linux-hardened}}).&lt;br /&gt;
&lt;br /&gt;
{{Warning|Do &#039;&#039;&#039;not&#039;&#039;&#039; add {{ic|linux}} to the {{ic|provides}} array: a custom kernel cannot satisfy that dependency, because binary modules built against the official kernel are not compatible with it. For the same reason, do not add {{ic|linux-headers}} to the {{ic|provides}} array of the headers package.}}&lt;br /&gt;
&lt;br /&gt;
=== Avoiding building the documentation ===&lt;br /&gt;
&lt;br /&gt;
Most of the long [https://wiki.archlinux.jp/index.php/%E3%82%AB%E3%83%BC%E3%83%8D%E3%83%AB/%E3%82%B3%E3%83%B3%E3%83%91%E3%82%A4%E3%83%AB/Arch_Build_System#.E3.82.B3.E3.83.B3.E3.83.91.E3.82.A4.E3.83.AB compilation] process is spent building the documentation. As of 25 August 2022, the following patch to the PKGBUILD avoids building it:&lt;br /&gt;
&lt;br /&gt;
{{bc|1=&lt;br /&gt;
63c63&lt;br /&gt;
&amp;lt;   make htmldocs all&lt;br /&gt;
---&lt;br /&gt;
&amp;gt;   make all&lt;br /&gt;
195c195&lt;br /&gt;
&amp;lt; pkgname=(&amp;quot;$pkgbase&amp;quot; &amp;quot;$pkgbase-headers&amp;quot; &amp;quot;$pkgbase-docs&amp;quot;)&lt;br /&gt;
---&lt;br /&gt;
&amp;gt; pkgname=(&amp;quot;$pkgbase&amp;quot; &amp;quot;$pkgbase-headers&amp;quot;)&lt;br /&gt;
}}&lt;br /&gt;
This patch changes lines #63 and #195. If it does not apply cleanly to your PKGBUILD file, you may need to edit it manually.&lt;br /&gt;
&lt;br /&gt;
=== Changing prepare() ===&lt;br /&gt;
&lt;br /&gt;
In the {{ic|prepare()}} function you can [[パッケージにパッチを適用#パッチの適用|apply patches]] and change the configuration of the kernel build.&lt;br /&gt;
&lt;br /&gt;
If you need to change some options, you can edit the source&#039;s configuration file.&lt;br /&gt;
&lt;br /&gt;
You can also adjust the options with a GUI tool. Comment out {{ic|make olddefconfig}} in the PKGBUILD&#039;s prepare() function and add the tool of your choice:&lt;br /&gt;
{{hc|PKGBUILD|&lt;br /&gt;
...&lt;br /&gt;
  msg2 &amp;quot;Setting config...&amp;quot;&lt;br /&gt;
  cp ../config .config&lt;br /&gt;
  #make olddefconfig&lt;br /&gt;
&lt;br /&gt;
  make nconfig # new CLI menu for configuration&lt;br /&gt;
  #make menuconfig # CLI menu for configuration&lt;br /&gt;
  #make xconfig # X-based configuration&lt;br /&gt;
  #make oldconfig # using old config from previous kernel version&lt;br /&gt;
  # ... or manually edit .config&lt;br /&gt;
  make prepare&lt;br /&gt;
...&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{Warning|systemd has a number of kernel configuration options that must be set for certain use cases (e.g. UEFI) or to use certain systemd features (e.g. bootchart). If they are not set correctly, the system may misbehave or not work at all. The list of required and recommended kernel options is in {{ic|/usr/share/doc/systemd/README}}; check it carefully before compiling. The required options change from time to time, and since Arch assumes the official kernel is used, such changes are not announced. Before installing a new version of systemd, check the release notes and make sure your custom kernel meets the new requirements.}}&lt;br /&gt;
&lt;br /&gt;
===Generating new checksums===&lt;br /&gt;
[https://wiki.archlinux.jp/index.php/%E3%82%AB%E3%83%BC%E3%83%8D%E3%83%AB/%E3%82%B3%E3%83%B3%E3%83%91%E3%82%A4%E3%83%AB/Arch_Build_System#prepare.28.29_.E3.81.AE.E5.A4.89.E6.9B.B4 Changing prepare()] implies possible changes to {{ic|$_srcname/.config}}. Since this path is not where the downloaded package files ended up, its checksum was not checked by makepkg (for the actual location, check {{ic|$_srcname/../../config}}).&lt;br /&gt;
&lt;br /&gt;
If you replaced the downloaded {{ic|config}} with a different configuration file before running makepkg, [[インストール|install]] the {{Pkg|pacman-contrib}} package.&lt;br /&gt;
Running the following command will generate new checksums:&lt;br /&gt;
 $ updpkgsums&lt;br /&gt;
&lt;br /&gt;
== Compiling ==&lt;br /&gt;
&lt;br /&gt;
Compile the kernel using the same build command as for an ordinary package ({{ic|makepkg}}).&lt;br /&gt;
&lt;br /&gt;
If you chose an interactive program (such as menuconfig) for setting the kernel parameters, do the configuration during the compile.&lt;br /&gt;
&lt;br /&gt;
 $ makepkg -s&lt;br /&gt;
&lt;br /&gt;
The {{ic|-s}} parameter downloads the dependency packages that recent kernels require, such as XML and documentation tools.&lt;br /&gt;
&lt;br /&gt;
{{Note|&lt;br /&gt;
* The kernel sources come with a [https://www.kernel.org/signature.html#kernel-org-web-of-trust PGP signature], and makepkg verifies it. See [[Makepkg#署名チェック]] for details.&lt;br /&gt;
* On multi-core systems, compile time can be reduced significantly by [[Makepkg#MAKEFLAGS|running multiple compile jobs simultaneously]].}}&lt;br /&gt;
&lt;br /&gt;
==Installation==&lt;br /&gt;
When &#039;&#039;makepkg&#039;&#039; has finished, you should be able to see that the variables in the {{ic|linux.install}} file have changed.&lt;br /&gt;
&lt;br /&gt;
All that remains is to install the packages as usual with pacman (or a pacman alternative). It is a good idea to install the kernel headers first, as the custom kernel may need them (e.g. when installing the [[NVIDIA#カスタムカーネル|nvidia]] driver):&lt;br /&gt;
 # pacman -U &#039;&#039;kernel-headers_package&#039;&#039;&lt;br /&gt;
 # pacman -U &#039;&#039;kernel_package&#039;&#039;&lt;br /&gt;
&lt;br /&gt;
==Boot loader==&lt;br /&gt;
The folder and files for your custom kernel have now been created, e.g. {{ic|/boot/vmlinuz-linux-test}}. To test your kernel, update your [[ブートローダー|boot loader]] configuration file and add new entries (&#039;default&#039; and &#039;fallback&#039;) for your custom kernel. If you renamed the kernel via &#039;&#039;pkgbase in the PKGBUILD&#039;&#039;, you need to rename {{ic|initramfs.img}} in &#039;&#039;$build/pkg/kernel/etc&#039;&#039; before installing with pacman. That way you can choose between the standard kernel and the custom kernel.&lt;br /&gt;
&lt;br /&gt;
== Updating ==&lt;br /&gt;
Assuming you have an arch kernel source you want to update, one way to do it is to use https://git.archlinux.org/linux.git. In the following, the top-level directory of the kernel source is assumed to be ~/build/linux/.&lt;br /&gt;
&lt;br /&gt;
In general, arch sets up the arch kernel source with two local git repositories. In archlinux-linux/ there is a local bare git repository pointing at git://git.archlinux.org/linux.git. The other is in &#039;&#039;&#039;src/&#039;&#039;&#039;archlinux-linux/ and fetches from the first repository. For possible local patches and the build, see &#039;&#039;&#039;src/&#039;&#039;&#039;archlinux-linux/. Note that the names of the archlinux-linux/ and &#039;&#039;&#039;src/&#039;&#039;&#039;archlinux-linux/ directories differ.&lt;br /&gt;
&lt;br /&gt;
 $ cd ~/build/linux/archlinux-linux/&lt;br /&gt;
&lt;br /&gt;
In this example, the HEAD of the bare git repository source installed locally in archlinux-linux/ initially points at&lt;br /&gt;
{{bc|&lt;br /&gt;
$ git log --oneline --max-count 1 HEAD&lt;br /&gt;
4010b622f1d2 Merge branch &#039;dax-fix-5.3-rc3&#039; of git://git.kernel.org/pub/scm/linux/kernel/git/nvdimm/nvdimm}}&lt;br /&gt;
which is somewhere between v5.2.5-arch1 and v5.2.6-arch1.&lt;br /&gt;
 $ git fetch --verbose&lt;br /&gt;
This prints the newly fetched tags, showing that it fetched the latest archlinux tag, v5.2.7-arch1. If no new tags were fetched, no newer archlinux source is available.&lt;br /&gt;
&lt;br /&gt;
Now you can update the source where the actual build takes place:&lt;br /&gt;
{{bc|&lt;br /&gt;
$ cd ~/build/linux/src/archlinux-linux/&lt;br /&gt;
$ git checkout master&lt;br /&gt;
$ git pull&lt;br /&gt;
$ git fetch --tags --verbose&lt;br /&gt;
$ git branch --verbose 5.2.7-arch1 v5.2.7-arch1&lt;br /&gt;
$ git checkout 5.2.7-arch1&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
You can verify where things stand with something like:&lt;br /&gt;
{{bc|1=&lt;br /&gt;
$ git log --oneline 5.2.7-arch1 --max-count=7&lt;br /&gt;
13193bfc03d4 &#039;&#039;&#039;Arch Linux kernel v5.2.7-arch1&#039;&#039;&#039;&lt;br /&gt;
9475c6772d05 netfilter: nf_tables: fix module autoload for redir&lt;br /&gt;
498d650048f6 iwlwifi: Add support for SAR South Korea limitation&lt;br /&gt;
bb7293abdbc7 iwlwifi: mvm: disable TX-AMSDU on older NICs&lt;br /&gt;
f676926c7f60 ZEN: Add CONFIG for unprivileged_userns_clone&lt;br /&gt;
5e4e503f4f28 add sysctl to disallow unprivileged CLONE_NEWUSER by default&lt;br /&gt;
5697a9d3d55f &#039;&#039;&#039;Linux 5.2.7&#039;&#039;&#039;&lt;br /&gt;
}}&lt;br /&gt;
This shows some archlinux-specific patches between the Arch Linux kernel v5.2.7-arch1 and Linux 5.2.7. The important entries here are Arch Linux kernel v5.2.7-arch1 and Linux 5.2.7. Obviously, other versions may carry other patches, so the 7 in {{ic|--max-count}} may need adjusting. Likewise, commit identifiers such as f676926c7f60 and the kernel versions will differ for other versions.&lt;br /&gt;
&lt;br /&gt;
The latest PKGBUILD and the archlinux kernel configuration file can be retrieved with the {{ic|asp}} command:&lt;br /&gt;
&lt;br /&gt;
{{bc|&lt;br /&gt;
$ cd ~/build/linux/&lt;br /&gt;
$ asp update linux&lt;br /&gt;
$ asp export linux&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
{{note|The {{ic|asp}} command may not update the linux files even when there is a new archlinux source tag. A possible reason is that the archlinux linux files lag behind the archlinux linux source.}}&lt;br /&gt;
Now the files in {{ic|~/build/linux/linux/*}} need to be merged into {{ic|~/build/linux/}}, for example with [https://wiki.archlinux.jp/index.php/Vim#.E3.83.95.E3.82.A1.E3.82.A4.E3.83.AB.E3.81.AE.E3.83.9E.E3.83.BC.E3.82.B8_.28vimdiff.29 Vim merging (vimdiff)]. The merge can be done manually or with [https://wiki.archlinux.jp/index.php/%E3%82%A2%E3%83%97%E3%83%AA%E3%82%B1%E3%83%BC%E3%82%B7%E3%83%A7%E3%83%B3%E4%B8%80%E8%A6%A7#Comparison.2C_diff.2C_merge comparison, diff and merge tools]. Check [https://wiki.archlinux.jp/index.php?title=%E3%82%AB%E3%83%BC%E3%83%8D%E3%83%AB/%E3%82%B3%E3%83%B3%E3%83%91%E3%82%A4%E3%83%AB/Arch_Build_System&amp;amp;action=submit#prepare.28.29_.E3.81.AE.E5.A4.89.E6.9B.B4 #Changing prepare()] and manually run most, if not all, of the shell commands in PKGBUILD::prepare().&lt;br /&gt;
&lt;br /&gt;
At this point, {{ic|makepkg --verifysource}} should succeed. When it is time to [https://wiki.archlinux.jp/index.php?title=%E3%82%AB%E3%83%BC%E3%83%8D%E3%83%AB/%E3%82%B3%E3%83%B3%E3%83%91%E3%82%A4%E3%83%AB/Arch_Build_System&amp;amp;action=submit#.E3.82.B3.E3.83.B3.E3.83.91.E3.82.A4.E3.83.AB compile], also add the {{ic|--noextract}} option to {{ic|makepkg}}; this should build the package as if the sources had been extracted by makepkg --nobuild. Then return to [https://wiki.archlinux.jp/index.php?title=%E3%82%AB%E3%83%BC%E3%83%8D%E3%83%AB/%E3%82%B3%E3%83%B3%E3%83%91%E3%82%A4%E3%83%AB/Arch_Build_System&amp;amp;action=submit#.E3.82.A4.E3.83.B3.E3.82.B9.E3.83.88.E3.83.BC.E3.83.AB #Installation].&lt;br /&gt;
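The update workflow described in this paragraph can be sketched as the following command sequence, assuming the ~/build/linux/ layout used throughout this section:

```shell
# Verify the downloaded sources against the (regenerated) checksums,
# then build without re-extracting the already-merged source tree.
cd ~/build/linux/
makepkg --verifysource
makepkg --noextract
```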
&lt;br /&gt;
=== Cleaning up ===&lt;br /&gt;
After merging, you may want to delete {{ic|~/build/linux/linux/}}. Also, {{ic|~/build/linux/src/archlinux}} accumulates branches of the form {{ic|5.2.7-arch1}} as further updates are done this way. These can be deleted with:&lt;br /&gt;
&lt;br /&gt;
 $ cd ~/build/linux/src/archlinux&lt;br /&gt;
 $ git branch --delete --force --verbose 5.2.7-arch1&lt;br /&gt;
&lt;br /&gt;
==See also==&lt;br /&gt;
* https://www.kernel.org/doc/html/latest/kbuild/kconfig.html and the parent directory&lt;/div&gt;</summary>
		<author><name>Pedgin</name></author>
	</entry>
</feed>