BeeGFS
BeeGFS is a scalable network-storage platform with a focus on being distributed, resilient, highly configurable and having good performance and high reliability. BeeGFS is extremely configurable, with administrators being able to control virtually all aspects of the system. A command line interface is used to monitor and control the cluster.

From Wikipedia:

BeeGFS (formerly FhGFS) is a parallel file system, developed and optimized for high-performance computing. BeeGFS includes a distributed metadata architecture for scalability and flexibility reasons. Its most important aspect is data throughput. BeeGFS was originally developed at the Fraunhofer Center for High Performance Computing in Germany by a team around Sven Breuner, who later became the CEO of ThinkParQ, the spin-off company that was founded in 2014 to maintain BeeGFS and offer professional services.

From BeeGFS.io:

BeeGFS is the leading parallel cluster file system, developed with a strong focus on performance and designed for very easy installation and management. If I/O intensive workloads are your problem, BeeGFS is the solution.

Terminology

Tip: A full glossary is available in the official documentation.
Node type and description | Package

Management Server (one node) | beegfs-mgmtdAUR
  • Manages configuration and group membership
  • Hostname or IP address must be known by other nodes at service start time

Metadata Server (at least one node) | beegfs-metaAUR
  • Stores directory information and allocates file space on storage servers

Storage Server (at least one node) | beegfs-storageAUR
  • Stores raw file contents

InfluxDB / Grafana based Monitoring Server (optional) | beegfs-monAUR
  • Continuous monitoring of servers
  • Live statistics
  • beegfs-admon (the Java based administration and monitoring GUI) must not be installed on the same server

BeeGFS utilities for administrators | beegfs-utilsAUR
  • beegfs-ctl tool for command-line administration
  • beegfs-fsck tool for file system checking
  • Several small helper scripts, such as logging and DNS lookup functionality

BeeGFS Common | beegfs-commonAUR

Client | beegfs-clientAUR
  • Kernel module to mount the file system
  • Requires a userspace helper daemon for logging and hostname resolution
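
All of these packages are in the AUR, so each node builds and installs its packages with the usual makepkg workflow (or an AUR helper). As a minimal sketch for the management daemon, assuming base-devel and git are already installed:

$ git clone https://aur.archlinux.org/beegfs-mgmtd.git
$ cd beegfs-mgmtd
$ makepkg -si

The same pattern applies to the other beegfs-* packages on their respective nodes.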

In addition to the free and open-source packages described here, BeeGFS also offers a number of Enterprise Features and Professional Support, which include:

  • High Availability
  • Quota Enforcement
  • Access Control Lists (ACLs)
  • Storage Pools
  • Burst buffer function with BeeOND
Warning: Whilst the BeeGFS server components are userspace daemons, the client is a native kernel module. The latest version of BeeGFS, v7.1.3, supports kernels up to 4.19.x, hence a number of ad hoc patches to the client source build files are included in the beegfs-clientAUR PKGBUILD. This in turn may lead to instability with the client kernel module.

Installation

Example cluster deployment

The following hardware configuration will be used in this example:

Hostname | IP Address  | Description
node01   | 192.168.0.1 | Management and (optional) Monitoring Server
node02   | 192.168.0.2 | Metadata Server
node03   | 192.168.0.3 | Storage Server
node04   | 192.168.0.4 | Client
Tip: One is free to choose dedicated hosts for all BeeGFS services. BeeGFS allows running any combination of services (including client and storage/metadata services) on the same machine. The management and mon daemons in particular are not performance-critical and thus typically do not run on dedicated machines.

NTP client

Install and run a time synchronization client on all the nodes. See Time synchronization for details.

Note: It is strongly recommended to synchronize the clocks on all cluster nodes to prevent clock drift (see System time#Time skew for details), which can degrade the performance of your cluster or stop it from functioning altogether. The official documentation recommends that nodes run some form of clock synchronization.
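
For instance, with systemd-timesyncd (just one of the options covered in Time synchronization), enabling clock synchronization on each node is a single command:

# timedatectl set-ntp true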

Management server

Install the package beegfs-mgmtdAUR on the management node 192.168.0.1.

The management service needs to know where it can store its data. It will only store some node information like connectivity data, so it will not require much storage space and its data access is not performance critical. Thus, this service is typically not running on a dedicated machine.

/etc/beegfs/beegfs-mgmtd.conf
storeMgmtdDirectory = /mnt/beegfs/beegfs-mgmtd

Start/enable the beegfs-mgmtd@node01.service on the management node.
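
In concrete terms, that amounts to something like the following on node01, assuming the storeMgmtdDirectory path configured above (creating the directory up front is harmless even if the daemon would initialize it itself):

# mkdir -p /mnt/beegfs/beegfs-mgmtd
# systemctl enable --now beegfs-mgmtd@node01.service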

Monitoring server

Install the package beegfs-monAUR on the management/monitoring node 192.168.0.1. beegfs-mon collects statistics from the system and provides them to the user using the time series database InfluxDB. For visualization of the data, beegfs-mon provides predefined Grafana panels that can be used out of the box.

Before running beegfs-mon, you need to edit the configuration file /etc/beegfs/beegfs-mon.conf. If you have everything installed on the same host, you only need to specify the management host:

/etc/beegfs/beegfs-mon.conf
sysMgmtHost = localhost
Tip: If your InfluxDB is installed on another host (say, the client node), or you need to use a different database port or name, you also need to modify the corresponding entries:
/etc/beegfs/beegfs-mon.conf
dbHostName = node04
dbHostPort = 9096
dbDatabase = beegfs_mon_client

Start/enable the beegfs-mon@node01.service on the management/monitoring node.
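
Once the service is up, one way to check that statistics are arriving is to list the InfluxDB databases. A sketch assuming InfluxDB 1.x and its influx command-line client (the database name beegfs-mon writes to is the one configured above):

$ influx -execute 'SHOW DATABASES'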

Configuration of default Grafana panels

You can use the provided installation script for default InfluxDB and Grafana deployments on the same host.

# cd /etc/beegfs/grafana
# ./import-dashboards default

Accessing Grafana panels

Access the application on localhost, e.g. http://127.0.0.1:3000. Refer to Custom Grafana Panel Configuration for non-default installations, and to the Reference to All Metrics for a list of the metrics monitored.
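
Before opening a browser, a quick check that Grafana is answering on the assumed default port 3000:

$ curl -I http://127.0.0.1:3000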

Note: Different services on the same machine cannot share the same storage directory, so different directories have to be used, i.e. /mnt/beegfs/beegfs-mgmtd for management servers and /mnt/beegfs/beegfs-mon for monitoring servers.

Metadata server

Install the package beegfs-metaAUR on the metadata server(s), in this example 192.168.0.2.

The metadata service needs to know where it can store its data and where the management service is running. Typically, one will have multiple metadata services running on different machines.

/etc/beegfs/beegfs-meta.conf
sysMgmtdHost = node01
storeMetaDirectory = /mnt/beegfs/beegfs-meta

Start/enable the beegfs-meta@node02.service on the metadata node.

Storage server

Install the package beegfs-storageAUR on the storage server(s), in this example 192.168.0.3.

The storage service needs to know where it can store its data and how to reach the management server. Typically, one will have multiple storage services running on different machines and/or multiple storage targets (e.g. multiple RAID volumes) per storage service.

/etc/beegfs/beegfs-storage.conf
sysMgmtdHost = node01
storeStorageDirectory = /mnt/beegfs/beegfs-storage

Start/enable the beegfs-storage@node03.service on the storage node.
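
For the multiple-storage-target case mentioned above, several targets are configured per storage service. As an illustrative, unverified sketch with hypothetical RAID mount points (check the comments in the shipped beegfs-storage.conf for the exact list syntax):

/etc/beegfs/beegfs-storage.conf
sysMgmtdHost = node01
storeStorageDirectory = /mnt/raid1/beegfs-storage,/mnt/raid2/beegfs-storage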

Client

Install the package beegfs-clientAUR on the client node; it will build the client kernel module.

The client service needs to know where it can reach the management server.

/etc/beegfs/beegfs-client.conf
sysMgmtdHost = node01

The client service needs to know where it can mount the cluster storage, as well as the location of the client configuration file.

/etc/beegfs/beegfs-mounts.conf
/mnt/beegfs/beegfs-mount /etc/beegfs/beegfs-client.conf

Load the kernel module and its dependencies.

# modprobe beegfs

Start/enable the beegfs-helperd@node04.service on the client node.

Start/enable the beegfs-client.service on the client node.
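
If the module loaded and both services started cleanly, the file system should now be mounted at the path given in beegfs-mounts.conf; findmnt (from util-linux) gives a quick sanity check:

$ findmnt /mnt/beegfs/beegfs-mount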

Utilities

Install the package beegfs-utilsAUR.

Tip: It is best to install this on the management server or the client node, or both. For the purposes of this example, the client node is used.

Check connectivity

Check the detected network interfaces and transport protocols from a client node with the following commands:

# beegfs-ctl --listnodes --nodetype=mgmt --nicdetails 
  node01 [ID: 1]
    Ports: UDP: 8008; TCP: 8008
    Interfaces: 
    + enp0s31f6[ip addr: 192.168.0.1; type: TCP]
# beegfs-ctl --listnodes --nodetype=meta --nicdetails 
  node02 [ID: 2]
    Ports: UDP: 8005; TCP: 8005
    Interfaces: 
    + eno1[ip addr: 192.168.0.2; type: TCP]
# beegfs-ctl --listnodes --nodetype=storage --nicdetails 
  node03 [ID: 3]
    Ports: UDP: 8003; TCP: 8003
    Interfaces: 
    + eno1[ip addr: 192.168.0.3; type: TCP]
# beegfs-ctl --listnodes --nodetype=client --nicdetails 
  4E451-5DAEDCBF-node04 [ID: 4]
    Ports: UDP: 8004; TCP: 0
    Interfaces: 
    + wlo1[ip addr: 192.168.0.4; type: TCP]
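
Beyond connectivity, beegfs-utilsAUR also provides beegfs-df, which reports capacity and usage for the metadata and storage targets; running it from the client doubles as an end-to-end check:

$ beegfs-df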

Server tuning and advanced features

This article or section needs expansion.
Reason: Empty sections (Discuss in Talk:BeeGFS#)

InfiniBand Support

ACLs

Storage Pools

Quota Enforcement

High Availability

See also