A standard-type cluster is the most common deployment mode for Ceph storage. It distributes data replicas across hard drives on different hosts, so that if a single host fails, the replicas on other hosts keep the service available.
Download the Alauda Container Platform Storage Essentials installation package corresponding to your platform architecture.
Upload the Alauda Container Platform Storage Essentials installation package using the Upload Packages mechanism.
Download the Alauda Build of Rook-Ceph installation package corresponding to your platform architecture.
Upload the Alauda Build of Rook-Ceph installation package using the Upload Packages mechanism.
At least 3 nodes are required in the storage cluster.
Each node must have at least 1 blank hard disk or 1 unformatted hard disk partition available.
It is recommended that each available hard disk have a capacity greater than 50 GB.
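To confirm that a node meets the disk requirements above, you can inspect its block devices with standard Linux tools. This is a minimal sketch; /dev/sdb is a placeholder for your candidate disk:

```bash
# List block devices with size, filesystem and mount point.
# A disk suitable for Ceph shows an empty FSTYPE and MOUNTPOINT,
# and a SIZE larger than the recommended 50 GB.
lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT

# Confirm the candidate disk carries no filesystem or partition-table
# signatures (the command prints nothing for a blank disk).
wipefs /dev/sdb
```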
If you are using an attached Kubernetes cluster with Containerd as the runtime component, make sure the LimitNOFILE parameter in the /etc/systemd/system/containerd.service file is set to 1048576 on all nodes of the cluster; otherwise the distributed storage deployment may fail. For configuration instructions, refer to Modifying Containerd Configuration Information.
Note: When upgrading from versions earlier than v3.10.2 to the current version, if you need to deploy Ceph distributed storage on your custom Kubernetes cluster with Containerd as the runtime component, you must also set the LimitNOFILE parameter value in the /etc/systemd/system/containerd.service file to 1048576 on all nodes of the cluster.
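For reference, the commands below check and apply the LimitNOFILE setting described above on a single node. They are a sketch only and assume root access and a restartable containerd; follow Modifying Containerd Configuration Information for the supported procedure:

```bash
# Show the file-descriptor limit currently applied to containerd.
systemctl show containerd --property LimitNOFILE

# After editing /etc/systemd/system/containerd.service so that the
# [Service] section contains LimitNOFILE=1048576, reload systemd and
# restart containerd to apply the change.
systemctl daemon-reload
systemctl restart containerd
```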
Creating Storage Service and Accessing Storage Service are mutually exclusive; you can select only one of the two methods.
Log in and navigate to the Administrator page.
Click Marketplace > OperatorHub to enter the OperatorHub page.
Find the Alauda Container Platform Storage Essentials, click Install, and navigate to the Install Alauda Container Platform Storage Essentials page.
Configuration Parameters:
| Parameter | Recommended Configuration |
|---|---|
| Channel | The default channel is stable. |
| Installation Mode | Cluster: All namespaces in the cluster share a single Operator instance for creation and management, resulting in lower resource usage. |
| Installation Place | Select Recommended; only the acp-storage namespace is supported. |
| Upgrade Strategy | Manual: When there is a new version in the Operator Hub, manual confirmation is required to upgrade the Operator to the latest version. |
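Optionally, you can verify the installation from the command line. The sketch below assumes the platform exposes standard OLM resources (Subscription, ClusterServiceVersion) in the acp-storage namespace; resource kinds and names may differ in your environment:

```bash
# The Subscription and ClusterServiceVersion show the installed Operator
# and its version; the CSV phase should be Succeeded.
kubectl -n acp-storage get subscriptions,clusterserviceversions

# All Operator pods in the namespace should be Running.
kubectl -n acp-storage get pods
```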
When using the Selection Device method to add storage devices to your Ceph cluster, you must deploy the Alauda Build of LocalStorage Operator. This Operator automatically discovers all hard disk devices on every node in the Kubernetes cluster and collects detailed device information, which streamlines the storage integration process.
Log in and navigate to the Administrator page.
Click Marketplace > OperatorHub to enter the OperatorHub page.
Find the Alauda Build of LocalStorage, click Install, and navigate to the Install Alauda Build of LocalStorage page.
Configuration Parameters:
| Parameter | Recommended Configuration |
|---|---|
| Channel | The default channel is stable. |
| Installation Mode | Cluster: All namespaces in the cluster share a single Operator instance for creation and management, resulting in lower resource usage. |
| Installation Place | Select Recommended; only the acp-storage namespace is supported. |
| Upgrade Strategy | Manual: When there is a new version in the Operator Hub, manual confirmation is required to upgrade the Operator to the latest version. |
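As with the previous Operator, you can optionally check from the command line that the Alauda Build of LocalStorage Operator is running. The name filter below is only a guess at the pod naming convention; adjust it for your installation:

```bash
# The LocalStorage Operator pods should be Running in acp-storage
# (the grep pattern is an assumption, not the documented pod name).
kubectl -n acp-storage get pods | grep -i local
```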
Navigate to Administrator.
In the left sidebar, click Storage Management > Distributed Storage.
Click Configure Now.
In the Deploy Operator wizard page, click the Deploy Operator button at the bottom right.
When the page automatically advances to the next step, it indicates that the Operator has been deployed successfully.
If the deployment fails, follow the on-screen prompt Clean Up Deployed Information and Retry to redeploy the Operator. If you want to return to the distributed storage selection page instead, go to Application Store, first uninstall the resources in the already deployed rook-operator, and then uninstall rook-operator itself.
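To double-check this step outside the wizard, you can look for the Rook operator pod. The namespace and label below follow the upstream Rook defaults (rook-ceph, app=rook-ceph-operator) and are assumptions; your deployment may use different values:

```bash
# The Rook operator pod should be Running before the wizard advances.
kubectl -n rook-ceph get pods -l app=rook-ceph-operator
```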
In the Create Cluster wizard page, configure the relevant parameters and click the Create Cluster button at the bottom right.
| Parameter | Explanation |
|---|---|
| Cluster Type | Select Standard. |
| Device Class Type | Device classes are groupings of hard disks; you can customize device classes according to your storage needs, allocating different storage content to disks of varying performance. |
| Device Class - Name | The name of the device class. When selecting Custom Device Class, the device class cannot use the following names: hdd, ssd, nvme. |
| Device Class - Storage Devices | To add storage devices to a device class, choose either the Selection Device method (select from the disks automatically discovered by the Alauda Build of LocalStorage Operator) or the Input Device method. |
| Snapshot | When enabled, PVC snapshots can be created and used to provision new PVCs for quick backup and recovery of business data. If you did not enable snapshots when creating the storage, you can still enable them later from the Operations section on the storage cluster details page. Note: Ensure that volume snapshot plugins have been deployed in the current cluster before using snapshots (a command-line check is shown after this table). |
| Monitoring Alarm | When enabled, out-of-the-box monitoring metric collection and alerting capabilities are provided; see Monitoring and Alarming. Note: If not enabled at this time, you will need an alternative solution for storage monitoring and alarms, for example manually configuring monitoring dashboards and alert strategies in the operation and maintenance center. |
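As noted for the Snapshot parameter, volume snapshot plugins must already be deployed before snapshots can be used. A minimal check, assuming the standard Kubernetes external-snapshotter components, is:

```bash
# The snapshot CRDs are created when the volume snapshot plugin is installed.
kubectl get crd | grep snapshot.storage.k8s.io

# At least one VolumeSnapshotClass should exist for the CSI driver in use.
kubectl get volumesnapshotclasses
```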
Click Advanced Configuration for advanced component configuration.
| Parameter | Explanation |
|---|---|
| Network Configuration | |
| Optimization Parameters | Parameters can be entered in Ceph configuration file format; the system overrides the default parameters with the provided content (see the example after this table). Note: After entering or modifying the parameters for the first time, click to initialize them; initialization must succeed before the cluster can be created. |
| Component Fixed-point Deployment | Components can be deployed to specified nodes; at least three nodes are required to ensure minimum availability. The components eligible for fixed-point deployment are MON, MGR, MDS, and RGW. |
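The Optimization Parameters field expects content in Ceph configuration file (INI) format, as described above. The snippet below is purely illustrative, using well-known options with hypothetical values; consult the Ceph documentation before overriding any defaults:

```bash
# Example of the Ceph INI format accepted by the Optimization Parameters
# field; the options and values here are illustrative, not recommendations.
cat <<'EOF'
[global]
osd_pool_default_size = 3
osd_pool_default_min_size = 2
EOF
```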
When the page automatically advances to the next step, it indicates that the Ceph cluster has been deployed successfully.
If the creation fails, you can click Clean Up Created Information and Retry to automatically clean up the resources and recreate the cluster, or manually clean up the resources according to the documentation Distributed Storage Service Resource Cleanup.
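After the cluster is created, you can also verify it from the command line by inspecting the CephCluster resource that the wizard manages. The rook-ceph namespace is the upstream Rook default and is an assumption here; the second command additionally requires the Rook toolbox to be deployed:

```bash
# PHASE should be Ready and HEALTH should be HEALTH_OK (HEALTH_WARN is
# common while data is still being rebalanced).
kubectl -n rook-ceph get cephcluster

# If the Rook toolbox deployment is available, query Ceph directly.
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph status
```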
In the Create Storage Pool wizard page, configure the relevant parameters and click the Create Storage Pool button at the bottom right.
| Parameter | Explanation |
|---|---|
| Storage Type | The type of storage pool to create: block storage, file storage, or object storage. |
| Replica Count | The more replicas, the higher the redundancy and data security, but the lower the usable storage capacity. A value of 3 meets most needs (a reference pool definition is sketched after this table). |
| Device Class | Select from the device classes added in the previous step; a device class uniformly groups disks of the same type or the same business purpose. |
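For reference, in upstream Rook terms a block storage pool with a replica count of 3 on a given device class corresponds roughly to the CephBlockPool sketched below. The wizard creates the equivalent resources for you, so this is shown for illustration only and should not be applied alongside a wizard-managed cluster; the name and namespace are assumptions:

```bash
# Illustration only: the shape of a replicated pool (3 replicas, "hdd"
# device class) expressed as an upstream Rook CephBlockPool resource.
cat <<'EOF'
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: example-pool        # hypothetical name
  namespace: rook-ceph      # assumed Rook namespace
spec:
  deviceClass: hdd
  replicated:
    size: 3
EOF
```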
If it is object storage, you also need to configure the following parameters:
| Parameter | Explanation |
|---|---|
| Region | Specify the region where the storage pool is located. |
| Gateway Type | Default is S3 and cannot be modified. |
| Internal Port | Specify the port for internal access in the cluster. |
| External Access | Enabling or disabling external access creates or deletes a NodePort-type Service (see the check after this table). |
| Instance Count | The number of resource instances for object storage. |
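When External Access is enabled, a NodePort Service is created for the object storage gateway. The command below assumes the upstream Rook naming convention (Services prefixed with rook-ceph-rgw- in the rook-ceph namespace); adjust for your environment:

```bash
# List the RGW gateway Services; the NodePort Service appears only when
# External Access is enabled.
kubectl -n rook-ceph get svc | grep rgw
```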
When the page automatically advances to the next step, it indicates that the storage pool has been deployed successfully.
If the deployment fails, check the core components according to the interface prompts, and then click Clean Up Created Information and Retry to recreate the storage pool.
Click Create Storage Pool. In the Details tab, you can view information about the created storage pool.
For details, please refer to Create Stretch Type Cluster.
For details, please refer to Cleanup Distributed Storage.