For more details about the Proxmox VE repositories, see Package repositories. Follow the upgrade guide exactly: Upgrade from 5. Make sure that you have uploaded a valid subscription key to your Proxmox VE host.
Here is the howto for the CLI. If you get any errors, check that your repository sources are configured correctly.

Perform the following steps on all bare-metal and container nodes in the storage cluster, unless otherwise noted. For bare-metal Red Hat Ceph Storage nodes that cannot access the Internet during the installation, provide the software content by using the Red Hat Satellite server. For additional details, contact Red Hat Support.
You must follow these steps first on a node with Internet access. Verify that the Red Hat container registry (registry.redhat.io) is in the container registry search path; if it is not, add it. This step must be done on all storage cluster nodes that access the local Docker registry.
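As a sketch of this check, assuming a RHEL 7 host where the registry search list lives in /etc/containers/registries.conf (the file path and layout are typical for that release, not taken from this document):

```shell
# Confirm the Red Hat registry appears in the search list; the path and
# TOML layout below are assumptions based on a common RHEL 7 setup:
#
#   /etc/containers/registries.conf
#   [registries.search]
#   registries = ['registry.redhat.io', 'docker.io']
#
# After editing the file, restart the docker service so the new search
# path takes effect:
sudo systemctl restart docker
```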
For Red Hat Enterprise Linux 7, restart the docker service. For all deployments, bare-metal or in containers:
Register the node and, when prompted, enter the appropriate Red Hat Customer Portal credentials. Disable the default software repositories, and enable the server and the extras repositories for the respective version of Red Hat Enterprise Linux. Before you can install Red Hat Ceph Storage, you must choose an installation method.
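A minimal sketch of these registration steps with subscription-manager (the repository IDs shown are the usual RHEL 7 server and extras IDs; confirm them for your release):

```shell
# Register the node; subscription-manager prompts for Customer Portal
# credentials when none are supplied on the command line:
sudo subscription-manager register
# Disable all default software repositories:
sudo subscription-manager repos --disable='*'
# Enable the server and extras repositories (IDs are typical for RHEL 7
# and are an assumption here, not verified against this document):
sudo subscription-manager repos \
    --enable=rhel-7-server-rpms \
    --enable=rhel-7-server-extras-rpms
```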
Red Hat Ceph Storage supports two installation methods. For Ceph Storage clusters with Ceph nodes that can connect directly to the Internet, use Red Hat Subscription Manager to enable the required Ceph repository.
For Ceph Storage clusters where security measures preclude nodes from accessing the Internet, install Red Hat Ceph Storage 4 from a single software build delivered as an ISO image, which allows you to set up local repositories.
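For the first method, a Ceph repository is enabled per node with Red Hat Subscription Manager; the repository ID below is illustrative for Red Hat Ceph Storage 4 on RHEL 7 and must be checked against `subscription-manager repos --list` on your system:

```shell
# Enable a Ceph repository (the ID is an assumption; verify the exact
# repository IDs available to your subscription before running this):
sudo subscription-manager repos --enable=rhel-7-server-rhceph-4-tools-rpms
```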
By default, Red Hat Ceph Storage repositories are enabled by ceph-ansible on the respective nodes, but they can also be enabled manually. All Red Hat Ceph Storage nodes require a public network. You must have a network interface card configured to a public network where Ceph clients can reach Ceph Monitors and Ceph OSD nodes. You might have a second network interface card for a cluster network, so that Ceph can conduct heartbeating, peering, replication, and recovery on a network separate from the public network.
Red Hat does not recommend using a single network interface card for both a public and a private network. Perform the following steps on all Red Hat Ceph Storage nodes in the storage cluster, as the root user: ensure that the ONBOOT parameter in each network interface configuration file is set to yes. If it is set to no, the Ceph storage cluster might fail to peer on reboot.
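To illustrate the setting with a scratch file (real interface files live under /etc/sysconfig/network-scripts/; the eth0 name and the scratch path are examples, not from the document):

```shell
# Write a demo interface config; ONBOOT=yes makes the interface come up
# automatically at boot, which the cluster's peering depends on:
printf 'DEVICE=eth0\nBOOTPROTO=static\nONBOOT=yes\n' > /tmp/ifcfg-eth0.demo
# Check the value:
grep '^ONBOOT' /tmp/ifcfg-eth0.demo   # prints: ONBOOT=yes
```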
The Monitor daemons use ports 3300 and 6789 for communication within the Ceph storage cluster. The Ceph Manager (ceph-mgr) daemons use ports in the range 6800-7300; consider colocating the ceph-mgr daemons with Ceph Monitors on the same nodes. The Ceph Metadata Server nodes (ceph-mds) use the same 6800-7300 port range. Prerequisites: a valid customer subscription, login access to the Red Hat Customer Portal, and a network interface card connected to the network.
Configuring a firewall for Red Hat Ceph Storage. Each Ceph OSD daemon requires three ports: one for communicating with clients and monitors over the public network; one for sending data to other OSDs over a cluster network, if available, otherwise over the public network; and one for exchanging heartbeat packets over a cluster network, if available, otherwise over the public network.
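These ports can be opened with firewall-cmd. The sketch below only prints the commands rather than executing them, assuming the standard Ceph ports (3300 and 6789 for monitors, 6800-7300 for the OSD, mgr, and mds daemons); review and run them as root:

```shell
# Print (not execute) the firewall-cmd invocations for the standard Ceph
# ports; the public zone is an assumption for a default firewalld setup:
for port in 3300/tcp 6789/tcp 6800-7300/tcp; do
    echo firewall-cmd --zone=public --add-port="$port" --permanent
done
# A final `firewall-cmd --reload` applies the permanent rules.
```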
Prerequisite: the network hardware is connected. Procedure: run the following commands as the root user.

Creating an Ansible user with sudo access. Prerequisite: root or sudo access to all nodes in the storage cluster. Log in to a node as root, for example: ssh root@mon. Create the Ansible user, for example: adduser admin. Set a password for the user, for example: passwd admin. Next, verify the operating system version.
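Following the examples in the text (the admin user name comes from the document; the sudoers file name is a common convention and an assumption here):

```shell
# Create the Ansible user and set its password (run as root on each node):
adduser admin
passwd admin
# Grant the user password-less sudo by dropping a file into /etc/sudoers.d/
# (the file name is a convention; any name under sudoers.d works):
cat > /etc/sudoers.d/admin <<'EOF'
admin ALL = (root) NOPASSWD:ALL
EOF
chmod 0440 /etc/sudoers.d/admin
```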
The remaining preparation tasks are: registering Ceph nodes, enabling Ceph software repositories, configuring the network, configuring a firewall (a firewall can increase the level of trust for a network), creating an Ansible user (required on all Ceph nodes), and enabling password-less SSH.
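Password-less SSH for the Ansible user is set up with a key pair; the key path below is a scratch location for illustration, and mon is the example host name used earlier in the text:

```shell
# Generate a key pair without a passphrase (scratch path for the demo;
# in practice the default ~/.ssh location is used):
ssh-keygen -t ed25519 -N '' -f /tmp/ceph_ansible_demo_key -q
# Copy the public key to each node so Ansible can log in without a
# password (commented out: requires a reachable host):
# ssh-copy-id -i /tmp/ceph_ansible_demo_key.pub admin@mon
ls /tmp/ceph_ansible_demo_key.pub
```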