Airshipctl Integration with Rook for Deployment of Ceph Cluster #30
Comments
This was moved from Airshipctl per the 8/26 Flight Plan call. Implementation has been done as a helm-release downstream in the lab. If access to the downstream implementation is required, please reach out in the Airshipit Slack/IRC.
Please assign this issue to me. Thanks.
@vs422h all yours!
Please ensure that this is included in the airship-core type deployment.
This change adds functions for deploying the following chart:
- rook-release/rook-ceph (version 1.5.8)

Relates-To: #30
Change-Id: If1b089474b679823c23e38e531b7ed6d965cd756
@vs422h - Vladimir, is there an update? When do we think we can close this out?
In addition to delivering the Rook-Ceph operator, please enhance the gating to integrate the operator into the treasuremap airship-core type.
* Type catalog should contain only core services related to the deployment of the Ceph cluster (monitors, OSDs, mgrs, etc.)
* Manifests to create pools, dashboards, and CephFS are moved to the function catalog
* Code related to the OpenStack deployment is removed
* Dashboard is disabled by default; ingress controller is removed
* Rook operator version is upgraded to 1.5.9 to prevent incompatibility with pool quota settings
* Fixed a minor bug in the site-level catalogue storage definition and in the replacement function
* Added cleanup manifest for StorageCatalogue
* Added airshipctl phase to deploy rook-operator
* Implementation of the rook-ceph operator has been changed
* Added the configuration for the CSI driver images
* Added overrides for ceph.conf
* Added configuration for rook-operator and Ceph images
* Merge conflict resolution
* Code standardization
* Rename rook-ceph-crds -> rook-operator

Relates-to: [WIP] Expects to deliver Rook/Ceph via 2 phases
Relates-to: #30
Change-Id: I7ec7f756e742db1595143c2dfc6751b16fb25efb
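For context, an airshipctl phase such as the `rook-operator` phase mentioned above is declared as a Phase document that points the executor at a kustomize entrypoint. The sketch below is illustrative only: the `documentEntryPoint` path and executor name are assumptions, not taken from the merged treasuremap change.

```yaml
# Hypothetical sketch of an airshipctl Phase for deploying the Rook operator.
# The entrypoint path and executor name are illustrative assumptions.
apiVersion: airshipit.org/v1alpha1
kind: Phase
metadata:
  name: rook-operator
config:
  executorRef:
    apiVersion: airshipit.org/v1alpha1
    kind: KubernetesApply
    name: kubernetes-apply
  # Kustomize entrypoint containing the rook-operator function manifests
  documentEntryPoint: manifests/function/rook-operator
```

A phase defined this way would be run with `airshipctl phase run rook-operator`, which renders the entrypoint and applies the resulting manifests to the cluster.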
The Airship integration with Rook for deployment of the Ceph cluster has been implemented, tested upstream and downstream, and merged into the master branch.
Closing per above.
Problem description (if applicable)
Airshipctl currently integrates with Helm charts to deploy a Ceph cluster. This implies significant ongoing development effort to upgrade and maintain any Ceph cluster, and the current structure also makes scaling the Ceph cluster harder.
Proposed change
As we move towards a more cloud native way of implementing the various components, we need to consider that for the storage layer as well. Rook is an open source cloud native storage orchestrator for Kubernetes that provides a platform for storage solutions, primarily Ceph. Rook aids in the deployment and lifecycle management of Ceph, and supports self-managing, self-scaling, and self-healing storage services via the Rook operator. Airship 2.0 can use Rook to implement the required Ceph cluster, enabling a cloud native implementation and aiding in upgrades of the Ceph cluster.
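With the operator pattern described above, the Ceph cluster itself becomes a declarative Kubernetes resource that the Rook operator reconciles. The following is a minimal sketch of a `CephCluster` custom resource; the metadata names, Ceph image tag, and storage selections are illustrative assumptions, not the values used in treasuremap.

```yaml
# Minimal illustrative CephCluster resource consumed by the Rook operator.
# Names, image tag, and storage settings are assumptions for the sketch.
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: ceph-cluster
  namespace: rook-ceph
spec:
  cephVersion:
    image: ceph/ceph:v15.2.8   # hypothetical Ceph release
  dataDirHostPath: /var/lib/rook
  mon:
    count: 3                   # three monitors for quorum
    allowMultiplePerNode: false
  mgr:
    count: 1
  storage:
    useAllNodes: true          # let Rook discover nodes and devices
    useAllDevices: true
```

Scaling or upgrading the cluster then reduces to editing this resource (e.g. the `cephVersion.image` or monitor count) and letting the operator reconcile, rather than re-running Helm-based tooling.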
Potential impacts
There may be issues when consuming an existing Ceph cluster that was deployed via Helm and converting it into a Rook deployment.