ci: fix mdl related failures
This commit addresses issue #3448.

Signed-off-by: riya-singhal31 <rsinghal@redhat.com>
riya-singhal31 authored and mergify[bot] committed Nov 17, 2022
1 parent d721ed6 commit 5396863
Showing 24 changed files with 166 additions and 170 deletions.
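The failures addressed here come from the `mdl` Markdown linter run in CI (the hunks below are mostly list-indentation and line-wrapping fixes). A minimal sketch of reproducing such failures locally, assuming the `mdl` Ruby gem is installed; the exact style configuration used by the CI job may differ:

```bash
# Install the markdownlint CLI (Ruby gem) and lint the affected files.
gem install mdl
mdl README.md docs/
```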
24 changes: 12 additions & 12 deletions README.md
@@ -8,18 +8,18 @@ Card](https://goreportcard.com/badge/github.com/ceph/ceph-csi)](https://goreport
[![CII Best Practices](https://bestpractices.coreinfrastructure.org/projects/5940/badge)](https://bestpractices.coreinfrastructure.org/projects/5940)

- [Ceph CSI](#ceph-csi)
- [Overview](#overview)
- [Project status](#project-status)
- [Known to work CO platforms](#known-to-work-co-platforms)
- [Support Matrix](#support-matrix)
- [Ceph-CSI features and available versions](#ceph-csi-features-and-available-versions)
- [CSI spec and Kubernetes version compatibility](#csi-spec-and-kubernetes-version-compatibility)
- [Ceph CSI Container images and release compatibility](#ceph-csi-container-images-and-release-compatibility)
- [Contributing to this repo](#contributing-to-this-repo)
- [Troubleshooting](#troubleshooting)
- [Weekly Bug Triage call](#weekly-bug-triage-call)
- [Dev standup](#dev-standup)
- [Contact](#contact)
- [Overview](#overview)
- [Project status](#project-status)
- [Known to work CO platforms](#known-to-work-co-platforms)
- [Support Matrix](#support-matrix)
- [Ceph-CSI features and available versions](#ceph-csi-features-and-available-versions)
- [CSI spec and Kubernetes version compatibility](#csi-spec-and-kubernetes-version-compatibility)
- [Ceph CSI Container images and release compatibility](#ceph-csi-container-images-and-release-compatibility)
- [Contributing to this repo](#contributing-to-this-repo)
- [Troubleshooting](#troubleshooting)
- [Weekly Bug Triage call](#weekly-bug-triage-call)
- [Dev standup](#dev-standup)
- [Contact](#contact)

This repo contains the Ceph
[Container Storage Interface (CSI)](https://github.com/container-storage-interface/)
70 changes: 35 additions & 35 deletions docs/ceph-csi-upgrade.md
@@ -1,39 +1,39 @@
# Ceph-csi Upgrade

- [Ceph-csi Upgrade](#ceph-csi-upgrade)
- [Pre-upgrade considerations](#pre-upgrade-considerations)
- [Snapshot-controller and snapshot crd](#snapshot-controller-and-snapshot-crd)
- [Snapshot API version support matrix](#snapshot-api-version-support-matrix)
- [Upgrading from v3.2 to v3.3](#upgrading-from-v32-to-v33)
- [Upgrading from v3.3 to v3.4](#upgrading-from-v33-to-v34)
- [Upgrading from v3.4 to v3.5](#upgrading-from-v34-to-v35)
- [Upgrading from v3.5 to v3.6](#upgrading-from-v35-to-v36)
- [Upgrading from v3.6 to v3.7](#upgrading-from-v36-to-v37)
- [Upgrading CephFS](#upgrading-cephfs)
- [1. Upgrade CephFS Provisioner resources](#1-upgrade-cephfs-provisioner-resources)
- [1.1 Update the CephFS Provisioner RBAC](#11-update-the-cephfs-provisioner-rbac)
- [1.2 Update the CephFS Provisioner deployment](#12-update-the-cephfs-provisioner-deployment)
- [2. Upgrade CephFS Nodeplugin resources](#2-upgrade-cephfs-nodeplugin-resources)
- [2.1 Update the CephFS Nodeplugin RBAC](#21-update-the-cephfs-nodeplugin-rbac)
- [2.2 Update the CephFS Nodeplugin daemonset](#22-update-the-cephfs-nodeplugin-daemonset)
- [2.3 Manual deletion of CephFS Nodeplugin daemonset pods](#23-manual-deletion-of-cephfs-nodeplugin-daemonset-pods)
- [Delete removed CephFS PSP, Role and RoleBinding](#delete-removed-cephfs-psp-role-and-rolebinding)
- [Upgrading RBD](#upgrading-rbd)
- [3. Upgrade RBD Provisioner resources](#3-upgrade-rbd-provisioner-resources)
- [3.1 Update the RBD Provisioner RBAC](#31-update-the-rbd-provisioner-rbac)
- [3.2 Update the RBD Provisioner deployment](#32-update-the-rbd-provisioner-deployment)
- [4. Upgrade RBD Nodeplugin resources](#4-upgrade-rbd-nodeplugin-resources)
- [4.1 Update the RBD Nodeplugin RBAC](#41-update-the-rbd-nodeplugin-rbac)
- [4.2 Update the RBD Nodeplugin daemonset](#42-update-the-rbd-nodeplugin-daemonset)
- [Delete removed RBD PSP, Role and RoleBinding](#delete-removed-rbd-psp-role-and-rolebinding)
- [Upgrading NFS](#upgrading-nfs)
- [5. Upgrade NFS Provisioner resources](#5-upgrade-nfs-provisioner-resources)
- [5.1 Update the NFS Provisioner RBAC](#51-update-the-nfs-provisioner-rbac)
- [5.2 Update the NFS Provisioner deployment](#52-update-the-nfs-provisioner-deployment)
- [6. Upgrade NFS Nodeplugin resources](#6-upgrade-nfs-nodeplugin-resources)
- [6.1 Update the NFS Nodeplugin RBAC](#61-update-the-nfs-nodeplugin-rbac)
- [6.2 Update the NFS Nodeplugin daemonset](#62-update-the-nfs-nodeplugin-daemonset)
- [CSI Sidecar containers consideration](#csi-sidecar-containers-consideration)
- [Pre-upgrade considerations](#pre-upgrade-considerations)
- [Snapshot-controller and snapshot crd](#snapshot-controller-and-snapshot-crd)
- [Snapshot API version support matrix](#snapshot-api-version-support-matrix)
- [Upgrading from v3.2 to v3.3](#upgrading-from-v32-to-v33)
- [Upgrading from v3.3 to v3.4](#upgrading-from-v33-to-v34)
- [Upgrading from v3.4 to v3.5](#upgrading-from-v34-to-v35)
- [Upgrading from v3.5 to v3.6](#upgrading-from-v35-to-v36)
- [Upgrading from v3.6 to v3.7](#upgrading-from-v36-to-v37)
- [Upgrading CephFS](#upgrading-cephfs)
- [1. Upgrade CephFS Provisioner resources](#1-upgrade-cephfs-provisioner-resources)
- [1.1 Update the CephFS Provisioner RBAC](#11-update-the-cephfs-provisioner-rbac)
- [1.2 Update the CephFS Provisioner deployment](#12-update-the-cephfs-provisioner-deployment)
- [2. Upgrade CephFS Nodeplugin resources](#2-upgrade-cephfs-nodeplugin-resources)
- [2.1 Update the CephFS Nodeplugin RBAC](#21-update-the-cephfs-nodeplugin-rbac)
- [2.2 Update the CephFS Nodeplugin daemonset](#22-update-the-cephfs-nodeplugin-daemonset)
- [2.3 Manual deletion of CephFS Nodeplugin daemonset pods](#23-manual-deletion-of-cephfs-nodeplugin-daemonset-pods)
- [Delete removed CephFS PSP, Role and RoleBinding](#delete-removed-cephfs-psp-role-and-rolebinding)
- [Upgrading RBD](#upgrading-rbd)
- [3. Upgrade RBD Provisioner resources](#3-upgrade-rbd-provisioner-resources)
- [3.1 Update the RBD Provisioner RBAC](#31-update-the-rbd-provisioner-rbac)
- [3.2 Update the RBD Provisioner deployment](#32-update-the-rbd-provisioner-deployment)
- [4. Upgrade RBD Nodeplugin resources](#4-upgrade-rbd-nodeplugin-resources)
- [4.1 Update the RBD Nodeplugin RBAC](#41-update-the-rbd-nodeplugin-rbac)
- [4.2 Update the RBD Nodeplugin daemonset](#42-update-the-rbd-nodeplugin-daemonset)
- [Delete removed RBD PSP, Role and RoleBinding](#delete-removed-rbd-psp-role-and-rolebinding)
- [Upgrading NFS](#upgrading-nfs)
- [5. Upgrade NFS Provisioner resources](#5-upgrade-nfs-provisioner-resources)
- [5.1 Update the NFS Provisioner RBAC](#51-update-the-nfs-provisioner-rbac)
- [5.2 Update the NFS Provisioner deployment](#52-update-the-nfs-provisioner-deployment)
- [6. Upgrade NFS Nodeplugin resources](#6-upgrade-nfs-nodeplugin-resources)
- [6.1 Update the NFS Nodeplugin RBAC](#61-update-the-nfs-nodeplugin-rbac)
- [6.2 Update the NFS Nodeplugin daemonset](#62-update-the-nfs-nodeplugin-daemonset)
- [CSI Sidecar containers consideration](#csi-sidecar-containers-consideration)

## Pre-upgrade considerations

@@ -226,10 +226,10 @@ For each node:

- Drain your application pods from the node
- Delete the CSI driver pods on the node
- The pods to delete will be named with a csi-cephfsplugin prefix and have a
- The pods to delete will be named with a csi-cephfsplugin prefix and have a
random suffix on each node. However, no need to delete the provisioner
pods: csi-cephfsplugin-provisioner-* .
- The pod deletion causes the pods to be restarted and updated automatically
- The pod deletion causes the pods to be restarted and updated automatically
on the node.
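A hedged sketch of these per-node steps, assuming a hypothetical node name `node-1`, a `ceph-csi` namespace, and an `app=csi-cephfsplugin` pod label (adjust all three to your deployment):

```bash
# Drain application pods from the node.
kubectl drain node-1 --ignore-daemonsets --delete-emptydir-data

# Delete only the nodeplugin pods on that node; provisioner pods are left alone.
kubectl -n ceph-csi delete pod -l app=csi-cephfsplugin \
  --field-selector spec.nodeName=node-1
```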

#### Delete removed CephFS PSP, Role and RoleBinding
11 changes: 7 additions & 4 deletions docs/ceph-mount-corruption.md
@@ -77,13 +77,16 @@ following errors:

More details about the error codes can be found [here](https://www.gnu.org/software/libc/manual/html_node/Error-Codes.html)

For such mounts, The CephCSI nodeplugin returns volume_condition as abnormal for `NodeGetVolumeStats` RPC call.
For such mounts, The CephCSI nodeplugin returns volume_condition as
abnormal for `NodeGetVolumeStats` RPC call.

### kernel client recovery

Once a mountpoint corruption is detected, Below are the two methods to recover from it.
Once a mountpoint corruption is detected,
Below are the two methods to recover from it.

* Reboot the node where the abnormal volume behavior is observed.
* Scale down all the applications using the CephFS PVC on the node where abnormal mounts
are present. Once all the applications are deleted, scale up the application
* Scale down all the applications using the CephFS PVC
on the node where abnormal mounts are present.
Once all the applications are deleted, scale up the application
to remount the CephFS PVC to application pods.
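As an illustration of the second method, with a hypothetical deployment `my-app` that mounts the affected CephFS PVC:

```bash
# Scale the application down so the corrupted mount is released...
kubectl scale deployment my-app --replicas=0
# ...then scale it back up so the PVC is remounted on the node.
kubectl scale deployment my-app --replicas=1
```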
8 changes: 4 additions & 4 deletions docs/cephfs-snapshot-backed-volumes.md
@@ -21,12 +21,12 @@ For provisioning new snapshot-backed volumes, following configuration must be
set for storage class(es) and their PVCs respectively:

* StorageClass:
* Specify `backingSnapshot: "true"` parameter.
* Specify `backingSnapshot: "true"` parameter.
* PersistentVolumeClaim:
* Set `storageClassName` to point to your storage class with backing
* Set `storageClassName` to point to your storage class with backing
snapshots enabled.
* Define `spec.dataSource` for your desired source volume snapshot.
* Set `spec.accessModes` to `ReadOnlyMany`. This is the only access mode that
* Define `spec.dataSource` for your desired source volume snapshot.
* Set `spec.accessModes` to `ReadOnlyMany`. This is the only access mode that
is supported by this feature.
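A minimal sketch of such a PVC, assuming a hypothetical storage class `csi-cephfs-snapshot-backed` (with `backingSnapshot: "true"` set) and a hypothetical VolumeSnapshot `cephfs-pvc-snap`:

```bash
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: snapshot-backed-pvc
spec:
  storageClassName: csi-cephfs-snapshot-backed
  accessModes:
    - ReadOnlyMany            # the only access mode supported by this feature
  resources:
    requests:
      storage: 1Gi
  dataSource:
    apiGroup: snapshot.storage.k8s.io
    kind: VolumeSnapshot
    name: cephfs-pvc-snap
EOF
```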

### Mounting snapshots from pre-provisioned volumes
6 changes: 3 additions & 3 deletions docs/deploy-rbd.md
@@ -220,9 +220,9 @@ possible to encrypt them with ceph-csi by using LUKS encryption.
* volume is attached to provisioner container
* on first time attachment
(no file system on the attached device, checked with blkid)
* passphrase is retrieved from selected KMS if KMS is in use
* device is encrypted with LUKS using a passphrase from K8s Secret or KMS
* image-meta updated to "encrypted" in Ceph
* passphrase is retrieved from selected KMS if KMS is in use
* device is encrypted with LUKS using a passphrase from K8s Secret or KMS
* image-meta updated to "encrypted" in Ceph
* passphrase is retrieved from selected KMS if KMS is in use
* device is open and device path is changed to use a mapper device
* mapper device is used instead of original one with usual workflow
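Conceptually, the node-side steps map onto the following cryptsetup operations; the device path, key file, and mapper name below are illustrative, not the exact calls ceph-csi makes:

```bash
# First-time attachment: blkid prints nothing because the device has no filesystem yet.
blkid /dev/rbd0 || true

# Encrypt the device with LUKS using the retrieved passphrase...
cryptsetup -q luksFormat /dev/rbd0 passphrase.key

# ...then open it; the workflow continues on /dev/mapper/luks-rbd0 instead of /dev/rbd0.
cryptsetup luksOpen --key-file passphrase.key /dev/rbd0 luks-rbd0
```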
4 changes: 2 additions & 2 deletions docs/design/proposals/cephfs-fscrypt.md
@@ -19,8 +19,8 @@ Work is in progress to add fscrypt support to CephFS for filesystem-level encryp

- [FSCrypt Kernel Documentation](https://www.kernel.org/doc/html/latest/filesystems/fscrypt.html)
- Management Tools
- [`fscrypt`](https://github.com/google/fscrypt)
- [`fscryptctl`](https://github.com/google/fscryptctl)
- [`fscrypt`](https://github.com/google/fscrypt)
- [`fscryptctl`](https://github.com/google/fscryptctl)
- [Ceph Feature Tracker: "Add fscrypt support to the kernel CephFS client"](https://tracker.ceph.com/issues/46690)
- [`fscrypt` design document](https://goo.gl/55cCrI)

6 changes: 3 additions & 3 deletions docs/design/proposals/clusterid-mapping.md
@@ -79,13 +79,13 @@ volume is present in the pool.
## Problems with volumeID Replication

* The clusterID can be different
* as the clusterID is the namespace where rook is deployed, the Rook might
* as the clusterID is the namespace where rook is deployed, the Rook might
be deployed in the different namespace on a secondary cluster
* In standalone Ceph-CSI the clusterID is fsID and fsID is unique per
* In standalone Ceph-CSI the clusterID is fsID and fsID is unique per
cluster

* The poolID can be different
* PoolID which is encoded in the volumeID won't remain the same across
* PoolID which is encoded in the volumeID won't remain the same across
clusters

To solve this problem we need to have a new mapping between clusterID's and the
8 changes: 4 additions & 4 deletions docs/design/proposals/encrypted-pvc.md
@@ -33,10 +33,10 @@ requirement by using dm-crypt module through cryptsetup cli interface.
[here](https://wiki.archlinux.org/index.php/Dm-crypt/Device_encryption#Encrypting_devices_with_cryptsetup)
Functions to implement necessary interaction are implemented in a separate
`cryptsetup.go` file.
* LuksFormat
* LuksOpen
* LuksClose
* LuksStatus
* LuksFormat
* LuksOpen
* LuksClose
* LuksStatus

* `CreateVolume`: refactored to prepare for encryption (tag image that it
requires encryption later), before returning, if encrypted volume option is
6 changes: 3 additions & 3 deletions docs/design/proposals/encryption-with-vault-tokens.md
@@ -54,7 +54,7 @@ Encryption Key (DEK) for PVC encryption:

- when creating the PVC the Ceph-CSI provisioner needs to store the Kubernetes
Namespace of the PVC in its metadata
- stores the `csi.volume.owner` (name of Tenant) in the metadata of the
- stores the `csi.volume.owner` (name of Tenant) in the metadata of the
volume and sets it as `rbdVolume.Owner`
- the Ceph-CSI node-plugin needs to request the Vault Token in the NodeStage
CSI operation and create/get the key for the PVC
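For illustration, a tenant could supply its Vault token to Ceph-CSI through a Kubernetes Secret in its own namespace; the secret name `ceph-csi-kms-token` and namespace `tenant-a` below are assumptions:

```bash
kubectl create secret generic ceph-csi-kms-token \
  --from-literal=token="$VAULT_TOKEN" \
  --namespace tenant-a
```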
@@ -87,8 +87,8 @@ Kubernetes and other Container Orchestration frameworks is tracked in
- configuration of the VaultTokenKMS can be very similar to VaultKMS for common
settings
- the configuration can override the defaults for each Tenant separately
- Vault Service connection details (address, TLS options, ...)
- name of the Kubernetes Secret that can be looked up per tenant
- Vault Service connection details (address, TLS options, ...)
- name of the Kubernetes Secret that can be looked up per tenant
- the configuration points to a Kubernetes Secret per Tenant that contains the
Vault Token
- the configuration points to an optional Kubernetes ConfigMap per Tenant that
2 changes: 1 addition & 1 deletion docs/design/proposals/intree-migrate.md
@@ -126,4 +126,4 @@ at [CephFS in-tree migration KEP](https://github.com/kubernetes/enhancements/iss

[Tracker Issue in Ceph CSI](https://github.com/ceph/ceph-csi/issues/2509)

[In-tree storage plugin to CSI Driver Migration KEP](https://github.com/kubernetes/enhancements/issues/625)
[In-tree storage plugin to CSI Driver Migration KEP](https://github.com/kubernetes/enhancements/issues/625)
30 changes: 15 additions & 15 deletions docs/design/proposals/rbd-snap-clone.md
@@ -1,21 +1,21 @@
# Steps and RBD CLI commands for RBD snapshot and clone operations

- [Steps and RBD CLI commands for RBD snapshot and clone operations](#steps-and-rbd-cli-commands-for-rbd-snapshot-and-clone-operations)
- [Create a snapshot from PVC](#create-a-snapshot-from-pvc)
- [steps to create a snapshot](#steps-to-create-a-snapshot)
- [RBD CLI commands to create snapshot](#rbd-cli-commands-to-create-snapshot)
- [Create PVC from a snapshot (datasource snapshot)](#create-pvc-from-a-snapshot-datasource-snapshot)
- [steps to create a pvc from snapshot](#steps-to-create-a-pvc-from-snapshot)
- [RBD CLI commands to create clone from snapshot](#rbd-cli-commands-to-create-clone-from-snapshot)
- [Delete a snapshot](#delete-a-snapshot)
- [steps to delete a snapshot](#steps-to-delete-a-snapshot)
- [RBD CLI commands to delete a snapshot](#rbd-cli-commands-to-delete-a-snapshot)
- [Delete a Volume (PVC)](#delete-a-volume-pvc)
- [steps to delete a volume](#steps-to-delete-a-volume)
- [RBD CLI commands to delete a volume](#rbd-cli-commands-to-delete-a-volume)
- [Volume cloning (datasource pvc)](#volume-cloning-datasource-pvc)
- [steps to create a Volume from Volume](#steps-to-create-a-volume-from-volume)
- [RBD CLI commands to create a Volume from Volume](#rbd-cli-commands-to-create-a-volume-from-volume)
- [Create a snapshot from PVC](#create-a-snapshot-from-pvc)
- [steps to create a snapshot](#steps-to-create-a-snapshot)
- [RBD CLI commands to create snapshot](#rbd-cli-commands-to-create-snapshot)
- [Create PVC from a snapshot (datasource snapshot)](#create-pvc-from-a-snapshot-datasource-snapshot)
- [steps to create a pvc from snapshot](#steps-to-create-a-pvc-from-snapshot)
- [RBD CLI commands to create clone from snapshot](#rbd-cli-commands-to-create-clone-from-snapshot)
- [Delete a snapshot](#delete-a-snapshot)
- [steps to delete a snapshot](#steps-to-delete-a-snapshot)
- [RBD CLI commands to delete a snapshot](#rbd-cli-commands-to-delete-a-snapshot)
- [Delete a Volume (PVC)](#delete-a-volume-pvc)
- [steps to delete a volume](#steps-to-delete-a-volume)
- [RBD CLI commands to delete a volume](#rbd-cli-commands-to-delete-a-volume)
- [Volume cloning (datasource pvc)](#volume-cloning-datasource-pvc)
- [steps to create a Volume from Volume](#steps-to-create-a-volume-from-volume)
- [RBD CLI commands to create a Volume from Volume](#rbd-cli-commands-to-create-a-volume-from-volume)

This document outlines the command used to create RBD snapshot, delete RBD
snapshot, Restore RBD snapshot and Create new RBD image from existing RBD image.
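For orientation, a hedged sketch of the plain RBD CLI equivalents with illustrative pool and image names (the document below covers the exact sequences):

```bash
# Snapshot an image, protect it, and clone a new image from the snapshot.
rbd snap create replicapool/csi-vol-a@snap-1
rbd snap protect replicapool/csi-vol-a@snap-1
rbd clone replicapool/csi-vol-a@snap-1 replicapool/csi-vol-b

# Flatten the clone so the protected snapshot can later be unprotected and removed.
rbd flatten replicapool/csi-vol-b
rbd snap unprotect replicapool/csi-vol-a@snap-1
rbd snap rm replicapool/csi-vol-a@snap-1
```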
14 changes: 7 additions & 7 deletions docs/design/proposals/rbd-volume-healer.md
@@ -85,16 +85,16 @@ Volume healer does the below,
NodeStage, NodeUnstage, NodePublish, NodeUnPublish operations. Hence none of
the operations happen in parallel.
- Any issues if the NodeUnstage is issued by kubelet?
- This can not be a problem as we take a lock at the Ceph-CSI level
- If the NodeUnstage success, Ceph-CSI will return StagingPath not found
- This can not be a problem as we take a lock at the Ceph-CSI level
- If the NodeUnstage success, Ceph-CSI will return StagingPath not found
error, we can then skip
- If the NodeUnstage fails with an operation already going on, in the next
- If the NodeUnstage fails with an operation already going on, in the next
NodeUnstage the volume gets unmounted
- What if the PVC is deleted?
- If the PVC is deleted, the volume attachment list might already get
- If the PVC is deleted, the volume attachment list might already get
refreshed and entry will be skipped/deleted at the healer.
- For any reason, If the request bails out with Error NotFound, skip the
- For any reason, If the request bails out with Error NotFound, skip the
PVC, assuming it might have deleted or the NodeUnstage might have already
happened.
- The Volume healer currently works with rbd-nbd, but the design can
accommodate other userspace mounters (may be ceph-fuse).
- The Volume healer currently works with rbd-nbd, but the design can
accommodate other userspace mounters (may be ceph-fuse).
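As a related illustration, the attachment list the healer works from can be inspected on the cluster; the column spec below is illustrative:

```bash
kubectl get volumeattachments \
  -o custom-columns=NAME:.metadata.name,NODE:.spec.nodeName,ATTACHER:.spec.attacher
```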