* The type catalog should contain only core services related to the deployment of the Ceph cluster (monitors, osds, mgrs, etc.)
* Manifests that create pools, dashboards, and CephFS are moved to the function catalog.
* Code related to the OpenStack deployment is removed.
* The dashboard is disabled by default; the ingress controller is removed.
* The rook-operator version is upgraded to 1.5.9 to prevent incompatibility with pool quota settings.
* Fixed a minor bug in the site-level catalogue storage definition and in the replacement function.
* Added a cleanup manifest for StorageCatalogue.
* Added an airshipctl phase to deploy rook-operator.
* The implementation of the rook-ceph operator has been changed.
* Added the configuration for the CSI driver images.
* Added overrides for ceph.conf.
* Added configuration for rook-operator and Ceph images.
* Merge conflict resolution.
* Code standardization.
* Renamed rook-ceph-crds -> rook-operator.

Relates-to: [WIP] Expects to deliver Rook/Ceph via 2 phases
Relates-to: #30
Change-Id: I7ec7f756e742db1595143c2dfc6751b16fb25efb
SIGUNOV, VLADIMIR (vs422h) authored; Stephen Taylor committed on Apr 30, 2021
1 parent cefc656, commit fd3f0d7
Showing 45 changed files with 3,670 additions and 60 deletions.
75 changes: 75 additions & 0 deletions
manifests/function/rook-cluster/cephfs/base/filesystem.yaml
@@ -0,0 +1,75 @@
#################################################################################################################
# Create a filesystem with replication enabled for a production environment.
# A minimum of 3 OSDs on different nodes is required in this example.
# kubectl create -f filesystem.yaml
#################################################################################################################

apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: cephfs
  namespace: rook-ceph # namespace:cluster
spec:
  # The metadata pool spec. Must use replication.
  metadataPool:
    replicated:
      size: 3
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode: none
      # Gives a hint (%) to Ceph about the expected consumption of the total cluster capacity by a given pool
      # For more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      #target_size_ratio: ".5"
  # The list of data pool specs. Can use replication or erasure coding.
  # Whether to preserve the filesystem after CephFilesystem CRD deletion
  preserveFilesystemOnDelete: true
  # The metadata service (mds) configuration
  metadataServer:
    # The affinity rules to apply to the mds deployment
    placement:
      # nodeAffinity:
      #   requiredDuringSchedulingIgnoredDuringExecution:
      #     nodeSelectorTerms:
      #     - matchExpressions:
      #       - key: role
      #         operator: In
      #         values:
      #         - mds-node
      # topologySpreadConstraints:
      # tolerations:
      # - key: mds-node
      #   operator: Exists
      # podAffinity:
      podAntiAffinity:
        requiredDuringSchedulingIgnoredDuringExecution:
        - labelSelector:
            matchExpressions:
            - key: app
              operator: In
              values:
              - rook-ceph-mds
          # topologyKey: kubernetes.io/hostname will place MDS across different hosts
          topologyKey: kubernetes.io/hostname
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchExpressions:
              - key: app
                operator: In
                values:
                - rook-ceph-mds
            # topologyKey: */zone can be used to spread MDS across different AZs
            # Use <topologyKey: failure-domain.beta.kubernetes.io/zone> if your k8s cluster is v1.16 or lower
            # Use <topologyKey: topology.kubernetes.io/zone> if your k8s cluster is v1.17 or higher
            topologyKey: topology.kubernetes.io/zone
    # A key/value list of annotations
    annotations:
    #  key: value
    # A key/value list of labels
    labels:
    #  key: value
2 changes: 2 additions & 0 deletions
manifests/function/rook-cluster/cephfs/base/kustomization.yaml
@@ -0,0 +1,2 @@
resources:
- filesystem.yaml
@@ -0,0 +1,21 @@
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: cephfs
  namespace: rook-ceph # namespace:cluster
spec:
  metadataServer:
    # The number of active MDS instances
    activeCount: 1
    # Whether each active MDS instance will have an active standby with a warm metadata cache for faster failover.
    # If false, standbys will be available, but will not have a warm cache.
    activeStandby: true
    resources:
    # The requests and limits set here allow the filesystem MDS Pod(s) to use half of one CPU core and 1 gigabyte of memory
    # limits:
    #   cpu: "500m"
    #   memory: "1024Mi"
    # requests:
    #   cpu: "500m"
    #   memory: "1024Mi"
    # priorityClassName: my-priority-class
@@ -0,0 +1,20 @@
apiVersion: ceph.rook.io/v1
kind: CephFilesystem
metadata:
  name: cephfs
  namespace: rook-ceph # namespace:cluster
spec:
  dataPools:
  - failureDomain: host
    replicated:
      size: 3
      # Disallow setting a pool with replica 1, as this could lead to data loss without recovery.
      # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
      requireSafeReplicaSize: true
    parameters:
      # Inline compression mode for the data pool
      # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
      compression_mode: none
      # Gives a hint (%) to Ceph about the expected consumption of the total cluster capacity by a given pool
      # For more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
      target_size_ratio: ".5"
@@ -0,0 +1,5 @@
resources:
- ./base
patchesStrategicMerge:
- cephfs-pool.yaml
- cephfs-mds.yaml
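The overlay above layers the two strategic-merge patches over the base CephFilesystem. A site-level kustomization could consume this function in the usual kustomize way — the following is only an illustrative sketch, and the relative path is an assumption, not part of this change:

```yaml
# Hypothetical site-level kustomization consuming the cephfs function.
# The path below is an example; adjust it to your catalog layout.
resources:
- ../../../function/rook-cluster/cephfs
```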
19 changes: 19 additions & 0 deletions
manifests/function/rook-cluster/dashboard/base/external-dashboard.yaml
@@ -0,0 +1,19 @@
apiVersion: v1
kind: Service
metadata:
  name: rook-ceph-mgr-dashboard
  namespace: rook-ceph # namespace:cluster
  labels:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph # namespace:cluster
spec:
  ports:
  - name: dashboard
    port: 7000
    protocol: TCP
    targetPort: 7000
  selector:
    app: rook-ceph-mgr
    rook_cluster: rook-ceph
  sessionAffinity: None
  type: NodePort
2 changes: 2 additions & 0 deletions
manifests/function/rook-cluster/dashboard/base/kustomization.yaml
@@ -0,0 +1,2 @@
resources:
- external-dashboard.yaml
2 changes: 2 additions & 0 deletions
manifests/function/rook-cluster/dashboard/http/kustomization.yaml
@@ -0,0 +1,2 @@
resources:
- ../base
2 changes: 2 additions & 0 deletions
manifests/function/rook-cluster/pools/base/kustomization.yaml
@@ -0,0 +1,2 @@
resources:
- pool.yaml
@@ -0,0 +1,65 @@
#################################################################################################################
# Create a Ceph pool with settings for replication in production environments. A minimum of 3 OSDs on
# different hosts is required in this example.
# kubectl create -f pool.yaml
#################################################################################################################

apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: "pool"
  namespace: rook-ceph # namespace:cluster
spec:
  # The failure domain will spread the replicas of the data across different failure zones.
  # The default value is host. It could be osd or rack, depending on your crushmap.
  failureDomain: host
  # For a pool based on raw copies, specify the number of copies. A size of 1 indicates no redundancy.
  replicated:
    size: 3
    # Disallow setting a pool with replica 1, as this could lead to data loss without recovery.
    # Make sure you're *ABSOLUTELY CERTAIN* that is what you want
    requireSafeReplicaSize: true
    # The number of replicas per failure domain; the value must be a divisor of the replica count.
    # If specified, the most common value is 2 for stretch clusters, where the replica count would be 4.
    # replicasPerFailureDomain: 2
    # The name of the failure domain to place further down replicas
    # subFailureDomain: host
  # Ceph CRUSH root location of the rule
  # For reference: https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/#types-and-buckets
  #crushRoot: my-root
  # The Ceph CRUSH device class associated with the CRUSH replicated rule
  # For reference: https://docs.ceph.com/docs/nautilus/rados/operations/crush-map/#device-classes
  #deviceClass: my-class
  # Enables collecting RBD per-image IO statistics by enabling dynamic OSD performance counters. Defaults to false.
  # For reference: https://docs.ceph.com/docs/master/mgr/prometheus/#rbd-io-statistics
  # enableRBDStats: true
  # Set any property on a given pool
  # see https://docs.ceph.com/docs/master/rados/operations/pools/#set-pool-values
  parameters:
    # Inline compression mode for the data pool
    # Further reference: https://docs.ceph.com/docs/nautilus/rados/configuration/bluestore-config-ref/#inline-compression
    compression_mode: none
    # Gives a hint (%) to Ceph about the expected consumption of the total cluster capacity by a given pool
    # For more info: https://docs.ceph.com/docs/master/rados/operations/placement-groups/#specifying-expected-pool-size
    #target_size_ratio: ".5"
  mirroring:
    enabled: false
    # mirroring mode: pool level or per image
    # for more details see: https://docs.ceph.com/docs/master/rbd/rbd-mirroring/#enable-mirroring
    mode: image
    # specify the schedule(s) on which snapshots should be taken
    # snapshotSchedules:
    # - interval: 24h # daily snapshots
    #   startTime: 14:00:00-05:00
  # Reports pool mirroring status if enabled
  statusCheck:
    mirror:
      disabled: false
      interval: 60s
  # Quota in bytes and/or objects; the default value is 0 (unlimited)
  # see https://docs.ceph.com/en/latest/rados/operations/pools/#set-pool-quotas
  # quotas:
  #   maxSize: "10Gi" # valid suffixes include K, M, G, T, P, Ki, Mi, Gi, Ti, Pi
  #   maxObjects: 1000000000 # 1 billion objects
  # A key/value list of annotations
  annotations:
  #  key: value
@@ -0,0 +1,12 @@
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: pool
  namespace: rook-ceph
spec:
  failureDomain: host
  replicated:
    size: 2
  quotas:
    maxSize: "10Gi" # valid suffixes include K, M, G, T, P, Ki, Mi, Gi, Ti, Pi
    maxObjects: 1000000000 # 1 billion objects
5 changes: 5 additions & 0 deletions
manifests/function/rook-cluster/pools/data/kustomization.yaml
@@ -0,0 +1,5 @@
resources:
- ../base
namePrefix: data-
patchesStrategicMerge:
- data-pool.yaml
@@ -0,0 +1,3 @@
resources:
- ./rbd
- ./data
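Because the ./rbd and ./data overlays each set a namePrefix over the same base, building this kustomization should produce two distinct CephBlockPool objects from the single base "pool". Roughly, the rendered metadata would look like the following sketch (not verbatim kustomize output):

```yaml
# Sketch of the names expected after `kustomize build`,
# given namePrefix rbd- and data- in the respective overlays.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: rbd-pool
---
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: data-pool
```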
@@ -0,0 +1,5 @@
resources:
- ../base
namePrefix: rbd-
patchesStrategicMerge:
- rbd-pool.yaml
@@ -0,0 +1,13 @@
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: "pool"
  namespace: rook-ceph # namespace:cluster
spec:
  failureDomain: host
  replicated:
    size: 3
  quotas:
    maxSize: "0" # valid suffixes include K, M, G, T, P, Ki, Mi, Gi, Ti, Pi, e.g. "10Gi"
    # "0" means no quotas. Since Rook 1.5.9 you must use a string as the value's type
    maxObjects: 0 # 1000000000 = 1 billion objects; 0 means no quotas
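As the comment above notes, Rook 1.5.9 expects maxSize as a string, which is the incompatibility this change works around. A patch enabling a finite quota on this pool might look like the following sketch (the 10Gi and 1,000,000 values are arbitrary examples, not part of this change):

```yaml
# Illustrative strategic-merge patch only: caps the pool at 10Gi / 1M objects.
apiVersion: ceph.rook.io/v1
kind: CephBlockPool
metadata:
  name: "pool"
  namespace: rook-ceph
spec:
  quotas:
    maxSize: "10Gi"    # string type required since Rook 1.5.9
    maxObjects: 1000000
```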
This file was deleted.
This file was deleted.
@@ -1,6 +1,2 @@
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- namespace.yaml
- helmrepository.yaml
- helmrelease.yaml
- upstream
This file was deleted.