This picture suggests using Metac as a library inside the logic that implements K8s controller(s).

- Deployment here refers to a Kubernetes Deployment.
- The Deployment has a container image that is supposed to implement one or more K8s controllers.
- The Deployment image uses Metac as a library.
- Config refers to the configuration used by this Deployment image.
- The Config consists of a part of the Metac controller schema.
- Since this image handles the reconcile, i.e. sync, itself, there is no need to specify the hooks that Metac typically relies on when it runs as a standalone container.
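Config

Config to the deployment can look like below (the same set of resources as the filtered variant further down, since the config itself does not change between the two):

```yaml
watch:
  apiVersion: ddp.mayadata.io/v1
  resource: storages
attachments:
- apiVersion: v1
  resource: persistentvolumeclaims
- apiVersion: storage.k8s.io/v1
  resource: persistentvolumes
- apiVersion: v1
  resource: nodes
- apiVersion: csi.k8s.io/v1alpha1
  resource: csinodes
- apiVersion: csi.k8s.io/v1alpha1
  resource: volumeattachments
```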
The above YAML is actually a controller specific config that this deployment makes use of to get the current (i.e. last observed) state of the following resources: storages (the watched resource) along with its attachments, namely persistentvolumeclaims, persistentvolumes, nodes, csinodes & volumeattachments.
This observed state of the declared resources is passed directly to a pre-defined function, i.e. a hook (the reconcile function), as its argument(s). All of this is possible without the need to write logic around K8s Informers, work queues, code generation, etc. The reconcile function can then manipulate the state & send back the manipulated state as its response.
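As a rough sketch, such a reconcile function could have the following shape. The SyncHookRequest / SyncHookResponse types below follow the Metacontroller webhook convention and are assumptions for illustration, not Metac's exact library API:

```go
package controller

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// SyncHookRequest carries the last observed state: the watched resource
// plus the declared attachments. (Assumed shape, for illustration only.)
type SyncHookRequest struct {
	Watch       *unstructured.Unstructured
	Attachments []*unstructured.Unstructured
}

// SyncHookResponse carries the desired state back to the Metac library.
type SyncHookResponse struct {
	Attachments []*unstructured.Unstructured
}

// reconcile receives the observed state as plain arguments & returns the
// desired state; no Informers, work queues or generated clients involved.
func reconcile(req *SyncHookRequest) (*SyncHookResponse, error) {
	resp := &SyncHookResponse{}
	for _, att := range req.Attachments {
		// business logic only: inspect the attachment & include the
		// (possibly mutated) desired copy in the response
		desired := att.DeepCopy()
		resp.Attachments = append(resp.Attachments, desired)
	}
	return resp, nil
}
```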
Feeding the observed K8s state of the requested resources & applying the updated resources against the K8s cluster is handled auto-magically by the Metac code that this deployment uses as a library. The only thing the controller code needs to bother about is validating whether the resources are already in their desired state or need any changes. In other words, it deals with just the business logic.
On a side note, the above abstracts away the need to deal with a huge number of Kubernetes resources. However, this is not sufficient in the long run, since the above config loads a lot of resources that then need to be filtered later within the reconcile hook. A better config might look something like below:
```yaml
watch:
  apiVersion: ddp.mayadata.io/v1
  resource: storages
attachments:
- apiVersion: v1
  resource: persistentvolumeclaims # filter if claimed by watch or fetch all
- apiVersion: storage.k8s.io/v1
  resource: persistentvolumes # filter if claimed by watch or fetch all
- apiVersion: v1
  resource: nodes # filter if claimed by watch or fetch all
- apiVersion: csi.k8s.io/v1alpha1
  resource: csinodes # filter if claimed by watch or fetch all
- apiVersion: csi.k8s.io/v1alpha1
  resource: volumeattachments # filter if claimed by watch or fetch all
```
Note that there is no change to the config itself; the filtering hinted at by the comments above should perhaps happen internally within the Metac library.
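If Metac were to do this filtering internally, one possible approach, sketched here with a hypothetical helper (filterClaimedByWatch is not an actual Metac function), is to keep only the attachments whose ownerReferences claim the watched resource:

```go
package controller

import (
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
)

// filterClaimedByWatch keeps only those attachments whose ownerReferences
// point back at the watched resource. Hypothetical helper; shown only to
// illustrate filtering the Metac library could perform internally.
func filterClaimedByWatch(
	watch *unstructured.Unstructured,
	attachments []*unstructured.Unstructured,
) []*unstructured.Unstructured {
	var claimed []*unstructured.Unstructured
	for _, att := range attachments {
		for _, ref := range att.GetOwnerReferences() {
			if ref.UID == watch.GetUID() {
				claimed = append(claimed, att)
				break
			}
		}
	}
	return claimed
}
```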
Concluding remarks!!!
🤔 Can we make use of a lot of selectors to filter the watch & its attachments?
Yes. Metac will add multiple selectors like labelSelector, annotationSelector, nameSelector, namespaceSelector & so on. However, a controller should be designed with single responsibility in mind. In other words, when the config section grows with a lot of attachments, it is appropriate to re-design the config, i.e. break it into multiple smaller configs, feed them to the deployment image, and implement a matching number of reconcile functions. After all, all of them work towards getting to the desired state. A selector-based config could look something like the sketch below.
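For example, with such selectors in place, a config could narrow an attachment down before it ever reaches the reconcile hook. The exact schema below is illustrative, based on the selector names mentioned above, and the label value is hypothetical:

```yaml
watch:
  apiVersion: ddp.mayadata.io/v1
  resource: storages
attachments:
- apiVersion: v1
  resource: persistentvolumeclaims
  # illustrative selector usage; exact Metac schema may differ
  labelSelector:
    matchLabels:
      app: my-storage-app
```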