
Use Metac as a library #11

Closed
AmitKumarDas opened this issue Sep 12, 2019 · 2 comments
Labels: config · deploy (How Metac can be deployed) · gctl (Generic Controller) · library (Use Metac as a library) · sidecar (Deploy Metac as a sidecar)
AmitKumarDas commented Sep 12, 2019

(image: as-deployment)

This picture shows Metac being used as a library by the logic that implements the K8s controller(s).

  • Deployment here refers to a Kubernetes Deployment
  • The Deployment has a container image that implements one or more K8s controllers
  • The Deployment image uses Metac as a library
  • Config refers to the configuration used by this Deployment image
  • Config consists of a part of the Metac controller schema
  • Since this image handles the reconcile, i.e. sync, itself, there is no need to specify the hooks that Metac typically uses when it runs as a standalone container

Config

The config for the Deployment can look like the following:

watch:
  apiVersion: ddp.mayadata.io/v1
  resource: storages
attachments:
- apiVersion: v1
  resource: persistentvolumeclaims
- apiVersion: storage.k8s.io/v1
  resource: persistentvolumes
- apiVersion: v1
  resource: nodes
- apiVersion: csi.k8s.io/v1alpha1
  resource: csinodes
- apiVersion: csi.k8s.io/v1alpha1
  resource: volumeattachments

The above YAML is a controller-specific config that this Deployment uses to get the current (last observed) state of the following resources:

- storages
- persistentvolumeclaims
- persistentvolumes
- nodes
- csinodes
- volumeattachments

The observed state of the declared resources is passed directly to a pre-defined function, i.e. a hook (the reconcile function), as argument(s). All of this works without writing any logic around K8s informers, work queues, code generation, etc. The reconcile function can then manipulate the state and send the manipulated (desired) state back as its response.

Feeding the observed state of the requested resources from K8s, and applying the updated resources back against the K8s cluster, is handled auto-magically by Metac, which this Deployment uses as a library. The only thing the controller code needs to bother with is checking whether the resources are already in their desired state or need changes. In other words, it deals with just the business logic.
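A minimal sketch of what such an in-process reconcile function could look like. All of the type and function names here (Resource, SyncRequest, SyncResponse, sync) are hypothetical stand-ins, not Metac's actual API; the point is only that the hook receives the observed state as an argument and returns the desired state, while the library handles watching and applying:

```go
package main

import "fmt"

// Hypothetical minimal shapes for the payload a library-based
// sync hook receives and returns; not Metac's actual types.
type Resource struct {
	APIVersion string
	Kind       string
	Name       string
	Labels     map[string]string
}

type SyncRequest struct {
	Watch       Resource   // the watched resource, e.g. a Storage
	Attachments []Resource // observed state of declared attachments
}

type SyncResponse struct {
	Attachments []Resource // desired state handed back to the library
}

// sync holds only business logic: check whether the desired PVC
// already exists for the watched Storage, and declare it otherwise.
func sync(req SyncRequest) SyncResponse {
	desiredName := req.Watch.Name + "-pvc"
	for _, a := range req.Attachments {
		if a.Kind == "PersistentVolumeClaim" && a.Name == desiredName {
			// already in desired state; echo it back unchanged
			return SyncResponse{Attachments: []Resource{a}}
		}
	}
	// not found: declare the desired PVC; the library applies it
	return SyncResponse{Attachments: []Resource{{
		APIVersion: "v1",
		Kind:       "PersistentVolumeClaim",
		Name:       desiredName,
		Labels:     map[string]string{"storage": req.Watch.Name},
	}}}
}

func main() {
	req := SyncRequest{
		Watch: Resource{APIVersion: "ddp.mayadata.io/v1", Kind: "Storage", Name: "my-store"},
	}
	resp := sync(req)
	fmt.Println(resp.Attachments[0].Name) // my-store-pvc
}
```

There are no informers, listers or work queues in sight; the function is plain input-to-output logic, which also makes it trivially unit-testable.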

(image: reconcile-hook)

On a side note, the above abstracts away dealing with a huge number of Kubernetes resources. However, it is not sufficient in the long run, since it loads a lot of resources that then need to be filtered inside the reconcile hook. A better config might look something like the following:

watch:
  apiVersion: ddp.mayadata.io/v1
  resource: storages
attachments:
- apiVersion: v1
  resource: persistentvolumeclaims # filter if claimed by watch or fetch all
- apiVersion: storage.k8s.io/v1
  resource: persistentvolumes # filter if claimed by watch or fetch all
- apiVersion: v1
  resource: nodes # filter if claimed by watch or fetch all
- apiVersion: csi.k8s.io/v1alpha1
  resource: csinodes # filter if claimed by watch or fetch all
- apiVersion: csi.k8s.io/v1alpha1
  resource: volumeattachments # filter if claimed by watch or fetch all

Note that the config itself does not change; the filtering suggested in the comments should perhaps happen internally within the Metac library.
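One way the library could implement the "filter if claimed by watch" behaviour internally is to keep only those attachments whose ownerReferences point back at the watch's UID. A self-contained sketch of that idea, with hypothetical minimal types rather than the real Kubernetes metadata structs:

```go
package main

import "fmt"

// Hypothetical minimal shapes; not Metac's actual types.
type OwnerRef struct {
	UID string
}

type Attachment struct {
	Name      string
	OwnerRefs []OwnerRef
}

// filterClaimed keeps only the attachments claimed by the watch,
// i.e. those carrying an ownerReference with the watch's UID.
func filterClaimed(watchUID string, all []Attachment) []Attachment {
	var claimed []Attachment
	for _, a := range all {
		for _, ref := range a.OwnerRefs {
			if ref.UID == watchUID {
				claimed = append(claimed, a)
				break
			}
		}
	}
	return claimed
}

func main() {
	all := []Attachment{
		{Name: "pvc-1", OwnerRefs: []OwnerRef{{UID: "storage-123"}}},
		{Name: "pvc-2", OwnerRefs: []OwnerRef{{UID: "other-456"}}},
	}
	fmt.Println(len(filterClaimed("storage-123", all))) // 1
}
```

With this done inside the library, the reconcile hook only ever sees the attachments that actually belong to the watched resource.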

Concluding remarks!

🤔 Can we make use of selectors to filter the watch & its attachments?

Yes. Metac will add multiple selectors such as labelSelector, annotationSelector, nameSelector, namespaceSelector and so on. However, a controller should be designed with a single responsibility in mind. In other words, when the config grows with a lot of attachments, it is appropriate to redesign the config, i.e. break it into multiple smaller configs, feed them to the Deployment image, and implement a corresponding number of reconcile functions. After all, all of them work towards the same desired state.
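To illustrate the selector idea, a config with per-attachment selectors might look something like the sketch below. The selector field names and their placement are assumptions based on the selectors named above, not Metac's confirmed schema:

watch:
  apiVersion: ddp.mayadata.io/v1
  resource: storages
attachments:
- apiVersion: v1
  resource: persistentvolumeclaims
  labelSelector:          # illustrative; exact field names may differ
    matchLabels:
      app: ddp
- apiVersion: v1
  resource: nodes
  nameSelector:           # illustrative
  - node-1
  - node-2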

@AmitKumarDas AmitKumarDas added deploy How Metac can be deployed library Use Metac as a library labels Sep 12, 2019
@AmitKumarDas

Adding more thoughts

  • How about multiple sidecars (container images) each with a single controller responsibility?
    • Each of these sidecars will reconcile based on its own controller config
  • How about one or more init containers?
    • Each init container will install CRDs, ConfigMaps, Secrets, and so on
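The sidecar plus init-container layout sketched above could translate into a Deployment spec along these lines. The image names and container names are purely illustrative:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: storage-controllers
spec:
  replicas: 1
  selector:
    matchLabels:
      app: storage-controllers
  template:
    metadata:
      labels:
        app: storage-controllers
    spec:
      initContainers:
      - name: install-crds          # installs CRDs, ConfigMaps, Secrets
        image: example/crd-installer:latest
      containers:
      - name: storage-reconciler    # one controller responsibility
        image: example/storage-reconciler:latest
      - name: volume-reconciler     # another controller responsibility
        image: example/volume-reconciler:latest

Each sidecar embeds Metac as a library and reconciles based on its own controller config, keeping one responsibility per container.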

@AmitKumarDas AmitKumarDas added the sidecar Deploy Metac as a sidecar label Sep 13, 2019
@AmitKumarDas

Metac supports being imported as a library. Some of the examples that use metac as a library can be found here:

@AmitKumarDas AmitKumarDas added config gctl Generic Controller labels Nov 13, 2019
@AmitKumarDas AmitKumarDas self-assigned this Nov 13, 2019