
docs: add EDOT collector kube-stack Helm values #5822

Draft · rogercoll wants to merge 12 commits into base: main
Conversation

rogercoll
Contributor

@rogercoll rogercoll commented Oct 18, 2024

What does this PR do?

Adds a sample configuration to be used when deploying the OpenTelemetry Kube Stack Helm Chart.
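
For context, a minimal sketch of the kind of values this file provides (the defaultCRConfig image fields match the snippet reviewed later in this PR; the collectors/env wiring is an illustrative assumption based on the onboarding flow):

```
# Sketch only: override the collector image the kube-stack chart deploys.
defaultCRConfig:
  image:
    repository: "docker.elastic.co/beats/elastic-agent"
    tag: "8.16.0-SNAPSHOT"
collectors:
  daemon:
    # Illustrative assumption: read the Elasticsearch endpoint from a
    # pre-created Kubernetes secret so the collector can ship data.
    env:
      - name: ELASTIC_ENDPOINT
        valueFrom:
          secretKeyRef:
            name: elastic-secret
            key: elastic_endpoint
```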

Why is it important?

This configuration will be used during the Elastic K8s Onboarding experience starting from 8.16. The EDOT collector is the central piece of the whole Chart deployment; if we provide an invalid image or collector configuration, the deployment will fail.

The file is currently hosted in the elastic/opentelemetry repository, but it needs to be versioned because it is tied to the stack's (elastic-agent) release version. The elastic/opentelemetry repository does not yet have a release strategy (nor backports); since the EDOT collector is the backbone of the deployment, we think hosting this file in this repository is more appropriate.

Checklist

  • My code follows the style guidelines of this project
  • I have commented my code, particularly in hard-to-understand areas: TODO tasks are listed to be addressed during the 8.16 FF phase
  • I have made corresponding changes to the documentation
  • I have made corresponding changes to the default configuration files
  • I have added tests that prove my fix is effective or that my feature works
  • I have added an entry in ./changelog/fragments using the changelog tool
  • I have added an integration test or an E2E test

Disruptive User Impact

No

How to test this PR locally

Added a README.md file with an installation section.

Related issues

Questions to ask yourself

  • How are we going to support this in production?
  • How are we going to measure its adoption?
  • How are we going to debug this?
  • What are the metrics I should take care of?
  • ...

Contributor

mergify bot commented Oct 18, 2024

This pull request does not have a backport label. Could you fix it @rogercoll? 🙏
To fix up this pull request, you need to add the backport labels for the needed
branches, such as:

  • backport-8.\d is the label to automatically backport to the 8.\d branch, where \d is the digit

Contributor

mergify bot commented Oct 18, 2024

The backport-8.x label has been added to help with the transition to the new 8.x branch.
If you don't need it, please add the backport-skip label and remove the backport-8.x label.

@mergify mergify bot added the backport-8.x Automated backport to the 8.x branch with mergify label Oct 18, 2024
@rogercoll rogercoll added the backport-8.16 Automated backport with mergify label Oct 18, 2024
@rogercoll rogercoll marked this pull request as ready for review October 18, 2024 07:12
@rogercoll rogercoll requested a review from a team as a code owner October 18, 2024 07:12
@pierrehilbert pierrehilbert added the Team:Elastic-Agent-Control-Plane Label for the Agent Control Plane team label Oct 18, 2024
@elasticmachine
Contributor

Pinging @elastic/elastic-agent-control-plane (Team:Elastic-Agent-Control-Plane)

rogercoll and others added 6 commits October 18, 2024 10:02
Co-authored-by: Andrew Gizas <andreas.gkizas@elastic.co>
Co-authored-by: Andrew Gizas <andreas.gkizas@elastic.co>
Co-authored-by: Andrew Gizas <andreas.gkizas@elastic.co>
Co-authored-by: Andrew Gizas <andreas.gkizas@elastic.co>
Co-authored-by: Andrew Gizas <andreas.gkizas@elastic.co>
Co-authored-by: Andrew Gizas <andreas.gkizas@elastic.co>
```
$ helm repo add open-telemetry https://open-telemetry.github.io/opentelemetry-helm-charts
$ helm repo update
$ helm upgrade --install --namespace opentelemetry-operator-system opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack --values ./resources/kubernetes/operator/helm/values.yaml --version 0.3.0
```
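
A quick sanity check after installation might look like this (a hedged sketch; the label selector is an assumption and may differ in the actual chart):

```
$ kubectl get pods -n opentelemetry-operator-system
$ kubectl logs -n opentelemetry-operator-system -l app.kubernetes.io/instance=opentelemetry-kube-stack --tail=50
```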
Contributor


I would update the path to values.yaml to be relative to what will be in this folder,
so not ./resources/kubernetes/operator/helm/values.yaml but rather /deploy/helm ....

Contributor Author


Good point, I think I would prefer to keep it relative to the README's directory: 89cd2a6
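
For illustration, with the values file kept relative to the README's directory, the install command from the snippet above would become something like this (hypothetical relative path; the actual path is in commit 89cd2a6):

```
$ helm upgrade --install --namespace opentelemetry-operator-system opentelemetry-kube-stack open-telemetry/opentelemetry-kube-stack --values ./values.yaml --version 0.3.0
```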

Co-authored-by: Andrew Gizas <andreas.gkizas@elastic.co>
@gizas
Contributor

gizas commented Oct 18, 2024

nit: I don't have a strong preference, but I wonder if we need the extra subfolder kube-stack.
It is currently: deploy/helm/edot-collector/kube-stack/

@rogercoll
Contributor Author

nit: I don't have a strong preference, but I wonder if we need the extra subfolder kube-stack. It is currently: deploy/helm/edot-collector/kube-stack/

No strong opinion either; having the extra directory might make it easier for us to add more Helm Charts later. Open to any suggestion.

@rogercoll
Contributor Author

@eedugon Adding you as a reviewer in case you want to share your feedback about the values path or anything else related to the PR. Thanks!

@jlind23 jlind23 requested review from pkoutsovasilis and swiatekm and removed request for andrzej-stencel and michel-laterman October 18, 2024 11:14
Contributor

@pkoutsovasilis pkoutsovasilis left a comment


@rogercoll while I am still reading through this PR, I have the following questions off the top of my head:

  • since this is another "deployment artifact" of elastic-agent, an integration test should be part of this PR that verifies at least that the agent pod gets deployed and becomes healthy; is there any that I missed?
  • how do you plan to keep the agent version in the values.yaml updated?

@rogercoll
Contributor Author

since this is another "deployment artifact" of elastic-agent, an integration test should be part of this PR that verifies at least that the agent pod gets deployed and becomes healthy; is there any that I missed?

Totally agree; is there any example I can look into?

how do you plan to keep the agent version in the values.yaml updated?

I was thinking of Renovate. I'm not very familiar with the elastic-agent code; I will take a look.

@pkoutsovasilis
Contributor

Totally agree; is there any example I can look into?

You can look here; I believe that with a few minor changes you can definitely test the EDOT helm chart.

I was thinking of Renovate. I'm not very familiar with the elastic-agent code; I will take a look.

To be honest I am not super familiar with Renovate; maybe now we could do the reverse of the above, and you give me a PR/code link so I can have a look and get more familiar with it? 🙂

@rogercoll
Contributor Author

To be honest I am not super familiar with Renovate; maybe now we could do the reverse of the above, and you give me a PR/code link so I can have a look and get more familiar with it? 🙂

https://github.com/open-telemetry/opentelemetry-demo/blob/main/renovate.json5#L113
In the shared file, Renovate is configured to find the defined container images using regular expressions, and it then creates a PR to bump the version if needed.
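
For reference, a hedged sketch of such a regex-based rule for this repository (the file path, regex, and manager layout are assumptions; the linked opentelemetry-demo config shows the real pattern):

```
// renovate.json5 sketch (JSON5, matching the linked file's format)
{
  customManagers: [
    {
      customType: "regex",
      // Hypothetical path to the values file added in this PR
      fileMatch: ["^deploy/helm/edot-collector/kube-stack/values\\.yaml$"],
      // Assumed layout: a repository line followed by a tag line
      matchStrings: [
        'repository: "(?<depName>docker\\.elastic\\.co/beats/elastic-agent)"\\s+tag: "(?<currentValue>[^"]+)"',
      ],
      datasourceTemplate: "docker",
    },
  ],
}
```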

Is there any other tool already available to do that? How are PRs like #4592 created?

@rogercoll rogercoll marked this pull request as draft October 18, 2024 14:40
```
defaultCRConfig:
  image:
    repository: "docker.elastic.co/beats/elastic-agent"
    tag: "8.16.0-SNAPSHOT"
```
Contributor


To @pkoutsovasilis' point, I'd either comment this out or put a placeholder here, and assume implicitly that the agent version from the git revision applies. Otherwise we'll need to keep updating it via tooling on every active branch, which is going to be very annoying. Even in this PR it should be 8.17.0, since there's already a separate branch for 8.16.

Not sure how much of a problem this will be for Kibana. Maybe they can just fill the stack version in here?
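
A sketch of what the placeholder idea could look like (hypothetical; how Kibana would inject the version is exactly the open question here):

```
defaultCRConfig:
  image:
    repository: "docker.elastic.co/beats/elastic-agent"
    # Placeholder: leave the tag empty and let the onboarding flow inject the
    # stack version at install time, e.g.:
    #   helm upgrade --install ... --set defaultCRConfig.image.tag=8.17.0
    tag: ""
```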
