External-dns project scope #1923

Closed
ytsarev opened this issue Jan 15, 2021 · 21 comments · Fixed by #2157
Labels: kind/support, lifecycle/rotten

Comments

@ytsarev (Member) commented Jan 15, 2021

Recently we ran into a lack of clarity about the scope of the project.

We are facing a situation where:

  1. The main project maintainers reasonably try to keep the scope focused on the original external-dns use case, that is, the creation of A/CNAME records from associated Service and Ingress objects; see #1895 (comment).

  2. Meanwhile, the community frequently asks external-dns to support more flexible cases, with extended DNS record type support and the CRD source, as in #1647 (unable to create DNSEndpoint of record types other than A or CNAME).

This has already led to ambiguity and unexpected issues like #1895, which was introduced by the addition of NS record support in #1813.

The immediate problem was mitigated by keeping the extended functionality behind a flag in #1915.

We need to clarify and agree on a future strategy for extending external-dns functionality.

Practically, it boils down to whether we should add support for new DNS record types and, if so, whether that support should be disabled by default and controlled by an associated flag.
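
For illustration, here is a minimal sketch of what flag-controlled record type support could look like (hypothetical Go code, not external-dns's actual implementation; the flag name --managed-record-types is only illustrative): record types outside the allowed set are simply skipped.

package main

import (
	"flag"
	"fmt"
	"strings"
)

// managedTypes is a hypothetical flag gating which record types the
// controller is allowed to touch; everything else is ignored.
var managedTypes = flag.String("managed-record-types", "A,CNAME",
	"comma-separated DNS record types the controller may manage")

func main() {
	flag.Parse()
	allowed := map[string]bool{}
	for _, t := range strings.Split(*managedTypes, ",") {
		allowed[strings.ToUpper(strings.TrimSpace(t))] = true
	}

	// Desired records coming from the sources; NS stays disabled unless
	// explicitly enabled via the flag.
	for _, recordType := range []string{"A", "CNAME", "NS"} {
		if allowed[recordType] {
			fmt.Println("managing", recordType)
		} else {
			fmt.Println("skipping", recordType, "(not in --managed-record-types)")
		}
	}
}

With the default value this manages A and CNAME and skips NS, which is the opt-in behaviour described above.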

Currently, there is visible demand for extending SRV and TXT record support:

@ytsarev added the kind/support label Jan 15, 2021
@ytsarev (Member, Author) commented Jan 15, 2021

/assign @Raffo @njuettner

@Raffo (Contributor) commented Jan 17, 2021

Thanks for the issue @ytsarev. I'm going to chat about this with @njuettner in our weekly meeting. I'd love to see the community chip in on this; we could also extend it to a video chat in which we discuss it, and I'd be open to that.

@Raffo (Contributor) commented Jan 20, 2021

@njuettner, @linki, and I chatted about this issue, and while we did not reach an agreement on how to move forward with a decision, I thought I would share an update on the topic:

  • we restated that the original goal of the project is to solve the simple use case of CNAME and A records.
  • we acknowledged that there is interest in other record types, and that this clashes with the default TXT registry.
  • we discussed having a way to force the noop registry as the default in case we want to support additional record types.
  • we considered whether it would make sense to delegate this job to projects designed for it (e.g. https://github.com/octodns/octodns) and keep ExternalDNS simple.
  • we acknowledged that there are already CRDs for this job, for example what GCP has with their config connector: https://github.com/GoogleCloudPlatform/k8s-config-connector/tree/master/samples/resources/dnsrecordset

Again, no final decision; we will be thinking about this further. Further input and comments on the points above are clearly welcome.

@danopia commented Jan 20, 2021

Hi, I'm offering a voice here as someone who expected too much from external-dns. The CRD source seemed like a really powerful step up when I came across it. In particular, I wanted external-dns to satisfy cert-manager challenges and to create A/AAAA records pointing to a CDN. It took a week or two for me to realize the CRD source was a real over-promise. If external-dns wants to keep its scope from expanding, then I think documenting that in the CRD source would help a fair bit, since it provides the widest gate.

As record types are currently limited here, I ended up writing an experimental external-dns-like loop [1] with a similar TXT registry. I added a list of managed record types to the registry TXT records so that, for example, the program is happy to own only A/AAAA/MX for the domain apex and at the same time manage NS records for a subdomain. This granularity was a relatively small addition that covers a fair amount of ground.
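
To illustrate the idea (a purely hypothetical sketch of such a payload; the external-dns/managed-types key is invented here, not danopia's actual format or external-dns's): the ownership TXT record could carry the managed-types list next to the usual heritage fields:

example.org. 300 IN TXT "heritage=external-dns,external-dns/owner=my-cluster,external-dns/managed-types=A/AAAA/MX"

The loop would then only create or delete a record if its type appears in that list, which yields exactly the per-type granularity described above.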

BTW, I'm not sure that even A/CNAME is enough; I'd argue that A/AAAA/CNAME would be the most forward-thinking choice. There are definitely ongoing efforts around dual-stack Kubernetes that would appreciate that. Currently, external-dns has a tendency to generate invalid A records when it stumbles across IPv6 addresses and then fails to do anything, e.g. #1887, #1812.
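
As a minimal sketch of the address-family handling this needs (hypothetical code; suggestRecordType is an invented helper, not part of external-dns), Go's net package makes the A/AAAA split straightforward:

package main

import (
	"fmt"
	"net"
)

// suggestRecordType is a hypothetical helper: it picks the record type from
// the target's address family, so IPv6 targets become AAAA records instead
// of invalid A records.
func suggestRecordType(target string) string {
	ip := net.ParseIP(target)
	if ip == nil {
		// Not an IP literal; a hostname target would typically become a CNAME.
		return "CNAME"
	}
	if ip.To4() != nil {
		return "A"
	}
	return "AAAA"
}

func main() {
	for _, target := range []string{"203.0.113.10", "2001:db8::1", "lb.example.org"} {
		fmt.Printf("%s -> %s\n", target, suggestRecordType(target))
	}
}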

@ytsarev (Member, Author) commented Jan 20, 2021

@Raffo what is conceptually wrong with the flagged approach you recently introduced?

We can keep additional record support optional, so that only people who require advanced external-dns operations will enable it.

Personally, I rely heavily on the DNSEndpoint CRD and NS record support in my project, and if they go away I will have no other choice but to fork external-dns.

Hopefully, we can avoid that.

@Raffo (Contributor) commented Jan 21, 2021

@danopia I will take a look at your AAAA PR and your project to see if they can inform where we need to take ExternalDNS. One thing that I mentioned when talking with the other maintainers is that I would love to avoid generating many forks, as this is exactly the type of situation that we wanted to avoid when the project was created. The issue now is really having the capacity to maintain the project.

@Raffo what is conceptually wrong with the flagged approach you recently introduced?

@ytsarev There's nothing wrong with that; I'd love to try to understand how we can evolve the project while avoiding significant slips like the one that happened with 0.7.5. Adding complexity will obviously make those regressions more probable.

To be clear: at the moment we are not planning to remove any functionality.

@ytsarev (Member, Author) commented Jan 21, 2021

@Raffo great, but if we add support for the next DNS record type (as requested in #1647), is it OK to proceed with the flagged approach at the moment?

@Raffo (Contributor) commented Jan 21, 2021

I would love to wait for another loop with the other maintainers before merging more record types.

@ytsarev (Member, Author) commented Jan 21, 2021

Got it, makes sense; will watch this discussion closely.

@haslersn commented Mar 1, 2021

How does the TXT registry work with multiple record types? Is this documented somewhere?

Without knowing how it currently works, here is what I imagine would be reasonable: for each RRSet type <type>, prepend the prefix <type>. to the TXT prefix. For instance, if I use --txt-prefix=_heritage, the registry entry for an RRSet of name <name> and type <type> would live under

<type>._heritage.<name> 300 IN TXT "heritage=external-dns,..."
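
As a concrete (hypothetical) instance of that scheme: with --txt-prefix=_heritage, the registry entries for an A RRSet at foo.example.com and an NS RRSet at sub.example.com would live at

a._heritage.foo.example.com 300 IN TXT "heritage=external-dns,..."
ns._heritage.sub.example.com 300 IN TXT "heritage=external-dns,..."

so ownership is tracked per record type rather than per name alone.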

gerhard added a commit to thechangelog/changelog.com that referenced this issue Mar 3, 2021
Includes DNSEndpoint support, but read the comments in the template.yml for gotchas. TL;DR: only A & CNAME records are supported. AAAA is most missed.

re kubernetes-sigs/external-dns#1923
re kubernetes-sigs/external-dns#1887

Signed-off-by: Gerhard Lazu <gerhard@lazu.co.uk>
@morremeyer (Contributor)

@Raffo I have one question regarding the following:

* we restated that the original goal of the project is to solve the simple use case of CNAME and A records.

Is not mentioning AAAA records an oversight here?

@Raffo (Contributor) commented Mar 31, 2021

How does the TXT registry work with multiple record types? Is this documented somewhere?

@haslersn it doesn't, that's the problem. We could expand it as you proposed.

Is not mentioning AAAA records an oversight here?

@morremeyer yes, totally, AAAA needs to be supported.

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Jun 29, 2021
@danopia commented Jul 1, 2021

Perhaps this sort of meta ticket should be excluded from staleness?

@k0da mentioned this issue Jul 6, 2021
@k0da (Contributor) commented Jul 6, 2021

/remove-lifecycle stale

@k8s-ci-robot removed the lifecycle/stale label Jul 6, 2021
@k0da (Contributor) commented Jul 6, 2021

@Raffo I implemented @haslersn's suggestion in #2157.

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle stale
  • Mark this issue or PR as rotten with /lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label Oct 4, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Close this issue or PR with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot removed the lifecycle/stale label and added the lifecycle/rotten label Nov 3, 2021
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues and PRs according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue or PR with /reopen
  • Mark this issue or PR as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close

@k8s-ci-robot (Contributor)

@k8s-triage-robot: Closing this issue.

In response to the /close message quoted above.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k0da
Copy link
Contributor

k0da commented Apr 19, 2022

As this is now implemented, it theoretically opens the way to handle any record type.
