
'Nodes' source assumes all discovered IPs are IPv4 #1875

Closed
danopia opened this issue Nov 28, 2020 · 7 comments · Fixed by #3588
Labels
kind/feature Categorizes issue or PR as related to a new feature.
lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness.

Comments


danopia commented Nov 28, 2020

What would you like to be added:

The node.go source only knows how to make A endpoints: https://github.com/kubernetes-sigs/external-dns/blob/master/source/node.go#L129

The source should be extended to recognize IPv6 addresses and add them as AAAA endpoints instead of A.
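For illustration, a minimal sketch of the classification I have in mind, using the standard library's net package (recordTypeForAddress is a hypothetical helper name, not something that exists in node.go today):

    import "net"

    // recordTypeForAddress is a hypothetical helper: it returns "A" for an
    // IPv4 address, "AAAA" for an IPv6 address, and "" when the string does
    // not parse as an IP address at all.
    func recordTypeForAddress(addr string) string {
        ip := net.ParseIP(addr)
        if ip == nil {
            return ""
        }
        if ip.To4() != nil {
            return "A" // IPv4, or an IPv4-mapped IPv6 address
        }
        return "AAAA"
    }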

I took a stab at a refactor but struggled with the code as written; it only wants to think about one type of endpoint. This is the basic gist of the tests that should pass:

		{
			"node with both IPv4 and IPv6 addresses results in both A and AAAA records",
			"",
			"",
			"node1",
			[]v1.NodeAddress{{Type: v1.NodeInternalIP, Address: "2.3.4.5"}, {Type: v1.NodeInternalIP, Address: "2::5"}},
			map[string]string{},
			map[string]string{},
			[]*endpoint.Endpoint{ // expected endpoints
				{RecordType: "A", DNSName: "node1", Targets: endpoint.Targets{"2.3.4.5"}},
				{RecordType: "AAAA", DNSName: "node1", Targets: endpoint.Targets{"2::5"}},
			},
			false, // expectError
		},
		{
			"node with only IPv6 addresses results in only AAAA records",
			"",
			"",
			"node1",
			[]v1.NodeAddress{{Type: v1.NodeInternalIP, Address: "1::1"}, {Type: v1.NodeInternalIP, Address: "2::5"}},
			map[string]string{},
			map[string]string{},
			[]*endpoint.Endpoint{ // expected endpoints
				{RecordType: "AAAA", DNSName: "node1", Targets: endpoint.Targets{"1::1", "2::5"}},
			},
			false, // expectError
		},
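The grouping the two cases above imply could look roughly like this. buildEndpoints is illustrative only and leans on the hypothetical recordTypeForAddress helper sketched earlier; v1.NodeAddress and endpoint.Endpoint/endpoint.Targets are the real types already used in the test cases:

    // buildEndpoints buckets a node's addresses by family and emits at most
    // one A and one AAAA endpoint, each carrying every target of that family.
    func buildEndpoints(dnsName string, addrs []v1.NodeAddress) []*endpoint.Endpoint {
        targetsByType := map[string]endpoint.Targets{}
        for _, a := range addrs {
            if rt := recordTypeForAddress(a.Address); rt != "" {
                targetsByType[rt] = append(targetsByType[rt], a.Address)
            }
        }
        var eps []*endpoint.Endpoint
        for _, rt := range []string{"A", "AAAA"} {
            if targets, ok := targetsByType[rt]; ok {
                eps = append(eps, &endpoint.Endpoint{
                    RecordType: rt,
                    DNSName:    dnsName,
                    Targets:    targets,
                })
            }
        }
        return eps
    }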

Why is this needed:

I have geographically diverse nodes, and external-dns's nodes source made for a pretty sweet Kubernetes-native dynamic DNS solution. Unfortunately, I hit a limit when I added a dual-stack node. (This matters even more because the relevant ISP rotates the customer's IPv6 prefix every few days.)

time="2020-11-28T21:08:34Z" level=info msg="Change zone: myzone batch #0"
time="2020-11-28T21:08:34Z" level=info msg="Del records: chicago.my.zone. A [8.8.8.8] 300"
time="2020-11-28T21:08:34Z" level=info msg="Del records: chicago.my.zone. TXT [\"heritage=external-dns,external-dns/owner=nodes-usc1f\"] 300"
time="2020-11-28T21:08:34Z" level=info msg="Add records: chicago.my.zone. A [8.8.8.8 2001::1] 300"
time="2020-11-28T21:08:34Z" level=info msg="Add records: chicago.my.zone. TXT [\"heritage=external-dns,external-dns/owner=nodes-usc1f\"] 300"
time="2020-11-28T21:08:34Z" level=error msg="googleapi: Error 400: Invalid value for 'entity.change.additions[0].rrdata[1]': '2001::1', invalid"
@danopia danopia added the kind/feature Categorizes issue or PR as related to a new feature. label Nov 28, 2020
danopia (author) commented Nov 29, 2020

Well, I wrote my intended sloppy patch to node.go (linked in the activity above), and it satisfies my test cases, but apparently external-dns has larger issues with IPv6 preparedness:

  1. AAAA records just aren't planned by default: https://github.com/kubernetes-sigs/external-dns/blob/master/plan/plan.go#L245
  2. When planning multiple endpoints for one DNS name, the planner just takes the "less" endpoint and rolls with it (see the sketch below).

So I suppose this is somewhat blocked on overall AAAA support in external-dns. I also tried patching AAAA into the plannable record passlist; now IPv6-only nodes work correctly but keep re-adding the AAAA record over and over, while dual-stack nodes simply pick one address family to use.
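To make point 2 concrete, here's a minimal sketch of the direction I mean (assuming nothing about plan.go's actual structure; planKey and groupForPlanning are hypothetical names): key the planner's rows by DNS name plus record type, so an A and an AAAA for the same name never compete.

    // planKey is a hypothetical composite key. Keying rows on both fields
    // means an A and an AAAA endpoint for the same DNS name are planned as
    // two independent rows instead of one of them "winning".
    type planKey struct {
        dnsName    string
        recordType string
    }

    func groupForPlanning(eps []*endpoint.Endpoint) map[planKey]*endpoint.Endpoint {
        rows := make(map[planKey]*endpoint.Endpoint, len(eps))
        for _, ep := range eps {
            rows[planKey{ep.DNSName, ep.RecordType}] = ep
        }
        return rows
    }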

Also, related, another source that hardcodes A: #1812

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 27, 2021

unixfox commented Feb 27, 2021

/remove-lifecycle stale

@k8s-ci-robot k8s-ci-robot removed the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Feb 27, 2021
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label May 28, 2021
@morremeyer (Contributor)

/lifecycle frozen

This is part of the IPv6 efforts and should therefore not be auto-closed.

@k8s-ci-robot k8s-ci-robot added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels May 28, 2021
samip5 (Contributor) commented Sep 15, 2021

Could this get some more traction, given that IPv6 support in Kubernetes is moving to stable in the next release?


djh00t commented Apr 21, 2023

Is there any news on this feature?
