
While using tailscale, uninstall should clean up the routes in table 52 that were added on startup. #7772

Closed
ShylajaDevadiga opened this issue Jun 13, 2023 · 2 comments
Labels
kind/bug Something isn't working

@ShylajaDevadiga (Contributor)
Environmental Info:
K3s Version:
k3s version v1.27.2+k3s-b66a1183

Node(s) CPU architecture, OS, and Version:
SLES15 SP3

Cluster Configuration:
Single-node or multi-node

Describe the bug:
After installing k3s with tailscale, pods end up in CrashLoopBackOff state because stale routes from a previous installation remain in table 52.

Steps To Reproduce:
Install multiple clusters using default route

Expected behavior:
Routes in table 52 should be removed during uninstall to avoid network issues.

Actual behavior:
Routes in table 52 are still present after uninstall.

Additional context / logs:
After uninstalling k3s

ip route show table 52
10.42.0.0/24 dev tailscale0 
10.42.1.0/24 dev tailscale0 
10.42.2.0/24 dev tailscale0 
10.42.3.0/24 dev tailscale0 
10.50.0.0/24 dev tailscale0 
1.2.3.4 dev tailscale0 
1.2.3.5 dev tailscale0 
1.2.3.6 dev tailscale0 
1.2.3.7 dev tailscale0 
1.2.3.8 dev tailscale0 
1.2.3.9 dev tailscale0 
1.2.3.10 dev tailscale0 
@ShylajaDevadiga (Contributor, Author)

After uninstalling k3s version v1.27.3-rc1+k3s1, routes in table 52 persist and the tailscale interface still has an IP:

$ k3s -v
-bash: /usr/local/bin/k3s: No such file or directory

$ ip route show table 52
10.42.0.0/24 dev tailscale0 
1.2.3.4 dev tailscale0 
1.2.3.5 dev tailscale0 
1.2.3.6 dev tailscale0 
1.2.3.7 dev tailscale0 
$ ip a|grep tailscale
3: tailscale0: <POINTOPOINT,MULTICAST,NOARP,UP,LOWER_UP> mtu 1280 qdisc fq_codel state UNKNOWN group default qlen 500
    inet <REDACTED>/32 scope global tailscale0
$ tailscale status --json
{
  "Version": "1.42.0-t3a83d61ec-g6702f39bf",
...

@manuelbuil (Contributor)

This is related to an incorrect scope of the tailscale key, which makes all nodes share the same tailscale network even when they belong to different k3s clusters. As a consequence, their subnets get pushed into table 52 as soon as the tailscale client logs in.
