kwt net fails to start, DNS doesn't resolve #21

Open
cameronbraid opened this issue Mar 13, 2020 · 7 comments

@cameronbraid
Contributor

kwt version
Client Version: 0.0.6

Succeeded

Running kwt net start never reaches the "ForwardingProxy: Ready!" log line shown in the README:

sudo -E kwt net start --debug
02:37:21PM: debug: KubeSubnets: Finished fetching pods (53) and services (29) in 29.67472ms
02:37:21PM: debug: ReconnSSHClient: Trying to reconnect SSH client
02:37:21PM: info: KubeEntryPoint: Creating networking client secret 'kwt-net-ssh-key' in namespace 'default'...
02:37:21PM: info: KubeEntryPoint: Creating networking host secret 'kwt-net-host-key' in namespace 'default'...
02:37:21PM: info: KubeEntryPoint: Creating networking pod 'kwt-net' in namespace 'default'
02:37:21PM: info: KubeEntryPoint: Waiting for networking pod 'kwt-net' in namespace 'default' to start...
02:37:21PM: debug: KubePortForward: Starting port forwarding
02:37:21PM: debug: KubePortForward: out: Forwarding from 127.0.0.1:44959 -> 2048

02:37:21PM: debug: KubePortForward: err: 
02:37:21PM: debug: ReconnSSHClient: Reconnected SSH client
02:37:21PM: info: dns.FailoverRecursorPool: Starting with '127.0.0.1:53'
02:37:21PM: debug: dns.DomainsMux: Updating DNS domain handlers: map[cluster.local.:kube-dns]
02:37:21PM: info: dns.DomainsMux: Registering cluster.local.->kube-dns
02:37:21PM: debug: dns.DNSOSCache: Skipping clearing of OS DNS cache
02:37:21PM: debug: dns.DomainsMux: Updating DNS domain handlers: map[cluster.local.:kube-dns]
02:37:21PM: info: TCPProxy: Started proxy on 127.0.0.1:45955
02:37:21PM: info: UDPProxy: Started proxy on 127.0.0.1:40387
02:37:21PM: info: dns.Server: Started DNS server on 127.0.0.1:37265 (TCP) and 127.0.0.1:38123 (UDP)
02:37:21PM: debug: OsCmdExecutor: Running 'iptables -w -L -t nat'
02:37:24PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:27PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:30PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:33PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:36PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:39PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:42PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:45PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:48PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:51PM: debug: dns.DomainsMux: Updating DNS domain handlers: map[cluster.local.:kube-dns]
02:37:51PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:54PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)
02:37:57PM: debug: SSHClient: Sending keepalive: false [] %!s(<nil>)

DNS lookups also fail:

> dig whoami.demo.svc.cluster.local

; <<>> DiG 9.11.5-P4-5.1ubuntu2.1-Ubuntu <<>> whoami.demo.svc.cluster.local
;; global options: +cmd
;; Got answer:
;; WARNING: .local is reserved for Multicast DNS
;; You are currently testing what happens when an mDNS query is leaked to DNS
;; ->>HEADER<<- opcode: QUERY, status: SERVFAIL, id: 21784
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;whoami.demo.svc.cluster.local.	IN	A

;; Query time: 0 msec
;; SERVER: 127.0.0.1#53(127.0.0.1)
;; WHEN: Fri Mar 13 14:42:05 AEDT 2020
;; MSG SIZE  rcvd: 58
> kwt net service -n demo

Services in namespace 'demo'

Name                  Internal DNS                                 Cluster IP    Ports  
netshoot-headless     netshoot-headless.demo.svc.cluster.local     None          80/tcp  
whoami                whoami.demo.svc.cluster.local                10.103.36.93  80/tcp  
whoami-external-name  whoami-external-name.demo.svc.cluster.local  -             -  

3 services

Succeeded

There are no logs in the kwt-net pod

> kubectl -n default get pod -o wide
NAME      READY   STATUS    RESTARTS   AGE   IP            NODE                 NOMINATED NODE   READINESS GATES
kwt-net   1/1     Running   0          24m   10.244.0.13   kind-control-plane   <none>           <none>
> kubectl -n default logs kwt-net
<blank>
@cameronbraid
Contributor Author

I can get dig to work like this:

dig whoami.demo.svc.cluster.local @127.0.0.1 -p 58748
...
whoami.demo.svc.cluster.local. 0 IN	A	10.103.36.93
...
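
Presumably the port here is just the UDP port that kwt printed in its "dns.Server: Started DNS server on ..." line for that particular run, so for the debug run above the equivalent would be something like:

# assumption (not from the original report): dig queries over UDP by default,
# so target the UDP port from the "dns.Server" log line of that run
dig whoami.demo.svc.cluster.local @127.0.0.1 -p 38123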

@cameronbraid
Contributor Author

I think it's the following command that is not finishing:

iptables -w -L -t nat

If I run this myself it goes really slowly, starting from

Chain POSTROUTING (policy ACCEPT)
target     prot opt source               destination         

then every few seconds it outputs ONE line like:

MASQUERADE  all  --  172.21.0.0/16        anywhere            
MASQUERADE  all  --  172-0-0-0.lightspeed.brhmal.sbcglobal.net/16  anywhere            
MASQUERADE  all  --  172.18.0.0/16        anywhere            

If I run iptables -w -L -t nat -n I get the list almost instantly.

Is there a workaround, or does the kwt source need to change to include the numeric (-n) argument for iptables?
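
For what it's worth, the slowdown is almost certainly iptables doing a reverse DNS lookup on every address when -n is omitted (hence hostnames like 172-0-0-0.lightspeed.brhmal.sbcglobal.net in the output), and those lookups hang while DNS is broken. A quick way to confirm:

# without -n, iptables reverse-resolves each source/destination via DNS,
# which stalls while DNS isn't working; -n skips the lookups entirely
time sudo iptables -w -L -t nat      # crawls, one line every few seconds
time sudo iptables -w -L -t nat -n   # returns almost instantly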

@cameronbraid
Contributor Author

That gets me further!

sudo -E (pwd)/kwt net start
03:15:00PM: info: KubeEntryPoint: Creating networking client secret 'kwt-net-ssh-key' in namespace 'default'...
03:15:00PM: info: KubeEntryPoint: Creating networking host secret 'kwt-net-host-key' in namespace 'default'...
03:15:00PM: info: KubeEntryPoint: Creating networking pod 'kwt-net' in namespace 'default'
03:15:00PM: info: KubeEntryPoint: Waiting for networking pod 'kwt-net' in namespace 'default' to start...
03:15:00PM: info: dns.FailoverRecursorPool: Starting with '127.0.0.1:53'
03:15:00PM: info: dns.DomainsMux: Registering cluster.local.->kube-dns
03:15:00PM: info: TCPProxy: Started proxy on 127.0.0.1:37669
03:15:00PM: info: UDPProxy: Started proxy on 127.0.0.1:59131
03:15:00PM: info: dns.Server: Started DNS server on 127.0.0.1:34605 (TCP) and 127.0.0.1:51127 (UDP)
03:15:00PM: info: ForwardingProxy: Forwarding subnets: 10.244.1.5/14, 10.96.0.1/14, 10.103.36.93/14, 10.106.88.227/14, 10.110.20.239/14
03:15:00PM: info: ForwardingProxy: Ready!

Though I still can't ping pods or resolve DNS without targeting the kwt DNS server directly.

@cameronbraid
Contributor Author

I managed to get DNS to work by running the following iptables command:

iptables -w -t nat -A kwt-tcp-39901-output -j REDIRECT --dest 127.0.0.1/32 -p tcp --dport 53 --to-ports 33585

It's the same as the command that kwt ran, but omitting the -m ttl ! --ttl 42 -m owner ! --gid-owner 1 args.

I don't know what they do, so I don't think I can progress any further.
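
For reference, the full rule kwt attempted, reconstructed from the flags quoted above (an approximation, not copied from kwt's source), would be roughly:

# reconstruction, not taken from kwt's source: the extra matches restrict the
# REDIRECT to packets whose TTL is not 42 and whose owning GID is not 1
iptables -w -t nat -A kwt-tcp-39901-output \
  -m ttl ! --ttl 42 -m owner ! --gid-owner 1 \
  -p tcp --dport 53 --dest 127.0.0.1/32 \
  -j REDIRECT --to-ports 33585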

@cameronbraid
Contributor Author

Hrm, well, now it's working, so I'm not sure what's going on there.

Would you accept a PR to add the '-n' arg?

@cameronbraid
Contributor Author

#22

@cppforlife
Contributor

"omitting the -m ttl ! --ttl 42 -m owner ! --gid-owner 1. I don't know what they do, so I don't think I can progress any further"

Hmm, from what I recall this was added to avoid kwt catching traffic coming out of kwt itself.
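
If that's the case, the inverse ttl/owner matches make the redirect skip packets that kwt itself emits (presumably kwt tags its own outbound traffic with TTL 42 or GID 1). A generic way to see exactly what got installed (plain iptables usage, not a kwt command):

# dump the nat table in rule-spec form and filter for kwt's chains,
# so the ttl/owner matches on the REDIRECT rules are visible
sudo iptables -w -t nat -S | grep kwt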
