
Deeplink does not work after auth redirect due to the use of URL fragment based routing #1360

Closed
jieyu opened this issue May 6, 2020 · 19 comments
Labels
lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale.

Comments

@jieyu

jieyu commented May 6, 2020

We put the Tekton Dashboard behind a reverse proxy that performs an OIDC-based authentication flow. When a new user clicks a link to a pipeline run from the GitHub commit status (e.g., https://<cluster_url>/tekton/#/namespaces/default/pipelineruns/xxx), the proxy performs the OIDC login flow. Once the OIDC login finishes successfully, the browser is redirected to the original URL. However, after the auth redirect, the fragment part of the original deeplink is dropped (i.e., https://<cluster_url>/tekton/). As a result, users are always routed to the main page, not the specific PipelineRun view.

Expected behavior

After auth redirection, the user should be taken to the pipelinerun view

Actual behavior

After auth redirection, the user always sees the main page

Steps to reproduce the problem

Put the dashboard behind a reverse proxy (e.g. Envoy) that performs an OIDC login flow for authentication.

Environment

  • Kubernetes Platform: Kubernetes 1.16.8
  • Tekton Pipelines version: v0.11.1
  • Tekton Triggers version:
  • Tekton Dashboard version: 0.6.1

Additional Info

Using latest Chrome

@AlanGreene
Member

Which browser(s) are you seeing this behaviour in? I've seen problems in the past with some browsers not properly preserving the fragment on redirects, depending on the number of redirects, the number of domains involved, etc.

@a-roberts have you seen anything like this with the oauth-proxy on OpenShift?


Some background on the use of the URL fragment for our routing: there are a number of reasons we use fragment-based routing instead of path-based, including the fact that we can't reliably determine the root path for the dashboard, since the same deployment could be accessed in a number of different ways:

  • kubectl proxy e.g. localhost:8001/api/v1/namespaces/tekton-pipelines/services/tekton-dashboard:http/proxy/
  • kubectl port-forward e.g. localhost:9097/
  • using an ingress / route with optional path e.g. <cluster_url>/tekton
  • ...

Using the URL fragment provides a reliable mechanism to support client side routing in all of these scenarios.
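To illustrate the point (a hypothetical helper, not the Dashboard's actual code): with fragment-based routing the client route lives entirely in the hash, so the same logic works unchanged regardless of what server-side root path precedes it.

```typescript
// Hypothetical sketch: the client-side route is read entirely from the URL
// fragment, so the server root path ('/tekton', the kubectl proxy prefix,
// etc.) never needs to be known by the client.
function routeFromHash(hash: string): string {
  // '#/namespaces/default/pipelineruns/xxx' -> '/namespaces/default/pipelineruns/xxx'
  const route = hash.startsWith('#') ? hash.slice(1) : hash;
  return route === '' ? '/' : route;
}

// The same fragment resolves to the same client route under any root path:
console.log(routeFromHash('#/namespaces/default/pipelineruns/run-1'));
// → '/namespaces/default/pipelineruns/run-1'
```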

@jieyu
Author

jieyu commented May 6, 2020

@AlanGreene I am using Chrome.

Update: I can also reproduce the same issue using Safari.

@AlanGreene
Member

AlanGreene commented May 6, 2020

Can you reproduce this using another application (even just a simple http server with a static page) behind your OIDC proxy in place of the dashboard? This would help to narrow down where the cause lies.

I suspect one of the redirects in your login flow includes its own fragment (including state, token, or similar) which would override the original fragment on the target URL, but it would be good to confirm this so we can understand the issue and determine what options might be available.
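For background on the behaviour being suspected here: when a browser follows an HTTP redirect, it keeps the original URL's fragment only if the Location target doesn't carry a fragment of its own (RFC 7231 §7.1.2). A rough model of that rule (the names and URLs are illustrative, not from the actual login flow):

```typescript
// Rough model of fragment handling on HTTP redirects (RFC 7231 §7.1.2):
// if the Location target has its own fragment, it wins; otherwise the
// fragment from the original request URL is carried over.
function redirectedFragment(originalHash: string, location: string): string {
  // Base URL is a placeholder; only the fragment of the target matters here.
  const target = new URL(location, 'https://example.test/');
  return target.hash !== '' ? target.hash : originalHash;
}

// A login redirect that appends '#state=...' clobbers the deeplink fragment:
console.log(redirectedFragment('#/pipelineruns/xxx', '/login#state=abc')); // → '#state=abc'
// A redirect without a fragment preserves the original one:
console.log(redirectedFragment('#/pipelineruns/xxx', '/login')); // → '#/pipelineruns/xxx'
```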

I'll try to take a closer look at this tomorrow too.

@jieyu
Author

jieyu commented May 6, 2020

I launched a simple nginx server behind the proxy (Ingress) and put it under /nginx. And then, I typed the following URL in the browser:

https://<cluster_url>/nginx/#foobar

Once the redirect finishes, I can see that the #foobar part is dropped:

https://<cluster_url>/nginx/

I'll try to see if I can reproduce this using a different OIDC proxy.

@jieyu
Author

jieyu commented May 7, 2020

xref: elastic/kibana#18392 (comment)

I think this issue in Kibana is related to this problem.

@a-roberts
Member

@a-roberts have you seen anything like this with the oauth-proxy on OpenShift?

No, we've been testing as we go on OpenShift, including with the pull request monitor task for the webhooks extension, since OpenShift 3.11 and we include a PipelineRun link in the comment that works 🤔

@jieyu
Author

jieyu commented May 7, 2020

@a-roberts so the link works for unauthenticated users (meaning it requires a full OIDC login flow)?

@jieyu
Author

jieyu commented May 8, 2020

I was able to reproduce this using another proxy (Ambassador's OAuth2 integration) with Auth0 as the IdP. Here are the steps:

  • Launch a Kubernetes cluster (we're using 1.16.8) on AWS
  • Install the Ambassador ingress:
$ helm install --name ambassador --namespace ambassador datawire/ambassador
  • Create the OAuth2 Filter and FilterPolicy for the auth flow:
---
apiVersion: getambassador.io/v2
kind: Filter
metadata:
  name: auth-filter
  namespace: ambassador
spec:
  OAuth2:
    authorizationURL: https://jie.auth0.com
    extraAuthorizationParameters:
      audience: https://jie.auth0.com/api/v2/
    clientID: 6Qj7QeKPH7jPZs5K6AmrlHXzSVAY0STb
    secret: SOME_SECRET_HERE
    protectedOrigins:
    - origin: https://ae7723ac3d24841f6872adc4085d31b2-664452685.us-west-2.elb.amazonaws.com
---
apiVersion: getambassador.io/v2
kind: FilterPolicy
metadata:
  name: auth-filter-policy
  namespace: ambassador
spec:
  rules:
  - host: ae7723ac3d24841f6872adc4085d31b2-664452685.us-west-2.elb.amazonaws.com
    path: /
    filters:
    - name: auth-filter
      arguments:
        scopes:
        - "openid"
  • Install an Ingress for the Tekton Dashboard like the following (assuming the dashboard is already installed):
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    kubernetes.io/ingress.class: ambassador
  name: tekton-ingress
spec:
  rules:
  - http:
      paths:
      - backend:
          serviceName: tekton-dashboard
          servicePort: 9097
        path: /
  • Then, use a deep link to access a particular page. It will trigger the OIDC login flow. After login, the URL fragment is lost.

(screen recording: tekton-deeplink)

@AlanGreene
Member

AlanGreene commented May 11, 2020

If you can customise the start page for the oauth flow, you could persist the URL fragment in a cookie or sessionStorage (it should still be available in the URL on the start page, before the oauth redirects), then on the callback page read the value back and redirect to the dashboard with the fragment appended.
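A minimal sketch of that approach, assuming you control both pages (the `preAuthFragment` key and the function names are made up for illustration; a Map-backed shim stands in for window.sessionStorage so the sketch is self-contained):

```typescript
// Hypothetical start-page / callback-page logic for persisting the deeplink
// fragment across an oauth flow. 'preAuthFragment' is an arbitrary key.
interface FragmentStore {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// On the oauth start page, before redirecting to the identity provider:
function saveFragment(store: FragmentStore, hash: string): void {
  if (hash) store.setItem('preAuthFragment', hash);
}

// On the callback page, after authentication completes:
function restoredTarget(store: FragmentStore, dashboardUrl: string): string {
  return dashboardUrl + (store.getItem('preAuthFragment') ?? '');
}

// Simulate the round trip with the in-memory shim:
const backing = new Map<string, string>();
const shim: FragmentStore = {
  getItem: (k) => backing.get(k) ?? null,
  setItem: (k, v) => { backing.set(k, v); },
};
saveFragment(shim, '#/namespaces/default/pipelineruns/xxx');
console.log(restoredTarget(shim, 'https://cluster.example/tekton/'));
// → 'https://cluster.example/tekton/#/namespaces/default/pipelineruns/xxx'
```

In a real deployment the shim would be window.sessionStorage and the final step would be a location redirect rather than a return value.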

@jieyu
Author

jieyu commented May 11, 2020

@AlanGreene Hmm, how does that work if I provide a link directly from the GitHub commit status that users will click for their PRs? I guess I'll have those links point to a webapp that performs the fragment persistence and redirect using JavaScript?

This still feels hacky. @AlanGreene any idea how much work it would be to switch away from fragment-based routing? (e.g., to solve the root path issue, we could introduce more flags allowing users to specify the root path, similar to what Grafana does.)

@AlanGreene
Member

AlanGreene commented May 11, 2020

If the user is already authenticated they should be taken directly to the correct page, no changes needed.

If they're not authenticated they should be directed to the start page of your oauth flow (with the fragment still in the URL), this is the same behaviour as today. The choices available from this point depend on your particular setup and what config or other changes you're allowed / willing to make in your environment.

Here's a simplified flow (ignoring the redirects for the actual authentication piece in the middle) describing where you would make changes:
(flow diagram image)

At Note 2, depending on your particular setup, you may be able to customize the behaviour responsible for generating the callback URL, or you may be able to inject a page that retrieves the value you persisted earlier and then redirects to the actual desired location.


Switching the Dashboard from fragment-based routing is not straightforward. The client code must be aware of the root path so that it can perform its routing correctly. The client cannot reliably determine the root path on its own, so this would need to be provided by config. However, bear in mind that the value of the root path needed by the client code for a single Dashboard deployment varies depending on how it is accessed:

  • kubectl proxy, root path: /api/v1/namespaces/tekton-pipelines/services/tekton-dashboard:http/proxy/
  • kubectl port-forward ..., root path: /
  • ingress, root path: depends on the ingress

The back end code would also need to be updated to handle all client routes and serve up the correct client bundle. We would also need to ensure we don't have any conflicts between client routes and APIs.
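As a rough illustration of that back-end change (illustrative only, not the Dashboard's actual routes or a proposal for the implementation): with path-based routing the server has to classify every request, serving the client bundle for anything that isn't an API call, while ensuring client routes never shadow API paths.

```typescript
// Illustrative only: with path-based routing the server must distinguish
// API requests from client routes and serve index.html for the latter.
// The '/api/' prefix here is an assumption for the sake of the sketch.
function serveTarget(path: string): 'api' | 'client-bundle' {
  return path === '/api' || path.startsWith('/api/') ? 'api' : 'client-bundle';
}

console.log(serveTarget('/namespaces/default/pipelineruns/xxx')); // → 'client-bundle'
console.log(serveTarget('/api/v1/pipelineruns'));                 // → 'api'
```

With fragment-based routing none of this is needed, since every client route arrives at the server as the same path.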

We already provide multiple build variants (OpenShift vs. not, read-only vs. read-write), and multiple flags / environment properties to modify behaviour, so any changes that add additional complexity to this need to be handled very carefully so we don't break the experience for existing users or increase the overhead and maintenance cost of all the different permutations that need to be tested.

@eddycharly
Member

Isn't it possible to use a special marker like /-/ in the URL and parse that?
Something like https://domain/-/path ?

@AlanGreene
Member

I'm generally not a fan of introducing and relying on magic values in URLs as they often introduce problems of their own.

Just as a point of comparison, the Kubernetes dashboard also uses hash-based routing for similar reasons and has the same issue with oauth / oidc providers.

@tekton-robot
Contributor

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

@tekton-robot
Contributor

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.
If this issue is safe to close now please do so with /close.

/lifecycle rotten

Send feedback to tektoncd/plumbing.

@tekton-robot
Contributor

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.
If this issue is safe to close now please do so with /close.

/lifecycle stale

Send feedback to tektoncd/plumbing.

@tekton-robot tekton-robot added the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Aug 14, 2020
@tekton-robot tekton-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Aug 14, 2020
@tekton-robot
Contributor

@tekton-robot: Closing this issue.

In response to this:

Rotten issues close after 30d of inactivity.
Reopen the issue with /reopen.
Mark the issue as fresh with /remove-lifecycle rotten.

/close

Send feedback to tektoncd/plumbing.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@bdwyertech

bdwyertech commented Jul 26, 2023

kubernetes/dashboard#5777

/reopen

@AlanGreene
Member

Hi @bdwyertech, thanks for your interest in this issue. I'm not sure kubernetes/dashboard#5777 is relevant here as that fix was specifically for the Kubernetes Dashboard's own login flow. They still have an open issue related to the OIDC login flow for the same reasons outlined above in this issue.

6 participants