
[TiDB Dashboard] error.api.metrics.prometheus_not_found #2639

Closed
olegchorny opened this issue Jun 7, 2020 · 3 comments
Labels
status/WIP (Issue/PR is being worked on)

Comments

@olegchorny

Bug Report

What version of Kubernetes are you using?
1.17.3

What version of TiDB Operator are you using?
1.1.0

What storage classes exist in the Kubernetes cluster and what are used for PD/TiKV pods?
standard-csi

What's the status of the TiDB cluster pods?

first-discovery-65d4d4d948-j9jbk                               1/1     Running   0          12m
first-monitor-6db7ff4986-r26vl                                 3/3     Running   0          98m
first-pd-0                                                     1/1     Running   0          98m
first-tidb-0                                                   2/2     Running   0          97m
first-tikv-0                                                   1/1     Running   0          98m

What did you do?

helm install tidb pingcap/tidb-operator --namespace=tidb-admin --version=v1.1.0 -f ./tidb-operator/values-tidb-operator.yaml && kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator
---
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: first
spec:
  clusters:
  - name: first
  prometheus:
    baseImage: prom/prometheus
    version: v2.11.1
    service:
      type: NodePort
  grafana:
    baseImage: grafana/grafana
    version: 6.0.1
  initializer:
    baseImage: pingcap/tidb-monitor-initializer
    version: v3.1.0
  reloader:
    baseImage: pingcap/tidb-monitor-reloader
    version: v1.0.1
  imagePullPolicy: IfNotPresent
---
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: first
spec:
  version: v4.0.0
  timezone: UTC
  pvReclaimPolicy: Delete
  pd:
    baseImage: pingcap/pd
    replicas: 1
    requests:
      storage: "1Gi"
    config: {}
  tikv:
    baseImage: pingcap/tikv
    replicas: 1
    requests:
      storage: "4Gi"
    config: {}
  tidb:
    baseImage: pingcap/tidb
    replicas: 1
    service:
      type: ClusterIP
    config: {}
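
(Illustrative note, not part of the original report: a minimal way to confirm that the operator picked up both custom resources after applying the manifests above. The default namespace is an assumption; the report does not state which namespace the manifests were applied to.)

kubectl get tidbcluster first -n default
kubectl get tidbmonitor first -n default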

What did you expect to see?
QPS and Latency dashboards on the Overview page instead of the error.api.metrics.prometheus_not_found error.

What did you see instead?
(screenshot: the Overview page shows the error.api.metrics.prometheus_not_found error instead of the metrics)

According to #2331, this should already work. I even tried the nightly builds, but still couldn't get it working. I probably need to provide the Prometheus endpoint URL somehow, but I can't find such an option, and my API calls are not successful:

full_text: "error.api.metrics.prometheus_not_found
 at github.com/pingcap-incubator/tidb-dashboard/pkg/apiserver/metrics.(*Service).queryHandler()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/pingcap-incubator/tidb-dashboard@v0.0.0-20200604095604-967424d77384/pkg/apiserver/metrics/metrics.go:92
 at github.com/gin-gonic/gin.(*Context).Next()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/gin-gonic/gin@v1.5.0/context.go:147
 at github.com/appleboy/gin-jwt/v2.(*GinJWTMiddleware).middlewareImpl()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/appleboy/gin-jwt/v2@v2.6.3/auth_jwt.go:393
 at github.com/appleboy/gin-jwt/v2.(*GinJWTMiddleware).MiddlewareFunc.func1()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/appleboy/gin-jwt/v2@v2.6.3/auth_jwt.go:355
 at github.com/gin-gonic/gin.(*Context).Next()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/gin-gonic/gin@v1.5.0/context.go:147
 at github.com/pingcap-incubator/tidb-dashboard/pkg/apiserver/utils.MWHandleErrors.func1()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/pingcap-incubator/tidb-dashboard@v0.0.0-20200604095604-967424d77384/pkg/apiserver/utils/error.go:46
 at github.com/gin-gonic/gin.(*Context).Next()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/gin-gonic/gin@v1.5.0/context.go:147
 at github.com/gin-contrib/gzip.Gzip.func2()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/gin-contrib/gzip@v0.0.1/gzip.go:47
 at github.com/gin-gonic/gin.(*Context).Next()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/gin-gonic/gin@v1.5.0/context.go:147
 at github.com/gin-gonic/gin.RecoveryWithWriter.func1()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/gin-gonic/gin@v1.5.0/recovery.go:83
 at github.com/gin-gonic/gin.(*Context).Next()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/gin-gonic/gin@v1.5.0/context.go:147
 at github.com/gin-gonic/gin.(*Engine).handleHTTPRequest()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/gin-gonic/gin@v1.5.0/gin.go:403
 at github.com/gin-gonic/gin.(*Engine).ServeHTTP()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/gin-gonic/gin@v1.5.0/gin.go:364
 at github.com/pingcap-incubator/tidb-dashboard/pkg/apiserver.(*Service).handler()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/pingcap-incubator/tidb-dashboard@v0.0.0-20200604095604-967424d77384/pkg/apiserver/apiserver.go:184
 at net/http.HandlerFunc.ServeHTTP()
	/usr/local/go/src/net/http/server.go:2007
 at github.com/pingcap-incubator/tidb-dashboard/pkg/utils.(*ServiceStatus).NewStatusAwareHandler.func1()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/github.com/pingcap-incubator/tidb-dashboard@v0.0.0-20200604095604-967424d77384/pkg/utils/service_status.go:79
 at net/http.HandlerFunc.ServeHTTP()
	/usr/local/go/src/net/http/server.go:2007
 at net/http.(*ServeMux).ServeHTTP()
	/usr/local/go/src/net/http/server.go:2387
 at go.etcd.io/etcd/embed.(*accessController).ServeHTTP()
	/home/jenkins/agent/workspace/uild_pd_multi_branch_release-4.0/go/pkg/mod/go.etcd.io/etcd@v0.5.0-alpha.5.0.20191023171146-3cf2f69b5738/embed/serve.go:359
 at net/http.serverHandler.ServeHTTP()
	/usr/local/go/src/net/http/server.go:2802
 at net/http.(*conn).serve()
	/usr/local/go/src/net/http/server.go:1890
 at runtime.goexit()
	/usr/local/go/src/runtime/asm_amd64.s:1357"
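
(Illustrative note, not part of the original report: one way to check whether the Prometheus instance created by the TidbMonitor is reachable at all is to port-forward its service and query it directly. The service name first-prometheus and the default namespace are assumptions based on the TidbMonitor name above, not something confirmed in this issue.)

kubectl port-forward -n default svc/first-prometheus 9090:9090
# in another shell, query Prometheus' HTTP API directly:
curl 'http://127.0.0.1:9090/api/v1/query?query=up'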
@olegchorny changed the title from [TiDB Dasboard] error.api.metrics.prometheus_not_found to [TiDB Dashboard] error.api.metrics.prometheus_not_found on Jun 7, 2020
@DanielZhangQD
Contributor

@Yisaer Please help check this issue, thanks!

@cofyc added the status/WIP (Issue/PR is being worked on) label on Jun 8, 2020
@Yisaer
Contributor

Yisaer commented Jun 8, 2020

@olegchorny Hi, could you post the output of kubectl get tidbmonitor first -n <namespace> -oyaml?
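
(For example, with the manifests above applied to the default namespace, the requested command would be as follows; the namespace value is an assumption:)

kubectl get tidbmonitor first -n default -oyaml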

@Yisaer
Contributor

Yisaer commented Jun 8, 2020

It seems the dashboard metrics capability was not carried over to the release-1.1 branch due to the sre-bot failure. That is why 1.1.0 cannot show the dashboard metrics while master can. We will release it in the next minor version.

ref #2483
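
(Illustrative sketch, not confirmed in this thread: once the follow-up release is published, upgrading the operator chart in place should pick up the fix. The version placeholder below is not an announced release number.)

helm repo update
helm upgrade tidb pingcap/tidb-operator --namespace=tidb-admin --version=<next-release> -f ./tidb-operator/values-tidb-operator.yaml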

@Yisaer closed this as completed on Jun 8, 2020