Make frontend drain traffic time configurable #3934

Merged: 4 commits merged on Feb 10, 2023

Changes from 1 commit
9 changes: 4 additions & 5 deletions service/frontend/service.go
@@ -334,19 +334,18 @@ func (s *Service) Stop() {

// initiate graceful shutdown:
// 1. Fail rpc health check, this will cause client side load balancer to stop forwarding requests to this node
// 2. wait for failure detection time
// 2. wait 10 seconds for failure detection
// 3. stop taking new requests by returning InternalServiceError
// 4. Wait for a second
// 4. Wait for X seconds
// 5. Stop everything forcefully and return

requestDrainTime := util.Min(time.Second, s.config.ShutdownDrainDuration())
failureDetectionTime := util.Max(0, s.config.ShutdownDrainDuration()-requestDrainTime)
requestDrainTime := util.Max(time.Second, s.config.ShutdownDrainDuration())

logger.Info("ShutdownHandler: Updating gRPC health status to ShuttingDown")
s.healthServer.Shutdown()

logger.Info("ShutdownHandler: Waiting for others to discover I am unhealthy")
time.Sleep(failureDetectionTime)
time.Sleep(10 * time.Second)
Member:

Should we have another dynamic config for this? This actually seems like the one that's more dependent on the environment (external health check frequency). The requestDrainTime can be fixed to 5s or 10s, since we use 5s or 10s timeouts on RPCs.

Member:

+1 for having a separate knob for this.

The timeout is from the client, which can be quite long, I think? Or maybe I misunderstood something?

Contributor (author):

Sure, will add a different config.

Member:

My understanding is that during this sleep, any RPCs that end up here will still be handled, but we expect some external system to do a health check, notice the "shutting down" response, and adjust its state to stop sending RPCs here. That timeout is controlled by the load balancer.

The second sleep is when we stop accepting RPCs but continue processing ones that have already come in. That one depends on how long we expect our operations to take, which we have more control over. For long-polls, we can just fail and let them get retried. For everything else, if it takes more than 10s, something is probably going wrong, so it seems okay to fail. But there's no harm in making that configurable too.


s.handler.Stop()
s.operatorHandler.Stop()