Add history size and continue-as-new suggestion #178

Merged · 3 commits · May 21, 2022

Conversation

@dnr (Member) commented Apr 29, 2022

What changed?
Adding two fields to WorkflowTaskStartedEventAttributes that can be exposed by SDKs, to allow long-running workflows to make better choices about when to continue-as-new.

Why?
Long-running workflows eventually need to call continue-as-new to limit history size (both in event count and in bytes); otherwise they will be automatically terminated by the server. Currently there's no guidance provided to workflows about when to do this. Even if a workflow should continue-as-new after, say, 10,000 events, the current event count isn't exposed to workflow code.

The server has history size thresholds (soft and hard limits) in dynamic config and can use those to compute a suggestion that will be applicable in many cases. In other cases, the workflow code might want to know the exact history size (in events or bytes) and make its own determination. This adds two fields: one for the suggestion and one for the raw history size in bytes. (The history length in events is just the event ID of this event.)

See temporalio/temporal#1114 and temporalio/temporal#2726
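
As a minimal illustration (not part of this PR), here is how a consumer walking raw history could read the new suggestion, assuming the standard getters that the generated Go package go.temporal.io/api produces for this proto once the field lands; the event construction below is purely for demonstration:

```go
package main

import (
	"fmt"

	enumspb "go.temporal.io/api/enums/v1"
	historypb "go.temporal.io/api/history/v1"
)

// shouldSuggestCAN inspects a WorkflowTaskStarted event and reports whether the
// server suggested continue-as-new, along with the history length in events
// (which, per the PR description, is just this event's ID).
func shouldSuggestCAN(event *historypb.HistoryEvent) (suggested bool, historyLength int64) {
	if event.GetEventType() != enumspb.EVENT_TYPE_WORKFLOW_TASK_STARTED {
		return false, 0
	}
	attrs := event.GetWorkflowTaskStartedEventAttributes()
	return attrs.GetSuggestContinueAsNew(), event.GetEventId()
}

func main() {
	// Hand-built event for demonstration only.
	ev := &historypb.HistoryEvent{
		EventId:   1234,
		EventType: enumspb.EVENT_TYPE_WORKFLOW_TASK_STARTED,
		Attributes: &historypb.HistoryEvent_WorkflowTaskStartedEventAttributes{
			WorkflowTaskStartedEventAttributes: &historypb.WorkflowTaskStartedEventAttributes{
				SuggestContinueAsNew: true,
			},
		},
	}
	suggested, length := shouldSuggestCAN(ev)
	fmt.Printf("suggested=%v historyLength=%d\n", suggested, length)
}
```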

temporal/api/history/v1/message.proto (outdated, resolved)
@@ -163,6 +163,13 @@ message WorkflowTaskStartedEventAttributes {
string identity = 2;
// TODO: ? Appears unused?
string request_id = 3;
// True if this workflow should continue-as-new soon because its history size (in
// either events or bytes) is getting large.
bool suggest_continue_as_new = 4;
@cretz (Member) commented May 6, 2022

I am not convinced this is valuable. Before merging this, can we discuss what the actual values driving this might be in the server? Like 80% of the max event count or byte size? 50%? 20% (as suggested in the description with 10k)? That's a large swing.

Also, Max has suggested atomic snapshotting at some point, which would obviate the need for this. Are we sure we want it as an event field? Can the SDK derive it, or is its dependence on server config the reason we need it here?

@dnr (Member, Author) replied

We had a quick meeting and came to a rough consensus:

  • yes, the server should send a boolean suggestion (in addition to total size in bytes and count)
  • the suggestion should be based on values from dynamic config so that an operator can tune it
  • the SDK should expose all three values, so users can either write while !shouldContinueAsNew { ... } or use the raw values to make their own decisions (see the sketch below)
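
To make the loop pattern concrete, here is a rough Go SDK sketch of the "while !shouldContinueAsNew" idea. The GetContinueAsNewSuggested accessor and the ProcessNextItem activity are hypothetical names for illustration only; the actual SDK surface would be defined by the follow-up implementation PRs.

```go
package sample

import (
	"time"

	"go.temporal.io/sdk/workflow"
)

// LongRunningWorkflow does one unit of work per iteration and continues-as-new
// once the server suggests that history is getting large (by event count or bytes).
// GetContinueAsNewSuggested is an assumed accessor for the new boolean field.
func LongRunningWorkflow(ctx workflow.Context, state []string) error {
	ao := workflow.ActivityOptions{StartToCloseTimeout: time.Minute}
	ctx = workflow.WithActivityOptions(ctx, ao)

	for !workflow.GetInfo(ctx).GetContinueAsNewSuggested() {
		var item string
		// ProcessNextItem is a hypothetical activity representing one unit of work.
		if err := workflow.ExecuteActivity(ctx, "ProcessNextItem").Get(ctx, &item); err != nil {
			return err
		}
		state = append(state, item)
	}

	// History is getting large: carry accumulated state into a fresh run.
	return workflow.NewContinueAsNewError(ctx, LongRunningWorkflow, state)
}
```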

For defaults, I'll throw out suggestions of 2000 events or 2MB size, but we can discuss this on the implementation PR. Of course, workflows with large payloads will want to use the raw values.

Atomic snapshotting sounds great but I think that's a ways off and this is a quick win. The downsides of the SDK deriving it are that it wouldn't be tunable by an operator in one place, and we'd have to use markers to preserve determinism. This is essentially a fixed marker implemented by the server, if I understand it.

If no other objections, I'll merge this soon.

@cretz (Member) replied


👍 I was hoping we could pin down the defaults here, in case there isn't a reasonable one. I doubt I'll personally ever use this in workflows versus just doing my own checks, but maybe others will.

No objections from me.
