Enable Continuous auto kill #14831

Merged: 17 commits into apache:master from continuous-auto-kill on Aug 23, 2023

Conversation

@zachjsh (Contributor) commented Aug 15, 2023

Description

This change enables the KillUnusedSegments coordinator duty to be scheduled continuously. The following issues previously prevented this or made it difficult:

  1. If scheduled at a fast enough rate, the duty would find the same intervals to kill for the same datasources while kill tasks submitted for those datasources and intervals were already underway, thus wasting task slots on duplicated work.

  2. The task resources used by auto kill were previously unbounded. Each duty run period, if unused
    segments were found for any datasource, a kill task would be submitted to kill them.

This PR solves both of these issues:

  1. The duty keeps track of the end time of the last interval found when killing unused segments for each datasource, in an in-memory map. If present, this end time is used as the lower bound on the start time when searching for unused intervals for that datasource. On each duty run, we remove from this map any datasource keys that no longer match datasources in the system or in the whitelist, and we also remove a datasource's entry if no unused segments are found for it, which happens when we fail to find an interval that includes unused segments. Removing the datasource entry from the map allows searching for unused segments in that datasource from the beginning of time once again (a minimal sketch of this bookkeeping follows this list).

  2. The unbounded task resource usage can be mitigated with the coordinator dynamic config added as part of ba957a9.
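
Below is a minimal Java sketch of the per-datasource bookkeeping described in item 1. The class, field, and method names are illustrative only and are not the actual KillUnusedSegments implementation:

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.joda.time.DateTime;
import org.joda.time.Interval;

// Hypothetical helper that tracks the end of the last killed interval per datasource.
class LastKillIntervalTracker
{
  // datasource name -> end time of the last interval submitted for killing
  private final Map<String, DateTime> datasourceToLastKillIntervalEnd = new HashMap<>();

  // Called once per duty run: drop datasources that no longer exist or are not in
  // the whitelist, so stale entries do not linger in the map.
  void pruneStaleEntries(Set<String> datasourcesEligibleForKill)
  {
    datasourceToLastKillIntervalEnd.keySet().retainAll(datasourcesEligibleForKill);
  }

  // Lower bound to use when searching for unused intervals of this datasource;
  // null means "search from the beginning of time".
  DateTime getSearchStartTime(String dataSource)
  {
    return datasourceToLastKillIntervalEnd.get(dataSource);
  }

  // Called after an unused interval is found and a kill task is submitted for it.
  void recordKilledInterval(String dataSource, Interval killedInterval)
  {
    datasourceToLastKillIntervalEnd.put(dataSource, killedInterval.getEnd());
  }

  // Called when no unused interval is found for the datasource; removing the entry
  // allows the next run to search this datasource from the beginning of time again.
  void resetDatasource(String dataSource)
  {
    datasourceToLastKillIntervalEnd.remove(dataSource);
  }
}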

Operators can configure continuous auto kill by providing coordinator runtime properties similar to the following:

druid.coordinator.period.indexingPeriod=PT60S
druid.coordinator.kill.period=PT60S

Operators should also provide sensible limits on kill task usage via the coordinator dynamic configuration.
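
For example, a Coordinator dynamic config payload that caps kill task usage might look like the following. This is a sketch: it assumes the fields added in ba957a9 are named killTaskSlotRatio and maxKillTaskSlots, and that the payload is POSTed to the Coordinator's /druid/coordinator/v1/config endpoint.

{
  "killTaskSlotRatio": 0.1,
  "maxKillTaskSlots": 5
}

Depending on the Druid version, posting may overwrite other dynamic settings, so it is safest to fetch the current dynamic config, modify it, and post the full object back.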

Release Note

  • The value of druid.coordinator.kill.period can now be as low as druid.coordinator.period.indexingPeriod. It was earlier required to be strictly greater than the indexingPeriod.
  • The leader Coordinator now keeps track of the last submitted kill task for a given datasource to avoid submitting duplicate kill tasks.

This PR has:

  • been self-reviewed.
  • added documentation for new or modified features or behaviors.
  • a release note entry in the PR description.
  • added Javadocs for most classes and all non-trivial methods. Linked related entities via Javadoc links.
  • added or updated version, license, or notice information in licenses.yaml
  • added comments explaining the "why" and the intent of the code wherever it would not be obvious for an unfamiliar reader.
  • added unit tests or modified existing tests to cover new code paths, ensuring the threshold for code coverage is met.
  • added integration tests.
  • been tested in a test Druid cluster.

@maytasm (Contributor) commented Aug 16, 2023

Personally, I think #12599 is a must-have before enabling continuous auto kill. Otherwise, one tiny mistake in your loadRule can result in data being permanently deleted from S3 right away.

@suneet-s (Contributor) left a comment

I have not had a chance to look through the tests yet, but this is some early feedback.

@@ -853,7 +853,7 @@ These Coordinator static configurations can be defined in the `coordinator/runti
|`druid.coordinator.load.timeout`|The timeout duration for when the Coordinator assigns a segment to a Historical process.|PT15M|
|`druid.coordinator.kill.pendingSegments.on`|Boolean flag for whether or not the Coordinator clean up old entries in the `pendingSegments` table of metadata store. If set to true, Coordinator will check the created time of most recently complete task. If it doesn't exist, it finds the created time of the earliest running/pending/waiting tasks. Once the created time is found, then for all dataSources not in the `killPendingSegmentsSkipList` (see [Dynamic configuration](#dynamic-configuration)), Coordinator will ask the Overlord to clean up the entries 1 day or more older than the found created time in the `pendingSegments` table. This will be done periodically based on `druid.coordinator.period.indexingPeriod` specified.|true|
|`druid.coordinator.kill.on`|Boolean flag for whether or not the Coordinator should submit kill task for unused segments, that is, permanently delete them from metadata store and deep storage. If set to true, then for all whitelisted dataSources (or optionally all), Coordinator will submit tasks periodically based on `period` specified. A whitelist can be set via dynamic configuration `killDataSourceWhitelist` described later.<br /><br />When `druid.coordinator.kill.on` is true, segments are eligible for permanent deletion once their data intervals are older than `druid.coordinator.kill.durationToRetain` relative to the current time. If a segment's data interval is older than this threshold at the time it is marked unused, it is eligible for permanent deletion immediately after being marked unused.|false|
|`druid.coordinator.kill.period`|How often to send kill tasks to the indexing service. Value must be greater than `druid.coordinator.period.indexingPeriod`. Only applies if kill is turned on.|P1D (1 Day)|
|`druid.coordinator.kill.period`| How often to send kill tasks to the indexing service. Value must be greater than `druid.coordinator.period.indexingPeriod` if set. Only applies if kill is turned on. | NULL (indexing period is used) |
Contributor

This change should be release noted. Can you update the PR description to make sure it gets picked up in the release notes please.

Contributor Author

Thanks, done.

return new KillUnusedSegments(segmentsMetadataManager, overlordClient, config);
} else {
if (killUnusedSegmentsDutyFromCustomGroups.size() > 1) {
log.warn(
Contributor

Can you throw an exception here instead? This is a misconfiguration, and it would be better to fail startup so the operator can fix the config issue.

In the exception message can you add the names of the custom groups where the duty is configured to make it easy for the operator to fix the config issue.

Contributor Author

done.

Comment on lines +74 to +79

/**
* Used to keep track of the last interval end time that was killed for each
* datasource.
*/
private final Map<String, DateTime> datasourceToLastKillIntervalEnd;
Contributor

I'm not sure this approach is the best one. If the leader changes, the in-memory map is essentially wiped out when a new leader comes online, so it is possible to re-schedule an interval that is already in the process of being killed. So, two questions on this:

  1. What is the impact if the same interval is scheduled to be killed twice? Can we test / validate that nothing bad happens? If the task just waits on a lock and then succeeds as there is no data to delete, I think that is reasonable.
  2. Is it possible to get the list of intervals that are being killed from the kill task payloads? Would that put a lot of stress on the master nodes if we polled the task payloads?

Contributor Author

Testing revealed that 1. is what happens, so I thought it was not the end of the world that we still have the potential of submitting duplicate tasks.

@@ -144,6 +144,12 @@ Optional<Iterable<DataSegment>> iterateAllUsedNonOvershadowedSegmentsForDatasour
*/
List<Interval> getUnusedSegmentIntervals(String dataSource, DateTime maxEndTime, int limit);
Contributor

Is there a reason not to get rid of this function from the interface? It looks like there are no callers other than the KillUnusedSegmentsDuty.

Contributor Author

done

@suneet-s (Contributor) left a comment

> Personally, I think #12599 is a must-have before enabling continuous auto kill. Otherwise, one tiny mistake in your loadRule can result in data being permanently deleted from S3 right away.

I think this patch is actually just providing the ability to schedule kill tasks continuously. By default, kill tasks run every indexing period, which is every 30 minutes. I agree #12599 is good to have, but I do not think it needs to block this change. Hopefully, both will be merged soon enough that the order of merging these changes won't matter too much.

@kfaraz (Contributor) left a comment

As discussed with @zachjsh, I am not sure I see the appeal of making KillUnusedSegments a custom duty. The only motivation seems to be the ability to schedule kill tasks with very low periods.

My primary argument against the current approach is the concept of having the kill duty be continuous while also needing to specify a period for it. If we want it to be truly continuous, users should only be required to enable/disable kill and not have to specify any period for it.

As a compromise, we could just tie kill tasks to the indexingPeriod (typically 30 to 60 min) for the time being, as that is frequent enough for the purposes of cleaning up unused segments. In the future, if we want even more frequent kill tasks, we could simply tie them to the coordinator period itself (typically 1 min), which might be a little too aggressive, but it's a thought.

cc: @suneet-s

final long currentTimeMillis = System.currentTimeMillis();
if (lastKillTime + period > currentTimeMillis) {
if (null != period && (lastKillTime + period > currentTimeMillis)) {
Contributor

I think we have already validated that period is not null in the constructor.

Contributor Author

removed

Preconditions.checkArgument(
this.period > config.getCoordinatorIndexingPeriod().getMillis(),
null == this.period || this.period > config.getCoordinatorIndexingPeriod().getMillis(),
Contributor

Null handling and validation can be done in the DruidCoordinatorConfig itself.

Contributor Author

Now checking against the dutyPeriod, which is not always the same as the indexing period. Consequently, I think this needs to be done here.

@ektravel (Contributor) left a comment

Reviewed changes to docs/configuration/index.md.

@vtlim (Member) commented Aug 21, 2023

Docs LGTM

@kfaraz (Contributor) left a comment

Thanks for the changes, @zachjsh! The changes look good to me.

DateTime maxEndTime,
int limit,
DateTime maxUsedFlagLastUpdatedTime
);
DateTime maxUsedFlagLastUpdatedTime);
Contributor

Nit: style

Suggested change
DateTime maxUsedFlagLastUpdatedTime);
DateTime maxUsedFlagLastUpdatedTime
);

@zachjsh zachjsh merged commit 0c76df1 into apache:master Aug 23, 2023
74 checks passed
@zachjsh zachjsh deleted the continuous-auto-kill branch August 23, 2023 13:23
@LakshSingla LakshSingla added this to the 28.0 milestone Oct 12, 2023