Enable environment overrides for job clusters #658
Conversation
LGTM! To clarify: if I don't specify a job cluster in the resources section but only in the environment overrides, the bundle would still work, correct?
@andrewnester That is correct; the sections are appended before doing this type of processing.
CLI:
* Always resolve .databrickscfg file ([#659](#659)).

Bundles:
* Add internal tag for bundle fields to be skipped from schema ([#636](#636)).
* Log the bundle root configuration file if applicable ([#657](#657)).
* Execute paths without the .tmpl extension as templates ([#654](#654)).
* Enable environment overrides for job clusters ([#658](#658)).
* Merge artifacts and resources block with overrides enabled ([#660](#660)).
* Locked terraform binary version to <= 1.5.5 ([#666](#666)).
* Return better error messages for invalid JSON schema types in templates ([#661](#661)).
* Use custom prompter for bundle template inputs ([#663](#663)).
* Add map and pair helper functions for bundle templates ([#665](#665)).
* Correct name for force acquire deploy flag ([#656](#656)).
* Confirm that override with a zero value doesn't work ([#669](#669)).

Internal:
* Consolidate functions in libs/git ([#652](#652)).
* Upgraded Go version to 1.21 ([#664](#664)).
@pietern Do you think it would be possible to apply the same logic to the tasks of a job?

```yaml
resources:
  jobs:
    foo:
      name: job
      tasks:
        - task_key: key
          new_cluster:
            spark_version: 13.3.x-scala2.12

environments:
  development:
    resources:
      jobs:
        foo:
          tasks:
            - task_key: key
              new_cluster:
                node_type_id: i3.xlarge
                num_workers: 1
  staging:
    resources:
      jobs:
        foo:
          tasks:
            - task_key: key
              new_cluster:
                node_type_id: i3.2xlarge
                num_workers: 4
```

I have never coded in Go, but if you say it would be exactly the same as your changes (only applied to the task element), I could try to do it.
## Changes

Follow-up for #658. A job definition with multiple tasks using the same task key is always invalid, so we can merge definitions with the same key into one. This is consistent with how environment overrides are applied: the override ends up in the original task slice, which gives us a deterministic way to merge them.

## Tests

Added unit tests.
@pietern, yes, I saw it the other day. Amazing, looking forward to trying it in the upcoming releases 💯
Changes
While job clusters are stored as a slice, we can identify each one by its job cluster key. A job definition with multiple job clusters sharing the same key is always invalid, so we can merge definitions with the same key into one. This is compatible with how environment overrides are applied; merging a slice means appending to it. The override ends up in the job cluster slice of the original, which gives us a deterministic way to merge them.
Since the alternative is an invalid configuration, this doesn't change behavior.
Tests
New test coverage.
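The merge-by-key behavior described above can be sketched in Go roughly as follows. This is a minimal illustration, not the CLI's actual implementation: the `JobCluster` type and `mergeJobClusters` function are hypothetical stand-ins. Note how zero-value fields in an override are skipped, matching the "override with a zero value doesn't work" behavior confirmed in #669.

```go
package main

import "fmt"

// JobCluster is a hypothetical stand-in for the bundle's job cluster type.
type JobCluster struct {
	JobClusterKey string
	SparkVersion  string
	NodeTypeID    string
	NumWorkers    int
}

// mergeJobClusters collapses slice entries that share a job cluster key.
// Environment overrides are appended to the slice first, so a later entry
// with the same key merges its non-zero fields into the earlier one.
func mergeJobClusters(clusters []JobCluster) []JobCluster {
	out := []JobCluster{}
	index := map[string]int{} // key -> position in out
	for _, c := range clusters {
		i, seen := index[c.JobClusterKey]
		if !seen {
			index[c.JobClusterKey] = len(out)
			out = append(out, c)
			continue
		}
		// Copy only non-zero fields from the override into the original.
		if c.SparkVersion != "" {
			out[i].SparkVersion = c.SparkVersion
		}
		if c.NodeTypeID != "" {
			out[i].NodeTypeID = c.NodeTypeID
		}
		if c.NumWorkers != 0 {
			out[i].NumWorkers = c.NumWorkers
		}
	}
	return out
}

func main() {
	merged := mergeJobClusters([]JobCluster{
		{JobClusterKey: "key", SparkVersion: "13.3.x-scala2.12"},
		{JobClusterKey: "key", NodeTypeID: "i3.xlarge", NumWorkers: 1},
	})
	fmt.Println(len(merged), merged[0].SparkVersion, merged[0].NodeTypeID, merged[0].NumWorkers)
	// 1 13.3.x-scala2.12 i3.xlarge 1
}
```

The same approach carries over to job tasks keyed by `task_key`, as discussed in the follow-up to this PR.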