
Databricks provider not found when redeploying #1286

Closed
jeduardo90 opened this issue Mar 14, 2024 · 14 comments · Fixed by #1292
Labels
DABs DABs related issues Question Further information is requested

Comments

@jeduardo90

When I try to deploy Asset Bundles, I am able to do it successfully for the first time.

However, when I try to do it for the second time without the .bundle folder locally, I get the following error messages:

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable

When I change the provider in the tfstate file in the target workspace to registry.terraform.io/providers/databricks/databricks, I am able to deploy successfully, but instead of updating the jobs I have created on the first deploy, it creates new ones with exactly the same names.

How can I deploy the same bundle multiple times without having the .bundle folder (that gets generated on deployment runtime)?

@jeduardo90 jeduardo90 changed the title Databricks provider not found when deploying Databricks provider not found when redeploying Mar 14, 2024
@andrewnester andrewnester added Question Further information is requested DABs DABs related issues labels Mar 14, 2024
@andrewnester
Contributor

Hi @jeduardo90! It's not entirely clear what issue you're running into, hence a few clarifying questions:

1. I believe you're referring to the `.databricks` folder, correct?

2. > I am able to do it successfully for the first time.

   You do this locally on your machine, without the `.databricks` folder present, correct?

3. > when I try to do it for the second time without the .bundle folder locally

   You do this on the same machine as the first time, correct?

4. > registry.terraform.io/databricks/databricks: failed to instantiate provider
   > "registry.terraform.io/databricks/databricks" to obtain schema: unavailable

   This error might mean you have a stale Terraform cache or firewall issues. Could you double-check?

5. > When I change the provider in the tfstate file in the target workspace

   The Terraform state is an internal detail of DABs and subject to change; any manual changes to these files can lead to unexpected behaviour, so we recommend not changing this state manually.

6. Which version of the CLI are you using?

@jeduardo90
Author

jeduardo90 commented Mar 14, 2024

Hi @andrewnester, thanks for the quick reply!

> I believe you're referring to the `.databricks` folder, correct?

Yes, you are correct: the `.databricks` folder.

> You do this locally on your machine, without the `.databricks` folder present, correct?

I do this both locally and on an Azure DevOps agent, with the same results in both cases. The only time I am able to redeploy successfully is when the `.databricks` folder from the first deployment is present in the same folder as the asset bundle configuration. I can't have the `.databricks` folder generated from the dev deployment in the repo when I'm deploying to different environment workspaces.

> This error might mean you have a stale Terraform cache or firewall issues. Could you double-check?

I have no firewall issues, but I'm not sure about a stale Terraform cache. Where is this cache stored?

> The Terraform state is an internal detail of DABs and subject to change; any manual changes to these files can lead to unexpected behaviour, so we recommend not changing this state manually.

You're right, it was just a test on my side. When I try to access registry.terraform.io/databricks/databricks, it returns a 404, and that's what puzzled me.

> Which version of the CLI are you using?

I'm using 0.213.0 on my local machine, but the latest one on the Azure DevOps agent.

@andrewnester
Contributor

@jeduardo90

> When I try to access registry.terraform.io/databricks/databricks, it returns a 404

This is expected.

Just to double-check that I understand the issue correctly:

  1. You created DABs project on your local machine
  2. You run databricks bundle deploy from this local machine and it succeeds
  3. You run databricks bundle deploy again from the same machine and it fails?

Is this correct? If not, can you clarify what's different? Is there anything else happening between step 2 and 3?

Thanks!

@jeduardo90
Author

jeduardo90 commented Mar 14, 2024

@andrewnester

> You created DABs project on your local machine

Yes.

> You run databricks bundle deploy from this local machine and it succeeds

Also yes.

Then I remove the .databricks folder that is generated by the first deployment to simulate the CI/CD process. I assume, based on the initial bundle configuration, that the .databricks folder is not meant to be source-controlled.

And then I try to deploy again, and I get the above-mentioned error:

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable

@pietern
Contributor

pietern commented Mar 15, 2024

You're right that the .databricks directory must not be committed. It's a local cache and can safely be removed between deployments; everything should continue to work just the same. That's not the case for you, though...

Do you have a ~/.terraformrc file on your local system perhaps? That could be interfering.

Also, could you try upgrading your local copy of the CLI to the latest and confirm that that doesn't solve the issue?

@MAJVeld

MAJVeld commented Mar 18, 2024

I can confirm this issue in the following situation / setup using Databricks CLI version 0.215.0:

We have set up a simple Azure DevOps pipeline to deploy a few workflows using a Databricks Asset Bundle. Two target workspaces are defined (test and prod), present as separate stages in the Azure DevOps pipeline. The prod workspace can only be deployed to after the test stage has succeeded.

In the pipeline, the git repository (without the .databricks folder) is checked out and a databricks bundle deploy -t <target_environment> command is issued after authenticating the Azure DevOps service principal against the Databricks workspace.

The first deployment to the test workspace succeeds without any issues, confirming that all communication between Azure DevOps and the target Databricks workspace works as expected. The expected Databricks workflows are created and can be run.

When the Azure DevOps pipeline is triggered again, a new Linux-based build agent is used, the git repo is checked out again, and the deployment of the Databricks Asset Bundle fails with the error message mentioned above.

Issuing a deployment of the Databricks Asset Bundle directly from our (Windows) developer machine, where the .databricks folder stays present, works without any issues when run repeatedly. Removing the .databricks folder results in the same error message.

We have no .terraformrc file in use or present on our local machine or build agents, so I do not expect that to be the issue.

Addition:
When the .databricks folder is removed from my local machine and `databricks bundle deploy -t <target_environment>` is run again, the .databricks folder is recreated even though the command fails, and the .tfstate file for the environment is present again. The contents of the recreated .tfstate on my local machine and of the .tfstate stored in the Databricks workspace as part of the bundle are identical. I would expect any further DAB deployments to succeed again, but this is not the case.

Renaming the .databricks folder to ._databricks on my local machine sheds some more light on this issue: the .terraform folder is missing inside the .databricks\bundle\<env> folder. That folder contains a cached version of the Databricks Terraform provider.

(screenshot: contents of the recreated `.databricks\bundle\<env>` folder, with the `.terraform` provider cache missing)

The /providers/ fragment seems to be missing from the registered provider in the stored tfstate. As a result, the provider can no longer be resolved and the deployment fails. I believe the /providers/ part must be present in the reference to the Terraform provider (https://registry.terraform.io/providers/databricks/databricks/1.37.0).

Feel free to set up a direct call with me should you have any further questions.

@jeduardo90
Author

Hi @pietern,
Yes, I can confirm that I don't have a ~/.terraformrc file, either on my local machine or on the Azure DevOps agent.

I have upgraded the Databricks CLI locally and I'm still facing the same issue.

My issue is exactly the same as @MAJVeld described.

This is blocking us from having a robust CI/CD process using Databricks Asset Bundles.

Should a customer support incident be raised with Databricks?

Thanks and regards

@andrewnester
Contributor

Hi @jeduardo90 @MAJVeld ! Thanks a lot for the details.

> The /providers/ fragment seems to be missing from the registered provider in the stored tfstate.

I don't think it should be there; this is the expected way of referencing providers (even though it does not resolve to a URL), and this line is generated by Terraform itself. If it were incorrect, it would likely fail on the first deploy as well.

Other than that, could you check whether you have the Databricks Terraform provider cached in `%APPDATA%\terraform.d\plugins`?

Also, could you run `databricks bundle deploy --log-level TRACE` for the failed deploy and provide the output here?

Thanks!
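As a side note for anyone puzzled by the 404: the address stored in the state is a provider *source address* (`hostname/namespace/type`), not a URL. The registry's web page for a provider inserts a `/providers/` path segment, so the two strings are not the same. A rough sketch of the mapping (hypothetical helper, not part of the CLI or Terraform):

```python
# Hypothetical helper illustrating the difference between a Terraform provider
# source address and the registry's web URL. Not part of any real tool.
def registry_web_url(source_address: str, version: str) -> str:
    """Map a provider source address (hostname/namespace/type) to the
    corresponding registry web page, which inserts a /providers/ segment."""
    hostname, namespace, provider_type = source_address.split("/")
    return f"https://{hostname}/providers/{namespace}/{provider_type}/{version}"

print(registry_web_url("registry.terraform.io/databricks/databricks", "1.37.0"))
# → https://registry.terraform.io/providers/databricks/databricks/1.37.0
```

So pasting the source address into a browser returning 404 is expected, and it does not indicate a problem with the state file.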

@MAJVeld

MAJVeld commented Mar 18, 2024

@andrewnester the %APPDATA%\terraform.d\ folder on my machine has no plugins subfolder. The only contents are two files named checkpoint_cache and checkpoint_signature.

Here are the trace details as requested. I sanitized some of the contents.

databricks bundle deploy -t test --log-level TRACE > out.txt
15:30:29  INFO start pid=9376 version=0.214.1 args="c:\\Utils\\databricks_cli\\databricks.exe, bundle, deploy, -t, test, --log-level, TRACE"
15:30:29 DEBUG Loading bundle configuration from: C:\repos\cm-dwh-databricks-jobs\src\pbix_ingest\databricks.yml pid=9376
15:30:29 DEBUG Apply pid=9376 mutator=seq
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=scripts.preinit
15:30:29 DEBUG No script defined for preinit, skipping pid=9376 mutator=seq mutator=scripts.preinit
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=ProcessRootIncludes
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=ProcessRootIncludes mutator=seq
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=ProcessRootIncludes mutator=seq mutator=ProcessInclude(resources\pbix_daily_workflow.yml)
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=ProcessRootIncludes mutator=seq mutator=ProcessInclude(resources\pbix_maintenance_workflow.yml)
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=ProcessRootIncludes mutator=seq mutator=ProcessInclude(resources\pbix_setup_workflow.yml)
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=ProcessRootIncludes mutator=seq mutator=ProcessInclude(resources\pbix_transformation_pipeline.yml)
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=EnvironmentsToTargets
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=InitializeVariables
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=DefineDefaultTarget(default)
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=LoadGitDetails
15:30:29 DEBUG Apply pid=9376 mutator=SelectTarget(test)
15:30:29 DEBUG Apply pid=9376 mutator=<func>
15:30:29 DEBUG Apply pid=9376 mutator=<func>
15:30:29 DEBUG Apply pid=9376 mutator=seq
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=initialize
15:30:29  INFO Phase: initialize pid=9376 mutator=seq mutator=initialize
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=RewriteSyncPaths
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=MergeJobClusters
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=MergeJobTasks
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=MergePipelineClusters
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=InitializeWorkspaceClient
15:30:29 TRACE Loading config via environment pid=9376 sdk=true
15:30:29 TRACE Loading config via resolve-profile-from-host pid=9376 sdk=true
15:30:29 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=PopulateCurrentUser
15:30:29 TRACE Loading config via environment pid=9376 sdk=true
15:30:29 TRACE Loading config via resolve-profile-from-host pid=9376 sdk=true
15:30:29 TRACE Attempting to configure auth: pat pid=9376 sdk=true
15:30:29 TRACE Attempting to configure auth: basic pid=9376 sdk=true
15:30:29 TRACE Attempting to configure auth: oauth-m2m pid=9376 sdk=true
15:30:29 TRACE Attempting to configure auth: databricks-cli pid=9376 sdk=true
15:30:29 TRACE Attempting to configure auth: metadata-service pid=9376 sdk=true
15:30:29 TRACE Attempting to configure auth: azure-msi pid=9376 sdk=true
15:30:29 TRACE Attempting to configure auth: azure-client-secret pid=9376 sdk=true
15:30:29 TRACE Attempting to configure auth: azure-cli pid=9376 sdk=true
15:30:30  INFO Refreshed OAuth token for 2ff814a6-3304-4ab8-85cb-cd0e6f879c1d from Azure CLI, which expires on 2024-03-18 16:36:51.000000 pid=9376 sdk=true
15:30:31  INFO Refreshed OAuth token for https://management.core.windows.net/ from Azure CLI, which expires on 2024-03-18 16:52:40.000000 pid=9376 sdk=true
15:30:31  INFO Using Azure CLI authentication with AAD tokens pid=9376 sdk=true
15:30:31 DEBUG GET /api/2.0/preview/scim/v2/Me
< HTTP/2.0 200 OK
< {
<   "active": true,
<   "displayName": "Sanitized Username",
<   "emails": [
<     {
<       "primary": true,
<       "type": "work",
<       "value": "sanitized.user@email.com"
<     }
<   ],
<   "externalId": "d9449355-c0fd-49b4-ad1e-18a81ab2ff40",
<   "groups": [
<     {
<       "$ref": "Groups/314442227164113",
<       "display": "administrators",
<       "type": "direct",
<       "value": "314442227164113"
<     },
<     {
<       "$ref": "Groups/1113162815373887",
<       "display": "bullseye",
<       "type": "direct",
<       "value": "1113162815373887"
<     },
<     {
<       "$ref": "Groups/789982242099003",
<       "display": "admins",
<       "type": "indirect",
<       "value": "789982242099003"
<     }
<   ],
<   "id": "8968085024078409",
<   "name": {
<     "familyName": "User",
<     "givenName": "Sanitized"
<   },
<   "schemas": [
<     "urn:ietf:params:scim:schemas:core:2.0:User",
<     "urn:ietf:params:scim:schemas:extension:workspace:2.0:User"
<   ],
<   "userName": "sanitized.user@email.com"
< } pid=9376 mutator=seq mutator=initialize mutator=seq mutator=PopulateCurrentUser sdk=true
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=DefineDefaultWorkspaceRoot
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=ExpandWorkspaceRoot
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=DefaultWorkspacePaths
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=SetVariables
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=ResolveResourceReferences
15:30:31 DEBUG GET /api/2.0/sql/warehouses
< HTTP/2.0 200 OK
< {
<   "warehouses": [
<     {
<       "auto_resume": true,
<       "auto_stop_mins": 5,
<       "channel": {},
<       "cluster_size": "2X-Small",
<       "creator_id": 267023646375506,
<       "creator_name": "8839cac0-6876-401f-9f3c-1bf9a2b055c1",
<       "enable_photon": true,
<       "enable_serverless_compute": true,
<       "id": "e391edeecd143b60",
<       "jdbc_url": "jdbc:spark://*******************.15.azuredatabricks.net:443/default;transportMode=http;ssl=1;Au... (55 more bytes)",
<       "max_num_clusters": 1,
<       "min_num_clusters": 1,
<       "name": "cm-dwh-sqlwh",
<       "num_active_sessions": 0,
<       "num_clusters": 0,
<       "odbc_params": {
<         "hostname": "*******************.15.azuredatabricks.net",
<         "path": "/sql/1.0/warehouses/e391edeecd143b60",
<         "port": 443,
<         "protocol": "https"
<       },
<       "size": "XXSMALL",
<       "spot_instance_policy": "COST_OPTIMIZED",
<       "state": "STOPPED",
<       "tags": {
<         "custom_tags": [
<           {
<             "key": "environment",
<             "value": "test"
<           }
<         ]
<       },
<       "warehouse_type": "PRO"
<     }
<   ]
< } pid=9376 mutator=seq mutator=initialize mutator=seq mutator=ResolveResourceReferences sdk=true
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=ResolveVariableReferences
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=SetRunAs
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=OverrideCompute
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=ProcessTargetMode
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=ExpandPipelineGlobPaths
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=TranslatePaths
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=PythonWrapperWarning
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=ApplyBundlePermissions
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=metadata.AnnotateJobs
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=terraform.Initialize
15:30:31 DEBUG Using Terraform at C:\repos\cm-dwh-databricks-jobs\src\pbix_ingest\.databricks\bundle\test\bin\terraform.exe pid=9376 mutator=seq mutator=initialize mutator=seq mutator=terraform.Initialize
15:30:31 DEBUG Environment variables for Terraform: DATABRICKS_HOST, DATABRICKS_CLI_PATH, DATABRICKS_AUTH_TYPE, HOME, USERPROFILE, PATH, TMP pid=9376 mutator=seq mutator=initialize mutator=seq 
mutator=terraform.Initialize
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=initialize mutator=seq mutator=scripts.postinit
15:30:31 DEBUG No script defined for postinit, skipping pid=9376 mutator=seq mutator=initialize mutator=seq mutator=scripts.postinit
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build
15:30:31  INFO Phase: build pid=9376 mutator=seq mutator=build
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=scripts.prebuild
15:30:31 DEBUG No script defined for prebuild, skipping pid=9376 mutator=seq mutator=build mutator=seq mutator=scripts.prebuild
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=artifacts.DetectPackages
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=artifacts.DetectPackages mutator=seq
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=artifacts.DetectPackages mutator=seq mutator=artifacts.whl.AutoDetect
15:30:31  INFO No local wheel tasks in databricks.yml config, skipping auto detect pid=9376 mutator=seq mutator=build mutator=seq mutator=artifacts.DetectPackages mutator=seq mutator=artifacts.whl.AutoDetect
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=artifacts.DetectPackages mutator=seq mutator=artifacts.whl.DefineArtifactsFromLibraries
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=artifacts.inferAll
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=artifacts.inferAll mutator=seq
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=artifacts.BuildAll
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=artifacts.BuildAll mutator=seq
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=scripts.postbuild
15:30:31 DEBUG No script defined for postbuild, skipping pid=9376 mutator=seq mutator=build mutator=seq mutator=scripts.postbuild
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=build mutator=seq mutator=ResolveVariableReferences
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=deploy
15:30:31  INFO Phase: deploy pid=9376 mutator=seq mutator=deploy
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=deploy mutator=seq
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=scripts.predeploy
15:30:31 DEBUG No script defined for predeploy, skipping pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=scripts.predeploy
15:30:31 DEBUG Apply pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=lock:acquire
15:30:31  INFO Acquiring deployment lock (force: false) pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=lock:acquire
15:30:32 DEBUG POST /api/2.0/workspace-files/import-file/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock?overwrite=false
> {
>   "AcquisitionTime": "2024-03-18T15:30:31.6148699+01:00",
>   "ID": "9efbfe38-2ba5-4649-b95c-ae5c0db6c521",
>   "IsForced": false,
>   "User": "sanitized.user@email.com"
> }
< HTTP/2.0 200 OK pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=lock:acquire sdk=true
15:30:32 DEBUG GET /api/2.0/workspace/get-status?path=/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock
< HTTP/2.0 200 OK
< {
<   "created_at": 1710772231279,
<   "modified_at": 1710772231279,
<   "object_id": 3594829461582531,
<   "object_type": "FILE",
<   "path": "/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock",
<   "resource_id": "3594829461582531"
< } pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=lock:acquire sdk=true
15:30:32 DEBUG GET /api/2.0/workspace/export?direct_download=true&path=/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock
< HTTP/2.0 200 OK
< <Streaming response> pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=lock:acquire sdk=true
15:30:32 DEBUG Apply pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred
15:30:32 DEBUG Apply pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=seq
15:30:32 DEBUG Apply pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=seq mutator=terraform:state-pull
15:30:32  INFO Opening remote state file pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=seq mutator=terraform:state-pull
15:30:32 DEBUG GET /api/2.0/workspace/get-status?path=/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/terraform.tfstate
< HTTP/2.0 200 OK
< {
<   "created_at": 1710772140715,
<   "modified_at": 1710772140715,
<   "object_id": 3594829461582527,
<   "object_type": "FILE",
<   "path": "/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/terraform.tfstate",
<   "resource_id": "3594829461582527"
< } pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=seq mutator=terraform:state-pull sdk=true
15:30:32 DEBUG GET /api/2.0/workspace/export?direct_download=true&path=/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/terraform.tfstate
< HTTP/2.0 200 OK
< <Streaming response> pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=seq mutator=terraform:state-pull sdk=true
15:30:32  INFO Local state is the same or newer, ignoring remote state pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=seq mutator=terraform:state-pull
15:30:32 DEBUG Apply pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=seq mutator=check-running-resources
15:30:32 ERROR Error: exit status 1

Error: Failed to load plugin schemas

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable
provider "registry.terraform.io/databricks/databricks"..
 pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=seq mutator=check-running-resources
15:30:32 ERROR Error: exit status 1

Error: Failed to load plugin schemas

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable
provider "registry.terraform.io/databricks/databricks"..
 pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=seq
15:30:32 DEBUG Apply pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=lock:release
15:30:32  INFO Releasing deployment lock pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=lock:release
15:30:33 DEBUG GET /api/2.0/workspace/get-status?path=/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock
< HTTP/2.0 200 OK
< {
<   "created_at": 1710772231279,
<   "modified_at": 1710772231279,
<   "object_id": 3594829461582531,
<   "object_type": "FILE",
<   "path": "/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock",
<   "resource_id": "3594829461582531"
< } pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=lock:release sdk=true
15:30:33 DEBUG GET /api/2.0/workspace/get-status?path=/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock
< HTTP/2.0 200 OK
< {
<   "created_at": 1710772231279,
<   "modified_at": 1710772231279,
<   "object_id": 3594829461582531,
<   "object_type": "FILE",
<   "path": "/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock",
<   "resource_id": "3594829461582531"
< } pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=lock:release sdk=true
15:30:33 DEBUG GET /api/2.0/workspace/export?direct_download=true&path=/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock
< HTTP/2.0 200 OK
< <Streaming response> pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=lock:release sdk=true
15:30:33 DEBUG POST /api/2.0/workspace/delete
> {
>   "path": "/Users/sanitized.user@email.com/.bundle/pbix_ingest/test/state/deploy.lock"
> }
< HTTP/2.0 200 OK
< {} pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred mutator=lock:release sdk=true
15:30:33 ERROR Error: exit status 1

Error: Failed to load plugin schemas

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable
provider "registry.terraform.io/databricks/databricks"..
 pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq mutator=deferred
15:30:33 ERROR Error: exit status 1

Error: Failed to load plugin schemas

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable
provider "registry.terraform.io/databricks/databricks"..
 pid=9376 mutator=seq mutator=deploy mutator=seq mutator=seq
15:30:33 ERROR Error: exit status 1

Error: Failed to load plugin schemas

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable
provider "registry.terraform.io/databricks/databricks"..
 pid=9376 mutator=seq mutator=deploy mutator=seq
15:30:33 ERROR Error: exit status 1

Error: Failed to load plugin schemas

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable
provider "registry.terraform.io/databricks/databricks"..
 pid=9376 mutator=seq mutator=deploy
15:30:33 ERROR Error: exit status 1

Error: Failed to load plugin schemas

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable
provider "registry.terraform.io/databricks/databricks"..
 pid=9376 mutator=seq
Error: exit status 1

Error: Failed to load plugin schemas

Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable
provider "registry.terraform.io/databricks/databricks"..

15:30:33 ERROR failed execution pid=9376 exit_code=1 error="exit status 1\n\nError: Failed to load plugin schemas\n\nError while loading schemas for plugin components: Failed to obtain provider\nschema: Could not load the schema for provider\nregistry.terraform.io/databricks/databricks: failed to instantiate provider\n\"registry.terraform.io/databricks/databricks\" to obtain schema: 
unavailable\nprovider \"registry.terraform.io/databricks/databricks\"..\n"

@andrewnester
Contributor

@MAJVeld thanks a lot for the log, it's helpful! Do you have the fail_on_active_runs setting set somewhere in your bundle configuration?

@MAJVeld

MAJVeld commented Mar 18, 2024

I cannot check that at the moment, but will let you know in about 45 minutes by updating this comment.

For now, I do not recall actively setting that anywhere in the bundle config, but I may be wrong. It may have been set implicitly somewhere as a default.

@andrewnester I can confirm that the fail_on_active_runs property is not set anywhere in the Databricks bundle definition. The output of `databricks bundle validate -t <target_environment>` also does not include fail_on_active_runs anywhere, confirming that it is not set implicitly either.

@andrewnester
Contributor

I was able to reproduce the issue with the fail_on_active_runs setting on. The check for running resources happened too early in the chain, causing the error.

Separately, if this setting was not set anywhere in the bundle, then it's a different issue we might need to take a look at.

@jeduardo90
Author

jeduardo90 commented Mar 18, 2024

Hi @andrewnester

Thanks for asking which parameters we're using.

On my end yes, I was using the fail_on_active_runs flag.

Once I removed it from the deployment command, the deployment went fine.

Thanks for fixing it with the PR.

Regards,
Eduardo

github-merge-queue bot pushed a commit that referenced this issue Mar 18, 2024
## Changes
CheckRunningResource does `terraform.Show`, which (I believe) expects a valid
`bundle.tf.json`, which is only written later as part of `terraform.Write`.

With this PR, the order is changed.

Fixes #1286 

## Tests
Added regression E2E test
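The ordering bug described in this commit can be sketched in miniature (hypothetical step names, not the actual CLI code): a step that reads a file runs before the step that writes it, and swapping the two fixes the failure.

```python
# Minimal sketch of the ordering bug: check_running_resources depends on a
# config file that terraform_write produces, so it must run after it.
def terraform_write(state):
    state["bundle.tf.json"] = "{}"  # writes the config the next step depends on

def check_running_resources(state):
    if "bundle.tf.json" not in state:  # terraform.Show needs the config present
        raise RuntimeError("Failed to load plugin schemas")

def run(steps):
    state = {}
    for step in steps:
        step(state)
    return state

run([terraform_write, check_running_resources])    # fixed order: succeeds
# run([check_running_resources, terraform_write])  # buggy order: raises
```

This also explains why the error only surfaced with fail_on_active_runs set: without it, the running-resources check is skipped entirely.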
@andrewnester
Contributor

The fix was merged and will be released in the upcoming CLI release this week.
