Databricks provider not found when redeploying #1286
Hi @jeduardo90! It's not entirely clear what issue you're running into, hence a few clarifying questions.
Do you do this locally on your machine without the .databricks folder?
You do this on the same machine as the first time, correct?
This error might mean you have stale TF cache or firewall issues. Could you double check this?
TF state is an internal detail of DABs and subject to change; any manual changes to these files can lead to unexpected behaviour, so we recommend not modifying this state by hand.
Hi @andrewnester, thanks for the quick reply!
Yes, you are correct, the .databricks folder.
I do this both locally and on an Azure DevOps agent, and I get the same result in both cases. The only time I am able to redeploy successfully is when the .databricks folder created by the first deployment is present in the same folder as the asset bundle configuration. I can't keep the .databricks folder generated by the dev deployment in the repo when I'm deploying to different environment workspaces.
I have no firewall issues, but I'm not sure about a stale TF cache. Where is this cache stored?
You're right, it was just a test on my side. When I try to access registry.terraform.io/databricks/databricks, it returns a 404, and that's what puzzled me.
I'm using 0.213.0 on my local machine, but the latest one on the Azure DevOps agent.
This is expected. Just to double-check that I understand the issue correctly:
Is this correct? If not, can you clarify what's different? Is there anything else happening between steps 2 and 3? Thanks!
Yes.
Also yes. Then I remove the .databricks folder that is generated by the first deployment, to simulate the CI/CD process. I assume, based on the initial bundle configuration, that the .databricks folder is not meant to be source controlled. And then I try to deploy again and get the above-mentioned error: Error while loading schemas for plugin components: Failed to obtain provider
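For reference, a minimal shell sketch of the sequence described above; the `dev` target name is an assumption, and the bundle root is assumed to be the current directory:

```bash
# First deploy from a clean checkout: the CLI creates the .databricks
# folder (Terraform working directory and state) and the deploy succeeds.
databricks bundle deploy -t dev

# Simulate a fresh CI/CD agent by removing the generated folder.
rm -rf .databricks

# The second deploy then fails with:
#   Error while loading schemas for plugin components: Failed to obtain
#   provider schema: Could not load the schema for provider
#   registry.terraform.io/databricks/databricks ...
databricks bundle deploy -t dev
```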
You're right that the .databricks folder should not be source controlled. Also, could you try upgrading your local copy of the CLI to the latest version and confirm that that doesn't solve the issue?
I can confirm this issue in the following situation / setup, using Databricks CLI version 0.215.0.

We have set up a simple Azure DevOps pipeline to deploy a few workflows using a Databricks Asset Bundle. Two target workspaces are defined (test and prod) and are present as separate stages in the Azure DevOps pipeline. The prod workspace can only be deployed after the test stage has succeeded. In the pipeline, the git repository (without the .databricks folder) is checked out and a bundle deployment is issued.

The first deployment to the test workspace succeeds without any issues, confirming that all communication between Azure DevOps and the target Databricks workspace works as expected. The expected Databricks workflows are created and can be run.

When triggering the Azure DevOps pipeline again, a new Linux-based build agent is used, the git repo is checked out again, and the deployment of the Databricks Asset Bundle fails with the error message mentioned above. Issuing a direct deployment of the Databricks Asset Bundle from our (Windows) developer machine, where the .databricks folder from the first deployment is still present, succeeds. We have no firewall issues.

Addition: renaming the .databricks folder on the developer machine reproduces the same error there, so the presence of that folder appears to determine whether a redeploy succeeds (see the sketch after this comment).

Feel free to set up a direct call with me should you have any further questions, if that would be of any help.
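A sketch of that local reproduction, assuming a `test` target; the rename stands in for the missing folder on a fresh build agent:

```bash
# Hide the folder left behind by the first deployment.
mv .databricks .databricks.bak

# Fails with the provider schema error, as on the Linux build agents.
databricks bundle deploy -t test

# Restore the folder and the redeploy succeeds again.
mv .databricks.bak .databricks
databricks bundle deploy -t test
```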
Hi @pietern, I have upgraded the Databricks CLI locally and I'm still facing the same issue. My issue is exactly the same as @MAJVeld described. This is blocking us from having a robust CI/CD process using Databricks Asset Bundles. Should a customer support incident be raised with Databricks? Thanks,
Hi @jeduardo90 @MAJVeld! Thanks a lot for the details.
I don't think it should be changed: registry.terraform.io/databricks/databricks is the expected way of referencing the provider (even though it does not resolve to a URL), and this line is generated by TF itself. If it were incorrect, it would likely fail at the first deploy as well.

Other than that, could you check whether you have the databricks terraform provider stored in your local cache? Also, could you run the deploy with trace logging enabled and share the output?

Thanks!
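One way to capture such a trace, assuming the CLI's global `--log-level` flag accepts a `trace` level (the log file name is arbitrary):

```bash
# Logs go to stderr; keep them in a file so that secrets can be
# sanitized before sharing.
databricks bundle deploy -t test --log-level trace 2> deploy-trace.log
```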
@andrewnester Here are the trace details as requested. I sanitized some of the contents.
@MAJVeld thanks a lot for the log, it's helpful! Do you have fail_on_active_runs set anywhere in your bundle configuration, or do you pass --fail-on-active-runs to the deploy command?
For now, I do not recall setting that actively somewhere in the bundle config, but I may be wrong. It may have been set implicitly somewhere as a default. @andrewnester I herewith confirm that the setting is not present anywhere in our bundle configuration.
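A quick way to double-check this across a bundle (a sketch; the `resources/` include directory is an assumption, adjust to your bundle layout):

```bash
# Search the root config and any included files for the setting.
grep -rn "fail_on_active_runs" databricks.yml resources/
```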
I was able to reproduce the issue with the --fail-on-active-runs flag set. Separately, if this setting was not set anywhere in the bundle, then it's a separate issue which we might need to take a look at
Thanks for asking which parameters we're using. On my end, yes, I was using the --fail-on-active-runs flag. Once I removed it from the deployment command, the deployment went fine. Thanks for fixing it with the PR. Regards,
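In other words, until the fixed release is out, the workaround discussed above amounts to dropping the flag (the `prod` target name is illustrative):

```bash
# Triggered the schema error when redeploying from a clean checkout:
databricks bundle deploy -t prod --fail-on-active-runs

# Works as a temporary workaround:
databricks bundle deploy -t prod
```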
## Changes
CheckRunningResource does `terraform.Show`, which (I believe) expects a valid `bundle.tf.json` that is only written as part of `terraform.Write` later. With this PR the order is changed. Fixes #1286

## Tests
Added a regression E2E test
The fix was merged and will be released in the upcoming CLI release this week |
When I try to deploy Asset Bundles, I am able to do it successfully for the first time.
However, when I try to do it a second time without the .bundle folder locally, I get the following error message:
```
Error while loading schemas for plugin components: Failed to obtain provider
schema: Could not load the schema for provider
registry.terraform.io/databricks/databricks: failed to instantiate provider
"registry.terraform.io/databricks/databricks" to obtain schema: unavailable
```
When I change the provider in the tfstate file in the target workspace to registry.terraform.io/providers/databricks/databricks, I am able to deploy successfully, but instead of updating the jobs I have created on the first deploy, it creates new ones with exactly the same names.
How can I deploy the same bundle multiple times without having the .bundle folder (which gets generated at deployment time)?