---
page_title: "Troubleshooting Guide"
---

## How to troubleshoot your problem

If you have problems with code that uses the Databricks Terraform provider, follow these steps to solve them:

- Collect debug logs using the following command and attach the resulting `tf-debug.log`:

  ```sh
  TF_LOG=DEBUG DATABRICKS_DEBUG_TRUNCATE_BYTES=250000 terraform apply 2>&1 | tee tf-debug.log
  ```

- Open a new GitHub issue providing all information described in the issue template - debug logs, your Terraform code, Terraform & plugin versions, etc.

## Typical problems

### Data resources and Authentication is not configured errors

In Terraform 0.13 and later, data resources have the same dependency resolution behavior as managed resources. Most data resources make an API call to a workspace. If the workspace doesn't exist yet, an `authentication is not configured for provider` error is raised. To work around this issue and guarantee proper lazy authentication with data resources, add `depends_on = [azurerm_databricks_workspace.this]` or `depends_on = [databricks_mws_workspaces.this]` to the body of the data resource. This issue doesn't occur if the workspace is created in one module and the resources within the workspace are created in another. We do not recommend using Terraform 0.12 and earlier if your usage involves data resources.
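For example, a data resource that reads from a workspace created in the same configuration could be deferred like this (a sketch; the `databricks_current_user` data source and the `azurerm_databricks_workspace.this` resource name are illustrative):

```hcl
# Defer this data source read until the workspace actually exists,
# so the provider can authenticate against it.
data "databricks_current_user" "me" {
  depends_on = [azurerm_databricks_workspace.this]
}
```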

### Multiple Provider Configurations

The most common reason for technical difficulties is a missing `alias` attribute in `provider "databricks" {}` blocks, or a missing `provider` attribute in `resource "databricks_..." {}` blocks, when using multiple provider configurations. Please make sure to read the alias: Multiple Provider Configurations documentation article.
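A minimal sketch of two aliased provider configurations and a resource pinned to one of them (the workspace hosts and the use of `databricks_token` here are illustrative):

```hcl
provider "databricks" {
  alias = "workspace1"
  host  = "https://adb-1111111111111111.11.azuredatabricks.net"
}

provider "databricks" {
  alias = "workspace2"
  host  = "https://adb-2222222222222222.22.azuredatabricks.net"
}

# Without the provider attribute, Terraform would look for a default
# (unaliased) provider configuration, which is not defined here.
resource "databricks_token" "pat" {
  provider = databricks.workspace1
  comment  = "Terraform-managed token"
}
```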

### Error while installing: registry does not have a provider

```
Error while installing hashicorp/databricks: provider registry
registry.terraform.io does not have a provider named
registry.terraform.io/hashicorp/databricks
```

If you see the above error, it might be because the `required_providers` block is not defined in every module that uses the Databricks Terraform provider. Create a `versions.tf` file with the following contents:

```hcl
# versions.tf
terraform {
  required_providers {
    databricks = {
      source  = "databricks/databricks"
      version = "1.0.1"
    }
  }
}
```

... and copy the file into every module in your codebase. Our recommendation is to skip the `version` field in module-level `versions.tf` files and keep it only at the environment level:

```
├── environments
│   ├── sandbox
│   │   ├── README.md
│   │   ├── main.tf
│   │   └── versions.tf
│   └── production
│       ├── README.md
│       ├── main.tf
│       └── versions.tf
└── modules
    ├── first-module
    │   ├── ...
    │   └── versions.tf
    └── second-module
        ├── ...
        └── versions.tf
```
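Under that layout, a module-level `versions.tf` would pin only the provider source and leave the version constraint to the environment. A sketch, assuming the directory structure above:

```hcl
# modules/first-module/versions.tf
# No version field here: the environment-level versions.tf
# (e.g. environments/production/versions.tf) pins the version.
terraform {
  required_providers {
    databricks = {
      source = "databricks/databricks"
    }
  }
}
```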

### Error: Failed to install provider

Running the `terraform init` command, you may see a `Failed to install provider` error if you didn't check in `.terraform.lock.hcl` to source code version control:

```
Error: Failed to install provider

Error while installing databricks/databricks: v1.0.0: checksum list has no SHA-256 hash for "https://github.com/databricks/terraform-provider-databricks/releases/download/v1.0.0/terraform-provider-databricks_1.0.0_darwin_amd64.zip"
```

You can fix it by following three simple steps:

- Replace `databrickslabs/databricks` with `databricks/databricks` in all your `.tf` files with the `python3 -c "$(curl -Ls https://dbricks.co/updtfns)"` command.
- Run the `terraform state replace-provider databrickslabs/databricks databricks/databricks` command and approve the changes. See the Terraform CLI docs for more information.
- Run `terraform init` to verify everything is working.

The `terraform apply` command should work as expected now.

Alternatively, you can find the hashes of the last 30 provider versions in `versions-lock.hcl`. As a temporary measure, you can lock on a prior version by following these steps:

- Copy `versions-lock.hcl` to the root folder of your Terraform project.
- Rename it to `.terraform.lock.hcl`.
- Run `terraform init` and verify the provider is installed.
- Be sure to commit the new `.terraform.lock.hcl` file to your source code repository.

### Error: Failed to query available provider packages

See the same steps as in Error: Failed to install provider.

### Error: Deployment name cannot be used until a deployment name prefix is defined

You can get this error during provisioning of a Databricks workspace. It arises when you're trying to set `deployment_name` but no deployment prefix has been set on the Databricks side (you can't set it yourself). The problem can be solved by one of the following methods:

  1. Contact your Databricks representative, like Solutions Architect, Customer Success Engineer, Account Executive, or Partner Solutions Architect to set a deployment prefix for your account.

  2. Comment out the `deployment_name` parameter to create a workspace with the default URL: `dbc-XXXXXX.cloud.databricks.com`.
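A sketch of the second option, with `deployment_name` commented out (the surrounding `databricks_mws_workspaces` arguments are illustrative and incomplete):

```hcl
resource "databricks_mws_workspaces" "this" {
  account_id     = var.databricks_account_id
  workspace_name = "my-workspace"
  aws_region     = "us-east-1"
  # ... other required arguments omitted for brevity ...

  # Requires a deployment prefix set by Databricks for your account;
  # leave it commented out to get the default dbc-XXXXXX URL.
  # deployment_name = "my-deployment"
}
```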

### Azure KeyVault cannot yet be configured for Service Principal authorization

This is a well-known limitation of Azure Databricks - currently you cannot create an Azure Key Vault-based secret scope when authenticated as a service principal, because the OBO (on-behalf-of) flow is not yet supported for service principals on the Azure Active Directory side. Use `azure-cli` authentication with a user principal to create AKV-based secret scopes.
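A sketch of an AKV-backed secret scope, which must be applied while authenticated as a user principal (the `azurerm_key_vault.this` resource name is illustrative):

```hcl
# Apply with azure-cli authentication as a user principal,
# not as a service principal.
resource "databricks_secret_scope" "kv" {
  name = "keyvault-backed"

  keyvault_metadata {
    resource_id = azurerm_key_vault.this.id
    dns_name    = azurerm_key_vault.this.vault_uri
  }
}
```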