
Terraform apply does not return warnings from sentinel policies #353

Open
dkyanakiev opened this issue Jul 25, 2023 · 3 comments


@dkyanakiev

Hi there,

Terraform Version

Terraform v1.3.0
on darwin_arm64
+ provider registry.terraform.io/hashicorp/nomad v1.4.20

Nomad Version

build: 1.5.1+ent

Provider Configuration


provider "nomad" {
  address = "https://<my-nomad-cluster>:<port>"
  region  = "us-west-2"
}

Affected Resource(s)


  • nomad_job

Terraform Configuration Files

variable "namespace" { 
    type = string 
    default="monitoring" 
}

variable "job_name" { 
    type = string
    default="consul-exporter-two" 
}


job "consul-exporter-two" {
  name        = var.job_name
  namespace   = "my-namespace"
  type        = "service"
  region      = "us-west-2"
  datacenters = ["us-west-2a", "us-west-2b"]
  # system (admin components e.g. metrics and logs collectors)
  priority = 90

  reschedule {
    delay          = "30s"
    delay_function = "exponential"
    max_delay      = "75s"
    unlimited      = true
  }



  group "consul-exporter" {
    count = 1

    network {
      port "consul-exporter" {
        to = 9107
      }
    }
    task "consul-exporter" {
        resources {
        cpu    = 100
        memory = 100
      }
      driver = "docker"
      config {
        args = [
          "--consul.health-summary",
          "--consul.ca-file=/secrets/server.crt",
          "--consul.cert-file=/secrets/server.crt",
          "--consul.key-file=/secrets/server.key",
          "--consul.insecure",
          "--consul.request-limit=100",

        ]
        
        image = "<registry>/consul-exporter:0.0.3"
        ports = [
          "consul-exporter",
        ]
      }

      service {
        name = "${var.job_name}-${var.namespace}"
        port = "consul-exporter"

        check {
          type            = "http"
          path            = "/health"
          interval        = "10s"
          timeout         = "3s"
          protocol        = "http"
          tls_skip_verify = true
        }
      }
    }
  }
}

Debug Output

https://gist.github.com/dkyanakiev/e26aab001fd7b0ea6c47aaf38be45e94

Expected Behavior

When running terraform apply, we expect to receive the following message:

nomad_job.test: Creating...
nomad_job.test: Creation complete after 1s [id=consul-exporter-two]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.
Job Warnings:
1 warning:

* task-docker-telemetry : Result: false (allowed failure based on level)

Description:
   * Validate Docker container labels that are used for Telemetry related  purposes (logging metadata, metric labels)  

Print messages:

!! Missing Docker task container labels !!
....

Actual Behavior

We currently have multiple Sentinel policies in place to track and enforce various things.
We noticed that policies set to the warning level never actually print anything when you run terraform apply.
If a policy is meant to stop you from deploying, you immediately see the error on apply, but warnings are dropped:

nomad_job.test: Creating...
nomad_job.test: Creation complete after 1s [id=consul-exporter-two]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Steps to Reproduce

  1. terraform apply

Important Factoids

One of the Sentinel policies we have applied: https://gist.github.com/dkyanakiev/9ef5a958eea67f6a837cfaa2c4a2ad3f
It checks for labels. (Note: we did notice another issue in how labels are rendered, but that's not relevant to the overall result.)
Any warning-level policy would be enough to reproduce this.

References

https://github.com/hashicorp/nomad/blob/v1.5.8/command/job_plan.go#L317
We noticed that the Nomad CLI actually looks for warnings and displays them, so it should just be a matter of looking for those warnings at the resource level. Currently the provider only checks for errors, but warnings should be surfaced as Terraform warnings as well.
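
For illustration, here is a minimal sketch of what that could look like in the provider, assuming an SDKv2 CreateContext-style function (resourceJobCreate here is illustrative, not the provider's actual code):

package nomad

import (
    "context"
    "strings"

    "github.com/hashicorp/nomad/api"
    "github.com/hashicorp/nomad/jobspec"
    "github.com/hashicorp/terraform-plugin-sdk/v2/diag"
    "github.com/hashicorp/terraform-plugin-sdk/v2/helper/schema"
)

// Illustrative sketch only: register the job and turn the Warnings field
// of Nomad's register response into a Terraform warning diagnostic
// instead of dropping it.
func resourceJobCreate(ctx context.Context, d *schema.ResourceData, meta interface{}) diag.Diagnostics {
    var diags diag.Diagnostics
    client := meta.(*api.Client) // assumed shape of the provider meta

    // Parse the jobspec attribute into an *api.Job.
    job, err := jobspec.Parse(strings.NewReader(d.Get("jobspec").(string)))
    if err != nil {
        return diag.FromErr(err)
    }

    resp, _, err := client.Jobs().Register(job, nil)
    if err != nil {
        return diag.FromErr(err)
    }

    // The Nomad API already returns Sentinel (and other) warnings here;
    // today only the error path is checked, so warnings are lost.
    if resp.Warnings != "" {
        diags = append(diags, diag.Diagnostic{
            Severity: diag.Warning,
            Summary:  "Nomad job registered with warnings",
            Detail:   resp.Warnings,
        })
    }

    d.SetId(*job.ID)
    return diags
}

The key change is that resp.Warnings, which the Nomad API already returns on register, gets appended as a diag.Warning instead of being ignored.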

@lgfa29
Contributor

lgfa29 commented Jul 26, 2023

Thanks for the suggestion @dkyanakiev.

I believe warnings are only supported in the new provider framework, so we currently don't have a way to surface messages like these.

I will keep this issue open so we can work on it whenever we migrate to the provider framework.

@dkyanakiev
Author

Hmm, I see. What about using https://developer.hashicorp.com/terraform/plugin/log/managing#log-levels + https://developer.hashicorp.com/terraform/plugin/log/writing at the warn level, with the caveat that people might have to set TF_LOG_PROVIDER=WARN? It seems to be supported in SDKv2. @lgfa29 - I'll try to test it out myself.
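
For reference, a minimal sketch of that approach, assuming the provider has a terraform-plugin-log-compatible context available (logRegisterWarnings is a hypothetical helper):

package nomad

import (
    "context"

    "github.com/hashicorp/terraform-plugin-log/tflog"
)

// Hypothetical helper sketching the tflog fallback: emit register
// warnings at warn level rather than as a diagnostic. Terraform only
// shows these when TF_LOG_PROVIDER=WARN (or a lower level) is set.
func logRegisterWarnings(ctx context.Context, jobID, warnings string) {
    if warnings == "" {
        return
    }
    tflog.Warn(ctx, "Nomad job registered with warnings", map[string]interface{}{
        "job_id":   jobID,
        "warnings": warnings,
    })
}

With something like this in place, running terraform apply with TF_LOG_PROVIDER=WARN set should surface the warnings in the provider log output.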

@lgfa29
Contributor

lgfa29 commented Jul 26, 2023

Ah yes, we could use that for now. Usually those logs are meant for dev debugging, but given the lack of alternatives, that's probably the best we can do at this point.
