fix: better error messages during the runs (#16)
* fix: better error messages during the runs

This commit improves the logging for this GitHub Action
so developers can understand most of the information from
the console output without needing to look into the HTML report.

This commit also removes a couple of simple functions and embeds
their functionality into the main function without losing
readability.

The HTML report is downloaded and stored for Splunk
repositories anyway.

* chore: temporarily use Dockerfile in action.yml for testing

* fix: make 15 retries for some bigger add-ons

* fix: up to 20 retries for getting the results

* refactor: combine 2 lines into 1
artemrys authored Sep 20, 2023
1 parent ebc2ade commit 78f322b
Showing 6 changed files with 546 additions and 142 deletions.
2 changes: 1 addition & 1 deletion .coveragerc
@@ -1,2 +1,2 @@
[run]
plugins = covdefaults
plugins = covdefaults
6 changes: 3 additions & 3 deletions .github/workflows/build-test-release.yaml
@@ -30,8 +30,7 @@ jobs:
- uses: actions/checkout@v3
with:
persist-credentials: false
- name: Set up Python
uses: actions/setup-python@v4
- uses: actions/setup-python@v4
with:
python-version: "3.11"
- name: Install dependencies
@@ -41,7 +40,7 @@ jobs:
pip install -r requirements-dev.txt
- name: Test
run: |
python -m pytest -v test/unit
python -m pytest -v test/unit --cov
build_action:
runs-on: ubuntu-latest
@@ -90,6 +89,7 @@ jobs:
git_committer_email: ${{ secrets.SA_GH_USER_EMAIL }}
gpg_private_key: ${{ secrets.SA_GPG_PRIVATE_KEY }}
passphrase: ${{ secrets.SA_GPG_PASSPHRASE }}

update-semver:
if: startsWith(github.ref, 'refs/tags/v')
needs: build_action
15 changes: 7 additions & 8 deletions README.md
@@ -25,33 +25,32 @@ jobs:
| Name | Description | Notes | Default |
|-----------------|--------------------------------------------------------------------------------|--------------|---------|
| `username` | Splunk.com user used to login to the appinspect API | **required** | |
| `password` | Splunk.com password used to login to the appinspect API | **required** | |
| `username` | Splunk.com user used to login to the AppInspect API | **required** | |
| `password` | Splunk.com password used to login to the AppInspect API | **required** | |
| `app_path` | Path to the directory where addon is located, without filename | **required** | |
| `included_tags` | Comma separated list of [tags](#reference-docs) to include in appinspect job | | None |
| `excluded_tags` | Comma separated list of [tags](#reference-docs) to exclude from appinspect job | | None |
| `included_tags` | Comma separated list of [tags](#reference-docs) to include in AppInspect job | | None |
| `excluded_tags` | Comma separated list of [tags](#reference-docs) to exclude from AppInspect job | | None |
| `log_level` | Python logging level for action | | `INFO` |

You can explicitly include and exclude tags from a validation by including additional options in your request. Specifically, using the `included_tags` and `excluded_tags` options includes and excludes the tags you specify from a validation. If no tags are specified, all checks are run and no tags are excluded from the validation.
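
Internally, the action only forwards the options you actually set. A minimal sketch of that behaviour, mirroring the `build_payload` helper in `main.py` (the tag values below are illustrative; see the reference docs for real tag names):

```python
from typing import Dict


def build_payload(included_tags: str, excluded_tags: str) -> Dict[str, str]:
    # Only non-empty inputs become part of the AppInspect request payload.
    payload = {}
    if included_tags != "":
        payload["included_tags"] = included_tags
    if excluded_tags != "":
        payload["excluded_tags"] = excluded_tags
    return payload


print(build_payload("cloud", ""))   # {'included_tags': 'cloud'}
print(build_payload("", "manual"))  # {'excluded_tags': 'manual'}
print(build_payload("", ""))        # {}
```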

Appinspect failures are handled via `.appinspect_api.expect.yaml` file. To make exceptions the file should look like that:
AppInspect failures are handled via the `.appinspect_api.expect.yaml` file. To make exceptions, the file should look like this:

```yaml
name_of_the_failed_checks:
comment: jira-123
```

If you are a Splunker please specify jira issue in the comment where reason for exception is granted and explained
If you are a Splunker, please specify a JIRA issue in the comment where the reason for the exception is granted and explained.

### Reference Docs

For more info on check criteria, tags and the API see the [Splunk AppInspect reference](https://dev.splunk.com/enterprise/reference/appinspect).


### Differences between v2

Missing parameters:

- `failOnError` - hardcoded to be true
- `failOnWarning` - hardcoded to be false
- `ignoredChecks` - hardcoded to be None
2 changes: 1 addition & 1 deletion action.yml
@@ -25,4 +25,4 @@ inputs:
required: false
runs:
using: "docker"
image: docker://ghcr.io/splunk/appinspect-api-action/appinspect-api-action:v3.0.1
image: Dockerfile
150 changes: 70 additions & 80 deletions main.py
@@ -11,6 +11,7 @@
from typing import Dict, Any, Tuple, Callable, Sequence, Optional, List

NUM_RETRIES = 3
APPINSPECT_EXPECT_FILENAME = ".appinspect_api.expect.yaml"


class CouldNotAuthenticateException(Exception):
@@ -21,12 +22,13 @@ class CouldNotRetryRequestException(Exception):
pass


class AppinspectChecksFailuresException(Exception):
pass


class AppinspectFailures(Exception):
pass
logger = logging.getLogger("splunk_appinspect_api")
logger.setLevel(logging.INFO)
formatter = logging.Formatter("%(levelname)s: %(message)s")
stream_handler = logging.StreamHandler()
stream_handler.setLevel(logging.INFO)
stream_handler.setFormatter(formatter)
logger.addHandler(stream_handler)


def _retry_request(
@@ -42,17 +44,17 @@ def _retry_request(
sleep: Callable[[float], Any] = time.sleep,
rand: Callable[[], float] = random.random,
validation_function: Callable[[requests.Response], bool] = lambda _: True,
):
) -> requests.Response:
reason = ""
for retry_num in range(num_retries):
if retry_num > 0:
sleep_time = rand() + retry_num
logging.info(
logger.info(
f"Sleeping {sleep_time} seconds before retry "
f"{retry_num} of {num_retries - 1}"
)
if reason:
logging.info(reason)
logger.info(reason)
sleep(sleep_time)
response = requests.request(
method,
@@ -73,7 +75,6 @@
reason = f"response status code: {response.status_code}, for message: {error_message}"
continue
if not validation_function(response):
logging.info("Response did not pass the validation, retrying...")
continue
return response
raise CouldNotRetryRequestException()
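
The polling change in this commit (more retries, a jittered sleep that grows with every attempt, and a validation callback that decides whether a response is final) follows the pattern above. A minimal, self-contained sketch of that pattern, with illustrative values rather than the action's own code:

```python
import random
import time
from typing import Any, Callable, Dict


def retry_with_validation(
    poll: Callable[[], Dict[str, Any]],
    num_retries: int = 20,
    sleep: Callable[[float], Any] = time.sleep,
    rand: Callable[[], float] = random.random,
    is_done: Callable[[Dict[str, Any]], bool] = lambda _: True,
) -> Dict[str, Any]:
    for retry_num in range(num_retries):
        if retry_num > 0:
            # Sleep a little longer on every attempt, with jitter from rand().
            sleep(rand() + retry_num)
        result = poll()
        if is_done(result):
            return result
    raise RuntimeError(f"No successful result after {num_retries} attempts")


# Pretend the third poll finally reports SUCCESS; no real HTTP involved.
_responses = iter(
    [{"status": "PROCESSING"}, {"status": "PROCESSING"}, {"status": "SUCCESS"}]
)
print(
    retry_with_validation(
        poll=lambda: next(_responses),
        sleep=lambda _: None,  # skip the real waiting in this sketch
        is_done=lambda response: response["status"] == "SUCCESS",
    )
)
```
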
@@ -96,18 +97,18 @@ def _download_report(


def login(username: str, password: str) -> requests.Response:
logging.debug("Sending request to retrieve login token")
logger.debug("Sending request to retrieve login token")
try:
return _retry_request(
"GET",
"https://api.splunk.com/2.0/rest/login/splunk",
auth=(username, password),
)
except CouldNotAuthenticateException:
logging.error("Credentials are not correct, please check the configuration.")
logger.error("Credentials are not correct, please check the configuration.")
sys.exit(1)
except CouldNotRetryRequestException:
logging.error("Could not get response after all retries, exiting...")
logger.error("Could not get response after all retries, exiting...")
sys.exit(1)


@@ -118,7 +119,7 @@ def validate(token: str, build: Path, payload: Dict[str, str]) -> requests.Respo
(build.name, open(build.as_posix(), "rb"), "application/octet-stream"),
)
]
logging.debug(f"Sending package `{build.name}` for validation")
logger.debug(f"Sending package `{build.name}` for validation")
try:
response = _retry_request(
"POST",
@@ -131,57 +132,58 @@
)
return response
except CouldNotAuthenticateException:
logging.error("Credentials are not correct, please check the configuration.")
logger.error("Credentials are not correct, please check the configuration.")
sys.exit(1)
except CouldNotRetryRequestException:
logging.error("Could not get response after all retries, exiting...")
logger.error("Could not get response after all retries, exiting...")
sys.exit(1)


def submit(token: str, request_id: str) -> requests.Response:
def submit(
token: str, request_id: str, seconds_to_wait: float = 60.0
) -> requests.Response:
def _validate_validation_status(response: requests.Response) -> bool:
is_successful = response.json()["status"] == "SUCCESS"
if is_successful:
logging.debug(
f'Response status is `{response.json()["status"]}`, "SUCCESS" expected.'
)
status = response.json()["status"]
is_successful = status == "SUCCESS"
if not is_successful:
logger.info(f'Response status is `{status}`, "SUCCESS" expected.')
return is_successful

# appinspect api needs some time to process the request
# if the response status will be "PROCESSING" wait 60s and make another call
# Splunk AppInspect API needs some time to process the request.
# If the response status is "PROCESSING", wait 60s and make another call.

# there is a problem with pycov marking this line as not covered - excluded from coverage
# There is a problem with pytest-cov marking this line as not covered - excluded from coverage.
try:
logging.debug("Submitting package")
logger.debug("Submitting package")
return _retry_request( # pragma: no cover
"GET",
f"https://appinspect.splunk.com/v1/app/validate/status/{request_id}",
headers={
"Authorization": f"bearer {token}",
},
rand=lambda: 60.0,
num_retries=10,
rand=lambda: seconds_to_wait,
num_retries=20,
validation_function=_validate_validation_status,
)
except CouldNotAuthenticateException:
logging.error("Credentials are not correct, please check the configuration.")
logger.error("Credentials are not correct, please check the configuration.")
sys.exit(1)
except CouldNotRetryRequestException:
logging.error("Could not get response after all retries, exiting...")
logger.error("Could not get response after all retries, exiting...")
sys.exit(1)


def download_json_report(
token: str, request_id: str, payload: Dict[str, Any]
) -> requests.Response:
logging.debug("Downloading response in json format")
logger.info("Downloading response in JSON format")
return _download_report(
token=token, request_id=request_id, payload=payload, response_type="json"
)


def download_and_save_html_report(token: str, request_id: str, payload: Dict[str, Any]):
logging.debug("Downloading report in html format")
logger.info("Downloading report in HTML format")
response = _download_report(
token=token, request_id=request_id, payload=payload, response_type="html"
)
@@ -191,17 +193,18 @@ def download_and_save_html_report(token: str, request_id: str, payload: Dict[str


def get_appinspect_failures_list(response_dict: Dict[str, Any]) -> List[str]:
logging.debug("Parsing json response to find failed checks\n")
logger.debug("Parsing JSON response to find failed checks")
reports = response_dict["reports"]
groups = reports[0]["groups"]

failed_tests_list = []

for group in groups:
for check in group["checks"]:
if check["result"] == "failure":
failed_tests_list.append(check["name"])
logging.debug(f"Failed appinspect check for name: {check['name']}\n")
logger.info(f"Failed AppInspect check for name: {check['name']}")
check_messages = check["messages"]
for check_message in check_messages:
logger.info(f"\t* {check_message['message']}")
return failed_tests_list
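
For context, the JSON shape this parser assumes can be sketched as follows; the check names and messages are invented for illustration, and a real AppInspect report carries many more fields:

```python
from typing import Any, Dict, List

report: Dict[str, Any] = {
    "reports": [
        {
            "groups": [
                {
                    "checks": [
                        {
                            "name": "check_example_failure",
                            "result": "failure",
                            "messages": [{"message": "Example failure message"}],
                        },
                        {
                            "name": "check_example_success",
                            "result": "success",
                            "messages": [],
                        },
                    ]
                }
            ]
        }
    ]
}


def failed_checks(response_dict: Dict[str, Any]) -> List[str]:
    # Same traversal as above: reports[0] -> groups -> checks, keep failures.
    return [
        check["name"]
        for group in response_dict["reports"][0]["groups"]
        for check in group["checks"]
        if check["result"] == "failure"
    ]


print(failed_checks(report))  # ['check_example_failure']
```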


@@ -210,55 +213,41 @@ def read_yaml_as_dict(filename_path: Path) -> Dict[str, str]:
try:
out_dict = yaml.safe_load(file)
except yaml.YAMLError as e:
logging.error(f"Can not read yaml file named {filename_path}")
logger.error(f"Can not read YAML file named {filename_path}")
raise e
return out_dict if out_dict else {}


def compare_failures(failures: List[str], expected: List[str]):
if sorted(failures) != sorted(expected):
logging.debug(f"Appinspect failures: {failures}")
logging.debug(f"Expected failures: {expected}")
raise AppinspectFailures


def parse_results(results: Dict[str, Any]):
print("\n======== AppInspect Api Results ========")
for metric, count in results["info"].items():
print(f"{metric:>15} : {count: <4}")
if results["info"]["error"] > 0 or results["info"]["failure"] > 0:
logging.warning("Error or failures found in AppInspect Report")
raise AppinspectChecksFailuresException


def build_payload(included_tags: str, excluded_tags: str) -> Dict[str, str]:
payload = {}
if included_tags != "":
payload["included_tags"] = included_tags
if excluded_tags != "":
payload["excluded_tags"] = excluded_tags

return payload


def compare_against_known_failures(response_json: Dict[str, Any], exceptions_file_path):
logging.info(
f"Comparing AppInspect Failures with `{exceptions_file_path.name}` file"
def compare_against_known_failures(
response_json: Dict[str, Any], exceptions_file_path: Path
) -> None:
logger.info(
f"Comparing AppInspect failures with `{exceptions_file_path.name}` file"
)
failures = get_appinspect_failures_list(response_json)

if exceptions_file_path.exists():
expected_failures = list(read_yaml_as_dict(exceptions_file_path).keys())
try:
compare_failures(failures, expected_failures)
except AppinspectFailures:
logging.error(
"Appinspect failures don't match appinspect.expect file, check for exceptions file"
if sorted(failures) != sorted(expected_failures):
logger.info(f"AppInspect failures: {failures}")
logger.info(f"Expected failures: {expected_failures}")
logger.error(
"AppInspect failures don't match appinspect.expect file, check for exceptions file"
)
sys.exit(1)
else:
logging.error(
f"File `{exceptions_file_path.name}` not found, please create `{exceptions_file_path.name}` file with exceptions\n" # noqa: E501
logger.error(
f"File `{exceptions_file_path.name}` not found, "
f"please create `{exceptions_file_path.name}` file with exceptions"
)
sys.exit(1)
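
Condensed, the comparison inlined above reads the expect file's keys and compares sorted lists of check names. A minimal sketch, assuming PyYAML is available (it already is a dependency of the action); the commented usage is hypothetical:

```python
from pathlib import Path
from typing import Any, Dict, List

import yaml


def expected_failures(expect_file: Path) -> List[str]:
    # The keys of the expect file are the accepted failed check names;
    # the values (e.g. `comment: jira-123`) are documentation only.
    content: Dict[str, Any] = yaml.safe_load(expect_file.read_text()) or {}
    return list(content.keys())


def failures_match(actual: List[str], expected: List[str]) -> bool:
    # Order is irrelevant; only the sets of check names are compared.
    return sorted(actual) == sorted(expected)


# Hypothetical usage; the file name matches APPINSPECT_EXPECT_FILENAME above.
# expect_file = Path(".appinspect_api.expect.yaml")
# print(failures_match(["check_example_failure"], expected_failures(expect_file)))
```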

@@ -274,41 +263,42 @@ def main(argv: Optional[Sequence[str]] = None):
parser.add_argument("excluded_tags")
parser.add_argument("log_level")

appinspect_expect_filename = ".appinspect_api.expect.yaml"

args = parser.parse_args(argv)

logging.basicConfig(level=args.log_level)
logger.setLevel(args.log_level)

logging.info(
f"app_path={args.app_path}, included_tags={args.included_tags}, excluded_tags={args.excluded_tags}"
logger.info(
f"Running Splunk AppInspect API for app_path={args.app_path}, "
f"included_tags={args.included_tags}, excluded_tags={args.excluded_tags}"
)
build = Path(args.app_path)

login_response = login(args.username, args.password)
token = login_response.json()["data"]["token"]
logging.debug("Successfully received token")
logger.info("Successfully received token after login")

payload = build_payload(args.included_tags, args.excluded_tags)
logging.debug(f"Validation payload: {payload}")
logger.info(f"Validation payload {payload}")

validate_response = validate(token, build, payload)
logging.debug(f"Successfully sent package for validation using {payload}")
logger.info(f"Successfully sent package for validation using {payload}")
request_id = validate_response.json()["request_id"]

submit_response = submit(token, request_id)
logging.info("Successfully submitted and validated package")

logger.info("Successfully submitted and validated package")
submit_response_info = submit_response.json()["info"]
logger.info(f"Report info {submit_response_info}")
download_and_save_html_report(token, request_id, payload)

# if this is true it compares the exceptions and results
try:
parse_results(submit_response.json())
except AppinspectChecksFailuresException:
issues_in_response = False
if submit_response_info["error"] > 0 or submit_response_info["failure"] > 0:
issues_in_response = True

if issues_in_response:
logger.info("Detected errors / failures in response")
response_in_json = download_json_report(token, request_id, payload)
response_json = json.loads(response_in_json.content.decode("utf-8"))
yaml_file_path = Path(appinspect_expect_filename).absolute()

yaml_file_path = Path(APPINSPECT_EXPECT_FILENAME).absolute()
compare_against_known_failures(response_json, yaml_file_path)

