From 92efe4214d405d18749f5434931f15199ea5049c Mon Sep 17 00:00:00 2001
From: Clint Baxley
Date: Thu, 8 Aug 2024 09:43:02 -0400
Subject: [PATCH] #390 Install v2 pipeline (#392)
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

merge all lme 2.0 changes into release-2.0.0

## 🗣 Description ##

Add dashboard-descriptions.md in /docs/markdown/reference. Add a link to this file within the main README.md's table of contents.

### 💭 Motivation and context

The LME repository does not have a location for dashboard descriptions.

## 🧪 Testing

N/A

## ✅ Pre-approval checklist ##

- [x] Changes are limited to a single goal **AND** the title reflects this in a clear human readable format
- [x] I have read and agree to LME's [CONTRIBUTING.md](https://github.com/cisagov/LME/CONTRIBUTING.md) document.
- [x] The PR adheres to LME's requirements in [RELEASES.md](https://github.com/cisagov/LME/RELEASES.md#steps-to-submit-a-PR)
- [x] These code changes follow [cisagov code standards](https://github.com/cisagov/development-guide).
- [x] All relevant repo and/or project documentation has been updated to reflect the changes in this PR.

## ✅ Post-merge Checklist

- [x] Squash all commits into one PR level commit
- [x] Delete the branch to keep down number of branches

* Update README.md to include dashboard-descriptions.md
* Update wording for computer software overview dashboard
* Fix some grammatical errors in dashboard-descriptions.md
* Release 1.3.1 merge into main (#154)
* Update retention function to fix retention policy bug (#143)
* Updated troubleshooting guide to account for index management (#134)
* Update upgrading.md to account for 1.3.1 (#151)
* Update upgrading.md
* Update upgrading.md

---------

Co-authored-by: Andrew Arz <149685528+aarz-snl@users.noreply.github.com>

* Fixes dashboard update not importing on fresh install (#167) (#169)
* Fixes dashboard update not importing on fresh install #165
* Update upgrading.md to include status on v1.3.2, along with revisions to the document overall
* remove step 4 from upgrading.md; add additional instructions for v1.3.2

---------

Co-authored-by: Clint Baxley
Co-authored-by: Clint Baxley

* Add proof of concept selenium tests
* Correct the script name in the doc string
* User Security Selenium Tests for No Results Panels
* First full selenium test. Currently just User Security
* WIP User HR
* Completed all dashboards. Requires testing now
* Cut dev comments

Co-authored-by: Alden Hilton <106177711+adhilto@users.noreply.github.com>

* Debugging a couple unit tests that error out. Two left
* Install LME in the testbed from a single script (#150)
* Adding the configure scripts
* Add scripts to zip and copy to a container for downloading in the server
* Grab the expiry time properly in copy file
* Overwrite the blob if it exists
* Add the script to download file into DC
* Script that unzips the files in a container
* Adds username argument to download files
* Add script to run scripts in container
* Adds username argument to gpo script
* Modifies the url name in the client GPO
* Adds the functionality for chapter 1 and first half of chapter 2
* Imports the sysmon GPO
* Update the variables for sysmon gpo
* Name the scripts so they are grouped together in a listing
* Echoes the file download url
* Expands the domain name correctly in create ou
* Write the url output of copy file to container to a different output stream
* Create a new LME folder for our scripts and files
* Set path for extract to lme
* Update paths for scripts to /lme
* Fix the wec server name setting
* Adds the scripts to install chapter 1 and 2
* Allows azure to download in linux and windows
* Adds linux install scripts.
* Adds winlogbeat installer
* Remove garbage in update server name
* Tweak several scripts to get the scp of files_for_windows
* Adds installer script to run all the scripts
* Fixes the formatting method for az output
* Clean up the scripts and add documentation
* Fixes outputting format errors
* Fixes hanging on adding ls1 to domain
* Fix formatting errors on responses
* Update linux expect script for different prompts.
* Handle the reboot message for linux expect script
* Adds InstallTestbed instructions to Readme.md
* Modifies parameters to be pascal case
* ls1 not being set on DC1
* Adds Linux Only install to SetupTestbed
* Remove separate linux only script
* Update testing/Readme.md

Co-authored-by: Alden Hilton <106177711+adhilto@users.noreply.github.com>

* Make number of clients consistent between scripts
* Add ports for elk stack for testing
* Update readmes to change ResourceGroupName to ResourceGroup
* Adds a switch to install linux only
* Adds simple tests to check install
* Removes the error if the old configure zip is not found.
* Adds variables to linux tests run command
* Move credential extraction to lib for use by other scripts (see the sketch below).
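A minimal bash sketch of that shared credential-extraction routine. It assumes the install writes credentials into a log with lines shaped like `elastic password: <value>`; the label format, log path, and function name are illustrative, not LME's actual output format:

```bash
#!/usr/bin/env bash
# Hypothetical helper: pull a generated credential out of an install log so
# several test scripts can share one extraction routine instead of ad-hoc greps.
extract_credential() {
  local label="$1" logfile="${2:-output.log}"
  # Take the last matching line and return its final whitespace-separated field.
  grep -i "${label} password:" "$logfile" | tail -n 1 | awk '{print $NF}'
}

ELASTIC_PASS="$(extract_credential elastic /lme/output.log)"
echo "extracted elastic credential (${#ELASTIC_PASS} chars)"
```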
* Adds npm for other testing
* Adds latest version of nodejs for testing
* Make output.log readable for tests
* Add the -m parameter in the testing readme
* Download the latest version or a specified version
* Reboot for 1.3.0
* Notes that we could have different expect scripts
* Put back in the restart after all of the domain updates
* Scp uses ls1 instead of ls1.lme.local
* Up the timeout for adding ls1.lme.local
* Up the timeout for adding ls1.lme.local
* Fixes chmod of the output.log for tests
* Adds venv to the gitignore
* Adds the ability to pass a branch to the installer (see the sketch below)
* Remove node installer
* Change timeout in expect script for slow connections
* Make shell files executable

---------

Co-authored-by: Clint Baxley
Co-authored-by: Alden Hilton <106177711+adhilto@users.noreply.github.com>

* Fix deploy.sh data retention failure error (#190)
* Fix deploy.sh data retention failure (#179)
* Update deploy.sh
* Update deploy.sh
* Update deploy.sh
* Update deploy.sh
* Remove free (#188)
* changed the word free to no-cost or no-cost to users
* rephrased wording to 'which comes at no cost to users'

---------

Co-authored-by: Linda Lovero-Waterhouse

* Update upgrading.md with data retention failure resolution (#189)

---------

Co-authored-by: Andrew Arz <149685528+aarz-snl@users.noreply.github.com>
Co-authored-by: Linda Waterhouse <82845774+llwaterhouse@users.noreply.github.com>
Co-authored-by: Linda Lovero-Waterhouse

* Automatically Add Tags to Azure Resources (#186)
* Add tags to all Azure resource creation calls

---------

Co-authored-by: Clint Baxley

* Switched script to headless mode
* added switch for headless, detached, and debug mode. Bug where driver.quit does not close window.
* Refactored long line and added switch for debug mode
* Removed unnecessary comments
* Update pull_request_template.md (#198)
* Update pull_request_template.md

Moved Squash commits from post-merge to pre-merge.
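An illustrative bash sketch of "download the latest version or a specified version": accept an optional branch or tag argument and fall back to the newest release tag. The repository URL is real; the argument handling and clone path are assumptions:

```bash
#!/usr/bin/env bash
# Hypothetical installer front end: clone a requested branch/tag,
# or the latest release if none is given.
BRANCH="${1:-}"
if [ -z "$BRANCH" ]; then
  # Ask the GitHub API for the newest release tag.
  BRANCH="$(curl -s https://api.github.com/repos/cisagov/LME/releases/latest \
    | grep '"tag_name"' | cut -d '"' -f 4)"
fi
git clone --branch "$BRANCH" https://github.com/cisagov/LME.git /opt/lme-src
```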
* overriding default PR template for preferred LME template
* overriding default PR template for preferred LME template
* updating issue template to shorten the template

---------

Co-authored-by: mreeve-snl

* Python testbed setup (#183)
* Add simple tests for http requests
* Add an env file to gitignore
* Remove unneeded pip install
* Hide pytest_cache
* Add pycache to gitignore
* Adds dev containers for vscode
* Adds testing information for vscode
* Uses .env file for tests if present
* Adds env example file
* Modify development container name
* Adds readme for the testing environment
* Create helpers and conftest for python tests
* Setup for using test explorer in the dev environment
* Adding azure shell requirements to docker image
* Adding Python API tests
* Merges additional tests
* Made changes to fix tests that were failing
* Separate linux only tests from others
* Create a workflow for building test environments
* Make the docker user be the same as the vbox user id
* Set up to run the installer in docker
* Pick up different fs types in data_retention
* Change the build path for building lme container
* Install lme after build
* Make lme installer executable
* Set up the build for tests
* Add the cluster workflow for github actions

---------

Co-authored-by: Clint Baxley
Co-authored-by: Rishi

* Update PULL_REQUEST_TEMPLATE.md (#206)

Added instruction to select Issue in Development area so that the corresponding Issue is automatically closed when the PR is merged.

* Made changes to facilitate HTML Reports on test execution (#211) (see the sketch below)
* Made changes to requirements.txt, ReadMe and gitignore to facilitate HTML reporting
* Fixed Typos on Readme
* Fixed Typos on Readme
* removed tags flag from nsg because it was preventing some rules from being created (#214)

Co-authored-by: Linda Lovero-Waterhouse

* Update PULL_REQUEST_TEMPLATE.md (#217)

Using keywords like "fixes" or "closes" only auto-closes the corresponding issue if the PR is going to be merged into main. For PRs merged into release branches, we need to add the issue to the development box in the right sidebar in order to auto-close the issue. Added some documentation to clarify this.

* Create new workflow for automating the release process (#199)
* Github workflows for building environments (#195)
* Run the correct installer file
* Run the installer from the root directory
* Try a self-hosted github runner
* Reduce logging for docker pull.
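A sketch of the test-runner flow those commits describe, assuming KEY=VALUE pairs in `testing/tests/.env` and the `pytest-html` plugin; the plugin choice and report file name are assumptions, while the paths match the tree added by this patch:

```bash
#!/usr/bin/env bash
# Run the Python API tests, exporting .env overrides when the file exists
# and emitting a standalone HTML report of the run.
cd testing/tests
if [ -f .env ]; then
  set -a          # export everything sourced below
  . ./.env
  set +a
fi
python3 -m venv venv && . venv/bin/activate
pip install -r requirements.txt
pytest api_tests/ --html=report.html --self-contained-html
```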
* Adds quiet flag to docker pull command
* Pull the images before expect to reduce run time
* Install docker early in order to speed up install
* Builds the right docker-compose file
* Increase timeout for linux install expect script
* Change timeout on expect script
* Change the way expect watches the script
* Expand the timeout when waiting for Elasticsearch
* Search for more output in the expect script
* Change the match for the dots in expect
* Change the regex for matching dots
* Change the output for catching dots
* Add chrome to Dockerfile for selenium
* Import selenium tests and run python tests
* Activate venv when running tests
* Correct path for venv in the container
* Correct path for venv in the container
* Running only linux tests
* Adjust scripts to run as a non super user
* Change the permissions on the output log to source for environment variables later
* Check for output log
* Make output log available to test instantiation
* Change pytest cache dir to home for user
* Change pytest cache dir to home for user
* Change pytest cache dir permissions
* Hide get-docker.sh from installs
* Cleanup test files in workflow
* Add the cluster workflow for github actions
* Adds a cluster build
* Run the test cluster in pwsh
* Fail pipeline when commands fail
* Catch the error from powershell
* Remove duplicate run command
* Set env vars explicitly
* Modify the escape char for env vars
* Try a different method of catching errors in pwsh script
* Check failure of pwsh script
* Test successful run of build_cluster
* Test failure of script
* Capture the output from the az commands
* Continue on error condition
* Simplify run command
* Try catching failures in a new way.
* Test failure capture
* Setting error action to continue
* Remove ErrorAction
* Use docker-compose run instead
* Capture exit code to fail step (see the sketch below)
* Try propagating errors from pwsh
* Capture external command exit code
* Send lastexitcode
* Don't exit right away
* Disable immediate stop on exit
* Run simple test for exit code
* Cd to docker compose file
* Catch exec exit code
* Remove unneeded flags from the command
* Adds back in the build script
* Adds an explicit exit for powershell script
* Remove spaces after escape character
* Escape the exitcode variable in the shell command
* Remove extra exit from build_cluster.ps1
* Add a passing command for build_cluster.ps1
* Move to the install directory
* Run setup testbed to get an error
* Try to build a cluster with the build_cluster.ps1 script.
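A bash sketch of the exit-code plumbing this stretch of the log wrestles with: run the PowerShell build script, keep its output visible, and fail the CI step with the real exit status instead of letting it vanish inside a pipe. `build_cluster.ps1` comes from the log above; the log-file name is an assumption:

```bash
#!/usr/bin/env bash
# Propagate a PowerShell script's failure out of a piped command.
set -o pipefail                      # pipeline status = first failing command
pwsh -File ./build_cluster.ps1 2>&1 | tee build.log
status=$?
if [ "$status" -ne 0 ]; then
  echo "build_cluster.ps1 failed with exit code $status" >&2
  exit "$status"                     # fail the CI step explicitly
fi
```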
* Check resource group variable
* Set the resource group name differently
* Build a cluster using the generated resource group
* Make the paths relative in the build_cluster script
* Move to the right directory to do an install
* Destroy cluster on pipeline finish
* Change the owner of the files to match the host in the development container
* Su user to remove testing files
* Run the docker-compose as root to clean up
* Run as root to clean up containers
* Build the cluster in azure
* List the files in the current directory on exec
* Run the files from the new path
* Investigate more about the file environment
* Update the environment for building the cluster
* Update the environment users before docker up
* Try to start hung job
* List all the files with their owners in the container
* Escape the powershell commands
* Check the paths and files with bash
* Find the path we are on
* Check powershell environment
* Cd to home directory in powershell
* Cd to home directory in powershell
* Rebuild docker compose as the right user
* Change directory to source directory for powershell
* Change to proper directory for powershell
* Build a full cluster in pipeline
* Run the linux tests and check permissions of files
* Change permissions on output file with sudo
* Turn off cluster creation for speed
* Comment out building cluster in steps
* Only delete the resource group if it exists
* Adds ability to get the public ip for fw rules
* Put the tags in quotes when creating nsg rules
* Output the command being run for nsg rules
* Remove tags for nsg port definitions
* Install lme on the cluster
* Builds the full cluster install
* Cleans up the usage of the environment variables in pipeline
* Extract environment variables from the build script and use them in the GitHub workflow (see the sketch below).
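A minimal sketch of handing values from a build script to later workflow steps via the `$GITHUB_ENV` file, which is GitHub Actions' supported mechanism for this; the `cluster.env` file name and its KEY=VALUE format are assumptions:

```bash
#!/usr/bin/env bash
# Each KEY=VALUE line appended to $GITHUB_ENV becomes an environment
# variable in every subsequent step of the same job.
while IFS='=' read -r key value; do
  [ -n "$key" ] && echo "${key}=${value}" >> "$GITHUB_ENV"
done < cluster.env
```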
* Do a minimal linux install
* Fix the path for retrieving env vars
* Check setting of github env
* Source the env file and push it to github env
* Print some debug information to the console
* Check setting of each key in functions
* Parse the output for the passwords better
* Uses a unique id instead of run_id to make sure it is unique
* Double quote the file name for sed in output.log
* Changes the way we get passwords from output.log
* Make sure key doesn't have newline
* Escape dollar sign
* Properly escape double quotes inside of docker-compose command
* Escape all of the dollar signs in the compose command
* Write the environment variables to the GitHub environment
* Clean up debugging output
* Remove more debugging output
* Remove set e
* Adds function to write passwords to a file for actions
* List files in directory after writing passwords
* Export the env vars in the github file
* Fail the workflow if the environment is not set correctly
* Clean up the environment vars for the container
* Set the variables on run of the pwsh command
* Run commands on the domain controller
* Get the environment checker to pass
* Update passing variables to remote script
* Escape the powershell environment variables
* Change the case of the resource group env var
* Don't destroy cluster so we can manually test
* Build the entire cluster to run commands against
* Run a command on the linux machine
* Run remote tests
* Run minimal installs to debug tests
* Fix escaping for test commands
* Move to the correct directory for tests
* Add continuation characters to the lines in the script
* Remove nested double quotes
* Uses the ip of LS1 to run the tests on
* Put the cluster build command on one line
* Destroy clusters at the end
* Quote output log correctly on build
* Run all api tests on cluster
* Build full cluster and add verbose logging to pytest
* Stop deleting the cluster in the destroy_cluster.ps1 script
* Modify installer to use the new winlogbeat index pattern
* Try to get the dns to resolve ls1
* Add ls1 to the hosts file so it resolves always (see the sketch below)
* Modify tests to pass on a working cluster
* Skip the fragile test for mapping
* Set up to run selenium tests on the cluster
* Testing
* Rerun build after rebasing to the right branch
* Pass the minimal install flag to install lme
* Build complete cluster and run all tests
* Pull the images quietly if running without a terminal.
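A sketch of that hosts-file fix: pin `ls1` to a known address so tests resolve it even when DNS lags behind the build. The `LS1_IP` variable mirrors the workflow environment variable used later in this patch; everything else is an assumption:

```bash
#!/usr/bin/env bash
# Pin ls1 in /etc/hosts if it is not already there.
LS1_IP="${LS1_IP:?LS1_IP must be set by the pipeline}"
if ! grep -q 'ls1' /etc/hosts; then
  echo "${LS1_IP} ls1 ls1.lme.local" | sudo tee -a /etc/hosts
fi
```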
* Run the simple tests on PR check-in and the longer ones when triggered
* Build the linux docker container upon check-in of a PR
* Build lme container fresh before install
* Runs an end-to-end build in docker and cluster
* Print out the download feedback when pulling images
* Build 1.4.0 branch
* Build the cluster using the main branch of the repository
* Allow passing branch to installers from the pipeline
* Run tests from a different base branch
* Remove the ampersand typo
* Allow passing arguments to the installer scripts
* Rearrange install arguments
* Test passing arguments in install lme
* Build lme without arguments
* Install lme with no arguments
* Run command as string in install_lme.ps1
* Build by passing arguments
* Run a complete build using arguments
* Update the sources to allow for updating in the pipeline
* Build the cluster using the latest branch
* Set up the latest branch var
* Runs an upgrade in the pipeline
* Run the upgrade in the remote linux machine
* Run upgrade on minimal install
* Checks out the current branch to run an upgrade on linux
* Capture the exit code of the upgrade script (see the sketch below)
* Check the directories we are working in
* Clone the git repository to run the upgrade
* Checkout the proper branch from origin
* Get the remote username and home dir for the remote server
* Set the home directory for the az user
* Use origin when checking out in the upgrade script
* Revert the changes to deploy.sh
* Set a dumb terminal to avoid terminal errors
* Export the terminal variable correctly
* Capture the output of the upgrade script to fail pipeline if it fails
* Revert previous changes as they seemed to break upgrade
* Use a different format for executing the pwsh script
* Destroy the cluster when done
* Output the upgrade information to the terminal
* Try capturing the docker-compose output
* Directly capture the output of the compose command
* Fixes unbalanced quote
* Build and run full cluster with an upgrade
* Builds the current branch for the cluster
* Add a unique id for the docker-compose so you can run multiple instances of the same docker-compose file
* Adds upgrade.yml to gh workflows
* Runs both a build and an upgrade
* Adds upgrade to the gh workflows
* Get gh to notice new workflow
* Match build names to parent branch
* Trigger gh to see the workflow
* Get gh actions to trigger workflow
* Update code to get gh to see the actions
* Update code to use the new workflow module.
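A sketch of the remote-upgrade pattern described above: run the upgrade over SSH with a dumb terminal (sidestepping the TTY control-sequence errors the log mentions) and let the remote exit code decide whether the pipeline fails. The user name, host variable, and checkout path are placeholders; `upgrade_lme.sh` matches the file added in this patch:

```bash
#!/usr/bin/env bash
# Run the upgrade on the remote Linux box and propagate its exit status.
ssh -o StrictHostKeyChecking=no "azureuser@${LS1_IP}" \
  'export TERM=dumb; cd ~/LME && ./testing/development/upgrade_lme.sh'
status=$?
echo "remote upgrade finished with exit code ${status}"
exit "$status"
```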
* Trigger gh actions to run
* Get gh to run workflows
* Try to get gh to run workflows
* Change upgrade branch pulling
* Checking out branch for upgrade in a new way
* Rename workflow for upgrade
* Convert to docker compose
* Run all three builds using docker compose and -p (see the sketch below)
* Clean up docker containers
* Build the docker containers fresh for the linux_only workflow
* Adds readme and checks an upgrade where the upgrade version is the same as the current version
* Fixes typo in the workflow file
* Runs docker as sudo
* Remove the privileged flag from the lme container
* Try leaving the swarm on the host if running in non-privileged environment
* Leave the swarm on the host
* Reset to run docker as privileged
* Installs the current branch in linux only
* Stop pruning system to see if elastic starts faster
* Don't take down the docker containers to see why they aren't working
* Removes the gh actions shell escape vulnerability
* Remove the docker containers at end of run
* changing .github/README.md name to prevent it appearing on main web page (#260)
* Append the flags to the end of the password file (#263)
* Append the flags to the end of the password file
* Prints the contents of password.txt to the console
* Extract the credentials in a new way to compensate for the flags being in the file
* Tests a build that runs locally on github
* Keep container running for debugging purposes
* Fix the credentials parsing function
* Create a workflow for a burndown chart (#302)
* Display the chart in the burndown summary
* Get workflow dispatch to show
* Adds defaults for the burndown chart workflow
* Clean up debugging information from the workflow (#310)
* Clean up debugging information from the workflow
* Increase column count to match the number of columns in the board.
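The `-p` trick from those commits, sketched in bash: a random compose project name isolates concurrent runs of the same compose file on one runner, and teardown targets only this run's containers. The ID generation mirrors the `UNIQUE_ID` step in the workflows added later in this patch:

```bash
#!/usr/bin/env bash
# Give this run its own compose project so parallel jobs don't collide.
UNIQUE_ID="$(openssl rand -hex 3 | head -c 6)"
docker compose -p "$UNIQUE_ID" -f testing/development/docker-compose.yml up -d
# ... run tests against this run's containers here ...
docker compose -p "$UNIQUE_ID" -f testing/development/docker-compose.yml down
```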
* Break up selenium tests (#281)
* Adding selenium directory and readme
* Separate out the selenium tests so they can be run separately
* Run selenium tests in pipeline
* Puts the variables for env one to a line
* Issue #289 selenium test for Computer Software Overview dashboard (#290)
* Updated Selenium tests for Computer Overview Dashboard
* Updated Selenium tests for Computer Overview Dashboard
* Updated Selenium test scripts for Health Check Dashboard (#292)
* Set up selenium tests to run on cluster test
* Point tests to the proper test folder
* Update Selenium tests for Process Explorer Dashboard (#295)
* Rewrite completed for Selenium test scripts for Security Dashboard - Security Log (#300)
* Rewrote Selenium Tests for Sysmon Summary Dashboard (#301)
* Rewrite Selenium Tests for User HR Dashboard
* Rewrite of Selenium Tests for User Security Dashboard (#304)

---------

Co-authored-by: rishagg01 <149525835+rishagg01@users.noreply.github.com>
Co-authored-by: Rishi

* API calls code for Data Insertion (#343)

  modified: testing/tests/api_tests/helpers.py
  new file: testing/tests/api_tests/selenium_tests/__init__.py
  new file: testing/tests/api_tests/selenium_tests/conftest.py
  new file: testing/tests/api_tests/selenium_tests/fixtures/hosts.json
  new file: testing/tests/api_tests/selenium_tests/fixtures/logonevents.json
  new file: testing/tests/api_tests/selenium_tests/queries/filter_hosts.json
  new file: testing/tests/api_tests/selenium_tests/queries/filter_logonevents.json
  new file: testing/tests/api_tests/selenium_tests/test_server.py

* commit

  renamed: testing/tests/api_tests/selenium_tests/__init__.py -> testing/tests/api_tests/data_insertion_tests/__init__.py
  renamed: testing/tests/api_tests/selenium_tests/conftest.py -> testing/tests/api_tests/data_insertion_tests/conftest.py
  renamed: testing/tests/api_tests/selenium_tests/fixtures/hosts.json -> testing/tests/api_tests/data_insertion_tests/fixtures/hosts.json
  renamed: testing/tests/api_tests/selenium_tests/fixtures/logonevents.json -> testing/tests/api_tests/data_insertion_tests/fixtures/logonevents.json
  renamed: testing/tests/api_tests/selenium_tests/queries/filter_hosts.json -> testing/tests/api_tests/data_insertion_tests/queries/filter_hosts.json
  renamed: testing/tests/api_tests/selenium_tests/queries/filter_logonevents.json -> testing/tests/api_tests/data_insertion_tests/queries/filter_logonevents.json
  renamed: testing/tests/api_tests/selenium_tests/test_server.py -> testing/tests/api_tests/data_insertion_tests/test_server.py
  modified: testing/tests/api_tests/helpers.py

* Updated selenium tests for USER HR dashboard panels post data insertion (#358)
* adding ignore for vim files
* moving old readme to old_chapters directory
* moving chapters to old_chapters folder
* Committing Readme changes and updates and removing old backups directory
* Adding Configuration files for lme 2.0
* Adding Ansible Playbook Yaml for installing lme 2.0
* Committing Quadlet files for LME 2.0 arch
* Adding Scripts:
  - download.sh/upload.sh: upload/download logs in bulk from elasticsearch (will be integrated into future merging from 1 -> 2)
  - link_latest_podman_quadlet.sh: links from the nix store the latest podman version into its expected directories
  - set-fleet.sh: sets up the required fleet settings on kibana
  - set_sysctl_limits.sh: sets the sysctl limits as required by the architecture and containers (see the sketch below)
  - install_lme_local.yml: sets up the ansible playbook for lme 2.0 installation.
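A minimal sketch of what a script like `set_sysctl_limits.sh` needs to do for an Elasticsearch-based stack. The exact values LME applies may differ; `vm.max_map_count >= 262144` is Elasticsearch's documented requirement, and the drop-in file name is an assumption:

```bash
#!/usr/bin/env bash
# Raise kernel limits the containers depend on, then persist them.
sudo sysctl -w vm.max_map_count=262144   # required by Elasticsearch
sudo sysctl -w fs.file-max=65536         # generous open-file ceiling
echo 'vm.max_map_count=262144' | sudo tee /etc/sysctl.d/99-lme.conf
sudo sysctl --system                     # reload all sysctl config files
```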
* move lme playbook to scripts directory
* pushing some more documentation to Readme
* initial diagram
* pushing updates to Readme to document ports/services/etc...
* Updated User HR Dashboard Selenium Test for User HR Logon Title panel (#385)
* Updated selenium tests for USER HR dashboard panels post data insertion
* Updated User HR Dashboard Selenium Test for User HR Logon Title panel
* Merge in the pipeline files
* Adds in the testing installers
* Updates the paths to the LME install scripts
* Make the user create the environment file before doing install
* Make the lme-environment file so the install succeeds
* Adding pre-reqs to main testing/v2 readme
* Add some extra detail to the readme.
* Associate the nsg with the public ip
* Associate the nic instead of ip to the nsg
* Change default ports for nsg
* Update Caddyfile to include access log
* Adds back in some files from Chapter 3

---------

Co-authored-by: mitchelbaker-cisa <149098823+mitchelbaker-cisa@users.noreply.github.com>
Co-authored-by: Andrew Arz <149685528+aarz-snl@users.noreply.github.com>
Co-authored-by: Clint Baxley
Co-authored-by: Alden Hilton
Co-authored-by: unknown
Co-authored-by: Grant (SNL) <108766839+rgbrow1949@users.noreply.github.com>
Co-authored-by: Alden Hilton <106177711+adhilto@users.noreply.github.com>
Co-authored-by: Linda Waterhouse <82845774+llwaterhouse@users.noreply.github.com>
Co-authored-by: Linda Lovero-Waterhouse
Co-authored-by: Brown
Co-authored-by: mreeve-snl
Co-authored-by: Rishi
Co-authored-by: rishagg01 <149525835+rishagg01@users.noreply.github.com>
Co-authored-by: Connor <107427279+causand22@users.noreply.github.com>

--- .../python_development/devcontainer.json | 19 + .devcontainer/python_tests/devcontainer.json | 18 + .github/ISSUE_TEMPLATE/bug-or-error-report.md | 19 +- ...t_template.md => PULL_REQUEST_TEMPLATE.md} | 6 +- .github/README-github.md | 1 + .github/changelog-configuration.json | 22 + .github/workflows/build_release.yaml | 49 + .github/workflows/burndown_chart.yml | 100 + .github/workflows/cluster.yml | 278 + .github/workflows/linux_only.yml | 123 + .github/workflows/main.yml | 27 +- .github/workflows/upgrade.yml | 300 + .gitignore | 19 +- .../Group Policy Objects/manifest.xml | 0 .../Backup.xml | 0 .../Machine/Preferences/Services/Services.xml | 0 .../DomainSysvol/GPO/Machine/comment.cmtx | 0 .../microsoft/windows nt/SecEdit/GptTmpl.inf | Bin .../DomainSysvol/GPO/Machine/registry.pol | Bin .../bkupInfo.xml | 0 .../gpreport.xml | Bin .../Backup.xml | 0 .../GPO/Machine/Preferences/Groups/Groups.xml | 0 .../Machine/Preferences/Services/Services.xml | 0 .../DomainSysvol/GPO/Machine/comment.cmtx | 0 .../microsoft/windows nt/SecEdit/GptTmpl.inf | Bin .../DomainSysvol/GPO/Machine/registry.pol | Bin .../bkupInfo.xml | 0 .../gpreport.xml | Bin .../Chapter 1 Files}/lme_gpo_for_windows.zip | Bin .../Chapter 1 Files}/lme_wec_config.xml | 0 .../Group Policy Objects/manifest.xml | 0 .../Backup.xml | 0 .../DomainSysvol/GPO/GPO.cmt | Bin .../ScheduledTasks/ScheduledTasks.xml | 0 .../bkupInfo.xml | 0 .../gpreport.xml | Bin .../GPO Deployment/sysmon_gpo.zip | Bin .../Group Policy Objects/manifest.xml | 0 .../Backup.xml | 0 .../DomainSysvol/GPO/GPO.cmt | Bin .../ScheduledTasks/ScheduledTasks.xml | 0 .../bkupInfo.xml | 0 .../gpreport.xml | Bin .../GPO Deployment/update.bat | 0 .../SCCM Deployment/Install_Sysmon64.ps1 | 0 .../SCCM Deployment/Uninstall_Sysmon64.ps1 | 0 .../Chapter 3 Files}/.gitignore | 0 .../Chapter 3 Files}/dashboard_update.sh | 2 +- .../Chapter 3 Files}/deploy.sh | 63 +- .../Chapter 3
Files}/docker-compose-stack.yml | 0 .../Chapter 3 Files}/lme_update.sh | 0 .../Chapter 3 Files}/logstash.conf | 0 .../winlog-index-mapping.json | 0 .../Chapter 3 Files}/winlogbeat.yml | 0 .../Healthcheckoverview_dashboard.ndjson | 0 .../Chapter 4 Files}/dashboards/Readme.md | 0 .../dashboards/alerting_dashboard.ndjson | 0 .../computer_software_overview.ndjson | 0 .../dashboards/process_explorer.ndjson | 0 .../security_dashboard_security_log.ndjson | 0 .../dashboards/sysmon_summary.ndjson | 0 .../dashboards/user_hr.ndjson | 0 .../dashboards/user_security.ndjson | 0 .../Chapter 4 Files}/export_dashboards.py | 0 .../Chapter 4 Files}/requirements.txt | 0 OLD_CHAPTERS/README.md | 76 + README.md | 390 +- config/caddy/Caddyfile | 22 + config/containers.txt | 5 + config/example.env | 95 + config/kibana.yml | 17 + config/setup/acct-init.sh | 17 + config/setup/init-setup.sh | 29 + config/setup/instances.yml | 51 + config/wazuh_cluster/wazuh_manager.conf | 385 + docs/markdown/chapter3/chapter3.md | 2 +- docs/markdown/maintenance/upgrading.md | 57 +- docs/markdown/prerequisites.md | 2 +- .../reference/dashboard-descriptions.md | 40 + docs/markdown/reference/troubleshooting.md | 59 + quadlet/lme-caddy.container | 22 + quadlet/lme-elasticsearch.container | 28 + quadlet/lme-fleet-server.container | 25 + quadlet/lme-kibana.container | 29 + quadlet/lme-setup-accts.container | 24 + quadlet/lme-setup-certs.container | 24 + quadlet/lme-wazuh-manager.container | 47 + quadlet/lme.network | 7 + quadlet/lme.service | 16 + scripts/download.sh | 36 + scripts/gen_cert.sh | 29 + scripts/install_lme_local.yml | 216 + scripts/link_latest_podman_quadlet.sh | 29 + scripts/set-fleet.sh | 22 + scripts/set_sysctl_limits.sh | 63 + scripts/upload.sh | 33 + testing/InstallTestbed.ps1 | 402 + testing/Readme.md | 67 +- testing/SetupTestbed.ps1 | 508 +- .../azure_scripts/copy_file_to_container.ps1 | 81 + .../azure_scripts/create_blob_container.ps1 | 101 + .../azure_scripts/download_in_container.ps1 | 106 + .../azure_scripts/extract_archive.ps1 | 90 + .../azure_scripts/lib/utilityFunctions.ps1 | 143 + .../azure_scripts/run_script_in_container.ps1 | 59 + .../azure_scripts/zip_my_parents_parent.ps1 | 34 + testing/configure/chown_dc1_private_key.ps1 | 21 + testing/configure/create_lme_directory.ps1 | 27 + testing/configure/create_ou.ps1 | 23 + testing/configure/download_files.ps1 | 23 + testing/configure/install_chapter_1.ps1 | 65 + testing/configure/install_chapter_2.ps1 | 28 + testing/configure/lib/functions.sh | 47 + .../configure/linux_authorize_private_key.sh | 4 + testing/configure/linux_install_lme.exp | 81 + testing/configure/linux_install_lme.sh | 111 + testing/configure/linux_make_private_key.exp | 16 + testing/configure/linux_test_install.sh | 119 + testing/configure/linux_update_system.sh | 3 + .../list_computers_forwarding_events.ps1 | 27 + testing/configure/move_computers_to_ou.ps1 | 38 + testing/configure/sysmon_gpo_update_vars.ps1 | 43 + testing/configure/sysmon_import_gpo.ps1 | 34 + .../configure/sysmon_install_in_sysvol.ps1 | 69 + testing/configure/sysmon_link_gpo.ps1 | 18 + testing/configure/trust_ls1_ssh_key.ps1 | 66 + testing/configure/wec_firewall.ps1 | 18 + .../configure/wec_gpo_update_server_name.ps1 | 42 + testing/configure/wec_import_gpo.ps1 | 34 + testing/configure/wec_link_gpo.ps1 | 27 + testing/configure/wec_service_provisioner.ps1 | 24 + testing/configure/wec_start_service.ps1 | 19 + testing/configure/winlogbeat_install.ps1 | 84 + testing/development/Dockerfile | 62 + testing/development/README.md | 162 
+ testing/development/build_cluster.ps1 | 18 + .../development/build_docker_lme_install.sh | 46 + testing/development/destroy_cluster.ps1 | 18 + testing/development/docker-compose.yml | 56 + testing/development/install_lme.ps1 | 40 + testing/development/upgrade_lme.sh | 30 + testing/merging_version.sh | 2 + testing/project_management/Dockerfile | 20 + testing/project_management/docker-compose.yml | 10 + testing/project_management/setup_config.sh | 71 + testing/tests/.env_example | 19 + testing/tests/.vscode/launch.json | 16 + testing/tests/.vscode/settings.json | 7 + testing/tests/Dockerfile | 22 + testing/tests/README.md | 265 + .../tests/api_tests/__init__.py | 0 .../data_insertion_tests/__init__.py | 0 .../data_insertion_tests/conftest.py | 37 + .../data_insertion_tests/fixtures/hosts.json | 29 + .../fixtures/logonevents.json | 38 + .../queries/filter_hosts.json | 287 + .../queries/filter_logonevents.json | 127 + .../data_insertion_tests/test_server.py | 55 + testing/tests/api_tests/helpers.py | 103 + .../tests/api_tests/linux_only/__init__.py | 0 .../tests/api_tests/linux_only/conftest.py | 37 + .../api_tests/linux_only/schemas/es_root.json | 68 + .../linux_only/test_data/response.json | 17 + .../tests/api_tests/linux_only/test_server.py | 101 + .../tests/api_tests/winlogbeat/__init__.py | 0 .../tests/api_tests/winlogbeat/conftest.py | 37 + .../winlogbeat/schemas/winlogbeat_search.json | 959 +++ .../test_data/mapping_datafields.txt | 492 ++ .../test_data/mapping_response.json | 7379 +++++++++++++++++ .../test_data/mapping_response_actual.json | 7376 ++++++++++++++++ .../test_data/winlog_search_data.json | 86 + .../tests/api_tests/winlogbeat/test_server.py | 111 + testing/tests/docker-compose.yml | 9 + testing/tests/requirements.txt | 21 + testing/tests/selenium_tests.py | 636 ++ .../tests/selenium_tests/Old/dashboards.py | 334 + .../selenium_tests/Old/dashboards_cluster.py | 784 ++ .../tests/selenium_tests/cluster/__init__.py | 0 .../tests/selenium_tests/cluster/conftest.py | 92 + testing/tests/selenium_tests/cluster/lib.py | 41 + ...st_computer_software_overview_dashboard.py | 38 + .../cluster/test_health_check_dashboard.py | 42 + .../test_process_explorer_dashboard.py | 53 + .../test_security_dashboard_security_log.py | 98 + .../cluster/test_sysmon_summary_dashboard.py | 48 + .../cluster/test_user_h_r_dashboard.py | 65 + .../cluster/test_user_security_dashboard.py | 180 + .../selenium_tests/linux_only/conftest.py | 93 + .../selenium_tests/linux_only/move_tests.sh | 48 + .../linux_only/test_basic_loading.py | 40 + ...computer_software_overview_dashboard_lo.py | 39 + .../test_health_check_dashboard_lo.py | 24 + ...test_security_dashboard_security_log_lo.py | 65 + .../test_sysmon_summary_dashboard_lo.py | 39 + .../linux_only/test_user_h_r_dashboard_lo.py | 78 + .../test_user_security_dashboard_lo.py | 91 + testing/v2/development/Dockerfile | 64 + testing/v2/development/docker-compose.yml | 26 + testing/v2/installers/README.md | 15 + .../azure/build_azure_linux_network.md | 136 + .../azure/build_azure_linux_network.py | 624 ++ ...build_azure_linux_network_requirements.txt | 6 + testing/v2/installers/install_v2/install.sh | 42 + .../install_v2/install_in_minimega.sh | 69 + testing/v2/installers/lib/copy_ssh_key.sh | 31 + testing/v2/installers/minimega/README.md | 67 + .../v2/installers/minimega/check_dpkg_lock.sh | 31 + .../v2/installers/minimega/copy_ssh_key.sh | 31 + .../v2/installers/minimega/create_bridge.sh | 4 + testing/v2/installers/minimega/fix_dnsmasq.sh | 3 + 
testing/v2/installers/minimega/install.sh | 77 + .../v2/installers/minimega/install_local.sh | 29 + .../v2/installers/minimega/minimega.service | 11 + .../v2/installers/minimega/miniweb.service | 11 + testing/v2/installers/minimega/set_gopath.sh | 11 + .../v2/installers/minimega/update_packages.sh | 48 + .../v2/installers/ubuntu_qcow_maker/README.md | 94 + .../ubuntu_qcow_maker/clear_cloud_config.sh | 77 + .../ubuntu_qcow_maker/create_tap.sh | 26 + .../ubuntu_qcow_maker/create_ubuntu_qcow.sh | 152 + .../ubuntu_qcow_maker/create_vm_from_qcow.sh | 101 + .../ubuntu_qcow_maker/get_ip_of_machine.sh | 25 + .../installers/ubuntu_qcow_maker/install.sh | 62 + .../installers/ubuntu_qcow_maker/iptables.sh | 47 + .../ubuntu_qcow_maker/launch_multiple_vms.sh | 23 + .../ubuntu_qcow_maker/remove_test_files.sh | 6 + .../installers/ubuntu_qcow_maker/resize_fs.sh | 7 + .../ubuntu_qcow_maker/resize_qcow.sh | 70 + .../ubuntu_qcow_maker/setup_dnsmasq.sh | 48 + .../ubuntu_qcow_maker/ubuntu-runner.mm | 9 + .../ubuntu_qcow_maker/wait_for_login.sh | 60 + 232 files changed, 28587 insertions(+), 362 deletions(-) create mode 100644 .devcontainer/python_development/devcontainer.json create mode 100644 .devcontainer/python_tests/devcontainer.json rename .github/{PULL_REQUEST_TEMPLATE/pull_request_template.md => PULL_REQUEST_TEMPLATE.md} (81%) create mode 100644 .github/README-github.md create mode 100644 .github/changelog-configuration.json create mode 100644 .github/workflows/build_release.yaml create mode 100644 .github/workflows/burndown_chart.yml create mode 100644 .github/workflows/cluster.yml create mode 100644 .github/workflows/linux_only.yml create mode 100644 .github/workflows/upgrade.yml rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/manifest.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/Backup.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/comment.cmtx (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/registry.pol (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/bkupInfo.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/gpreport.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/Backup.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Groups/Groups.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/comment.cmtx 
(100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/registry.pol (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/bkupInfo.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/gpreport.xml (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/lme_gpo_for_windows.zip (100%) rename {Chapter 1 Files => OLD_CHAPTERS/Chapter 1 Files}/lme_wec_config.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/Group Policy Objects/manifest.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/sysmon_gpo.zip (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/sysmon_gpo/Group Policy Objects/manifest.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/GPO Deployment/update.bat (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/SCCM Deployment/Install_Sysmon64.ps1 (100%) rename {Chapter 2 Files => OLD_CHAPTERS/Chapter 2 Files}/SCCM Deployment/Uninstall_Sysmon64.ps1 (100%) rename {Chapter 3 Files => OLD_CHAPTERS/Chapter 3 Files}/.gitignore (100%) rename {Chapter 3 Files => OLD_CHAPTERS/Chapter 3 Files}/dashboard_update.sh (93%) rename {Chapter 3 Files => OLD_CHAPTERS/Chapter 3 Files}/deploy.sh (96%) rename {Chapter 3 Files => OLD_CHAPTERS/Chapter 3 Files}/docker-compose-stack.yml (100%) rename {Chapter 3 Files => OLD_CHAPTERS/Chapter 3 Files}/lme_update.sh (100%) rename {Chapter 
3 Files => OLD_CHAPTERS/Chapter 3 Files}/logstash.conf (100%) rename {Chapter 3 Files => OLD_CHAPTERS/Chapter 3 Files}/winlog-index-mapping.json (100%) rename {Chapter 3 Files => OLD_CHAPTERS/Chapter 3 Files}/winlogbeat.yml (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/dashboards/Healthcheckoverview_dashboard.ndjson (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/dashboards/Readme.md (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/dashboards/alerting_dashboard.ndjson (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/dashboards/computer_software_overview.ndjson (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/dashboards/process_explorer.ndjson (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/dashboards/security_dashboard_security_log.ndjson (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/dashboards/sysmon_summary.ndjson (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/dashboards/user_hr.ndjson (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/dashboards/user_security.ndjson (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/export_dashboards.py (100%) rename {Chapter 4 Files => OLD_CHAPTERS/Chapter 4 Files}/requirements.txt (100%) create mode 100644 OLD_CHAPTERS/README.md create mode 100644 config/caddy/Caddyfile create mode 100644 config/containers.txt create mode 100644 config/example.env create mode 100644 config/kibana.yml create mode 100644 config/setup/acct-init.sh create mode 100644 config/setup/init-setup.sh create mode 100644 config/setup/instances.yml create mode 100644 config/wazuh_cluster/wazuh_manager.conf create mode 100644 docs/markdown/reference/dashboard-descriptions.md create mode 100644 quadlet/lme-caddy.container create mode 100644 quadlet/lme-elasticsearch.container create mode 100644 quadlet/lme-fleet-server.container create mode 100644 quadlet/lme-kibana.container create mode 100644 quadlet/lme-setup-accts.container create mode 100644 quadlet/lme-setup-certs.container create mode 100644 quadlet/lme-wazuh-manager.container create mode 100644 quadlet/lme.network create mode 100644 quadlet/lme.service create mode 100755 scripts/download.sh create mode 100755 scripts/gen_cert.sh create mode 100644 scripts/install_lme_local.yml create mode 100755 scripts/link_latest_podman_quadlet.sh create mode 100755 scripts/set-fleet.sh create mode 100755 scripts/set_sysctl_limits.sh create mode 100755 scripts/upload.sh create mode 100644 testing/InstallTestbed.ps1 create mode 100644 testing/configure/azure_scripts/copy_file_to_container.ps1 create mode 100644 testing/configure/azure_scripts/create_blob_container.ps1 create mode 100644 testing/configure/azure_scripts/download_in_container.ps1 create mode 100644 testing/configure/azure_scripts/extract_archive.ps1 create mode 100644 testing/configure/azure_scripts/lib/utilityFunctions.ps1 create mode 100644 testing/configure/azure_scripts/run_script_in_container.ps1 create mode 100644 testing/configure/azure_scripts/zip_my_parents_parent.ps1 create mode 100644 testing/configure/chown_dc1_private_key.ps1 create mode 100644 testing/configure/create_lme_directory.ps1 create mode 100644 testing/configure/create_ou.ps1 create mode 100644 testing/configure/download_files.ps1 create mode 100644 testing/configure/install_chapter_1.ps1 create mode 100644 testing/configure/install_chapter_2.ps1 create mode 100644 testing/configure/lib/functions.sh create mode 100755 
testing/configure/linux_authorize_private_key.sh create mode 100755 testing/configure/linux_install_lme.exp create mode 100755 testing/configure/linux_install_lme.sh create mode 100755 testing/configure/linux_make_private_key.exp create mode 100755 testing/configure/linux_test_install.sh create mode 100755 testing/configure/linux_update_system.sh create mode 100644 testing/configure/list_computers_forwarding_events.ps1 create mode 100644 testing/configure/move_computers_to_ou.ps1 create mode 100644 testing/configure/sysmon_gpo_update_vars.ps1 create mode 100644 testing/configure/sysmon_import_gpo.ps1 create mode 100644 testing/configure/sysmon_install_in_sysvol.ps1 create mode 100644 testing/configure/sysmon_link_gpo.ps1 create mode 100644 testing/configure/trust_ls1_ssh_key.ps1 create mode 100644 testing/configure/wec_firewall.ps1 create mode 100644 testing/configure/wec_gpo_update_server_name.ps1 create mode 100644 testing/configure/wec_import_gpo.ps1 create mode 100644 testing/configure/wec_link_gpo.ps1 create mode 100644 testing/configure/wec_service_provisioner.ps1 create mode 100644 testing/configure/wec_start_service.ps1 create mode 100644 testing/configure/winlogbeat_install.ps1 create mode 100644 testing/development/Dockerfile create mode 100644 testing/development/README.md create mode 100644 testing/development/build_cluster.ps1 create mode 100755 testing/development/build_docker_lme_install.sh create mode 100644 testing/development/destroy_cluster.ps1 create mode 100644 testing/development/docker-compose.yml create mode 100644 testing/development/install_lme.ps1 create mode 100755 testing/development/upgrade_lme.sh create mode 100644 testing/merging_version.sh create mode 100644 testing/project_management/Dockerfile create mode 100644 testing/project_management/docker-compose.yml create mode 100755 testing/project_management/setup_config.sh create mode 100644 testing/tests/.env_example create mode 100644 testing/tests/.vscode/launch.json create mode 100644 testing/tests/.vscode/settings.json create mode 100644 testing/tests/Dockerfile create mode 100644 testing/tests/README.md rename backups/.gitkeep => testing/tests/api_tests/__init__.py (100%) create mode 100644 testing/tests/api_tests/data_insertion_tests/__init__.py create mode 100644 testing/tests/api_tests/data_insertion_tests/conftest.py create mode 100644 testing/tests/api_tests/data_insertion_tests/fixtures/hosts.json create mode 100644 testing/tests/api_tests/data_insertion_tests/fixtures/logonevents.json create mode 100644 testing/tests/api_tests/data_insertion_tests/queries/filter_hosts.json create mode 100644 testing/tests/api_tests/data_insertion_tests/queries/filter_logonevents.json create mode 100644 testing/tests/api_tests/data_insertion_tests/test_server.py create mode 100644 testing/tests/api_tests/helpers.py create mode 100644 testing/tests/api_tests/linux_only/__init__.py create mode 100644 testing/tests/api_tests/linux_only/conftest.py create mode 100644 testing/tests/api_tests/linux_only/schemas/es_root.json create mode 100644 testing/tests/api_tests/linux_only/test_data/response.json create mode 100644 testing/tests/api_tests/linux_only/test_server.py create mode 100644 testing/tests/api_tests/winlogbeat/__init__.py create mode 100644 testing/tests/api_tests/winlogbeat/conftest.py create mode 100644 testing/tests/api_tests/winlogbeat/schemas/winlogbeat_search.json create mode 100644 testing/tests/api_tests/winlogbeat/test_data/mapping_datafields.txt create mode 100644 
testing/tests/api_tests/winlogbeat/test_data/mapping_response.json create mode 100644 testing/tests/api_tests/winlogbeat/test_data/mapping_response_actual.json create mode 100644 testing/tests/api_tests/winlogbeat/test_data/winlog_search_data.json create mode 100644 testing/tests/api_tests/winlogbeat/test_server.py create mode 100644 testing/tests/docker-compose.yml create mode 100644 testing/tests/requirements.txt create mode 100644 testing/tests/selenium_tests.py create mode 100644 testing/tests/selenium_tests/Old/dashboards.py create mode 100644 testing/tests/selenium_tests/Old/dashboards_cluster.py create mode 100644 testing/tests/selenium_tests/cluster/__init__.py create mode 100644 testing/tests/selenium_tests/cluster/conftest.py create mode 100644 testing/tests/selenium_tests/cluster/lib.py create mode 100644 testing/tests/selenium_tests/cluster/test_computer_software_overview_dashboard.py create mode 100644 testing/tests/selenium_tests/cluster/test_health_check_dashboard.py create mode 100644 testing/tests/selenium_tests/cluster/test_process_explorer_dashboard.py create mode 100644 testing/tests/selenium_tests/cluster/test_security_dashboard_security_log.py create mode 100644 testing/tests/selenium_tests/cluster/test_sysmon_summary_dashboard.py create mode 100644 testing/tests/selenium_tests/cluster/test_user_h_r_dashboard.py create mode 100644 testing/tests/selenium_tests/cluster/test_user_security_dashboard.py create mode 100644 testing/tests/selenium_tests/linux_only/conftest.py create mode 100755 testing/tests/selenium_tests/linux_only/move_tests.sh create mode 100644 testing/tests/selenium_tests/linux_only/test_basic_loading.py create mode 100644 testing/tests/selenium_tests/linux_only/test_computer_software_overview_dashboard_lo.py create mode 100644 testing/tests/selenium_tests/linux_only/test_health_check_dashboard_lo.py create mode 100644 testing/tests/selenium_tests/linux_only/test_security_dashboard_security_log_lo.py create mode 100644 testing/tests/selenium_tests/linux_only/test_sysmon_summary_dashboard_lo.py create mode 100644 testing/tests/selenium_tests/linux_only/test_user_h_r_dashboard_lo.py create mode 100644 testing/tests/selenium_tests/linux_only/test_user_security_dashboard_lo.py create mode 100644 testing/v2/development/Dockerfile create mode 100644 testing/v2/development/docker-compose.yml create mode 100644 testing/v2/installers/README.md create mode 100644 testing/v2/installers/azure/build_azure_linux_network.md create mode 100755 testing/v2/installers/azure/build_azure_linux_network.py create mode 100644 testing/v2/installers/azure/build_azure_linux_network_requirements.txt create mode 100755 testing/v2/installers/install_v2/install.sh create mode 100755 testing/v2/installers/install_v2/install_in_minimega.sh create mode 100755 testing/v2/installers/lib/copy_ssh_key.sh create mode 100644 testing/v2/installers/minimega/README.md create mode 100755 testing/v2/installers/minimega/check_dpkg_lock.sh create mode 100755 testing/v2/installers/minimega/copy_ssh_key.sh create mode 100755 testing/v2/installers/minimega/create_bridge.sh create mode 100755 testing/v2/installers/minimega/fix_dnsmasq.sh create mode 100755 testing/v2/installers/minimega/install.sh create mode 100755 testing/v2/installers/minimega/install_local.sh create mode 100644 testing/v2/installers/minimega/minimega.service create mode 100644 testing/v2/installers/minimega/miniweb.service create mode 100755 testing/v2/installers/minimega/set_gopath.sh create mode 100755 
testing/v2/installers/minimega/update_packages.sh create mode 100644 testing/v2/installers/ubuntu_qcow_maker/README.md create mode 100755 testing/v2/installers/ubuntu_qcow_maker/clear_cloud_config.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/create_tap.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/create_ubuntu_qcow.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/create_vm_from_qcow.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/get_ip_of_machine.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/install.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/iptables.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/launch_multiple_vms.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/remove_test_files.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/resize_fs.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/resize_qcow.sh create mode 100755 testing/v2/installers/ubuntu_qcow_maker/setup_dnsmasq.sh create mode 100644 testing/v2/installers/ubuntu_qcow_maker/ubuntu-runner.mm create mode 100755 testing/v2/installers/ubuntu_qcow_maker/wait_for_login.sh diff --git a/.devcontainer/python_development/devcontainer.json b/.devcontainer/python_development/devcontainer.json new file mode 100644 index 00000000..8e6dda12 --- /dev/null +++ b/.devcontainer/python_development/devcontainer.json @@ -0,0 +1,19 @@ +{ + "name": "Python Development", + "dockerComposeFile": [ + "../../testing/development/docker-compose.yml" + ], + "service": "ubuntu", + "shutdownAction": "none", + "workspaceFolder": "/lme", + "customizations": { + "vscode": { + "extensions": [ + "ms-python.python", + "littlefoxteam.vscode-python-test-adapter", + "ms-python.black-formatter" + ] + } + }, + "remoteUser": "admin.ackbar" +} \ No newline at end of file diff --git a/.devcontainer/python_tests/devcontainer.json b/.devcontainer/python_tests/devcontainer.json new file mode 100644 index 00000000..187df1c5 --- /dev/null +++ b/.devcontainer/python_tests/devcontainer.json @@ -0,0 +1,18 @@ +{ + "name": "Python Tests", + "dockerComposeFile": [ + "../../testing/tests/docker-compose.yml" + ], + "service": "ubuntu", + "shutdownAction": "none", + "workspaceFolder": "/app", + "customizations": { + "vscode": { + "extensions": [ + "ms-python.python", + "littlefoxteam.vscode-python-test-adapter", + "ms-python.black-formatter" + ] + } + } +} \ No newline at end of file diff --git a/.github/ISSUE_TEMPLATE/bug-or-error-report.md b/.github/ISSUE_TEMPLATE/bug-or-error-report.md index 210ff5ee..bda324c4 100644 --- a/.github/ISSUE_TEMPLATE/bug-or-error-report.md +++ b/.github/ISSUE_TEMPLATE/bug-or-error-report.md @@ -15,26 +15,25 @@ assignees: '' If the above did not answer your question, proceed with creating an issue below: ## Describe the bug -A clear and concise description of what the bug is. + ## To Reproduce -Steps to reproduce the behavior. These should be clear enough that our team can understand your running environment, software/operating system versions, and anything else we might need to debug the issue. - -An example of a usable reproducible list are shown in these issues: [Issue 1](https://github.com/cisagov/LME/issues/15) [Issue 2](https://github.com/cisagov/LME/issues/19). - -To increase the speed and relevance of the reply we suggest you list down debugging steps you have tried, as well as the following information: + + ### Please complete the following information -**Desktop:** +#### **Desktop:** - OS: [e.g. 
Windows 10] - Browser: [e.g. Firefox Version 104.0.1] - Software version: [e.g. Sysmon v15.0, Winlogbeat 8.11.1] - -**Server:** + +#### **Server:** - OS: [e.g. Ubuntu 22.04] - Software Versions: - ELK: [e.g. 8.7.1] - Docker: [e.g. 20.10.23, build 7155243] + +**OPTIONAL**: - The output of these commands: ``` free -h @@ -52,7 +51,7 @@ Increase the number of lines if your issue is not present, or include a relevant ## Expected behavior A clear and concise description of what you expected to happen. -## Screenshots +## Screenshots **OPTIONAL** If applicable, add screenshots to help explain your problem. ## Additional context diff --git a/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md b/.github/PULL_REQUEST_TEMPLATE.md similarity index 81% rename from .github/PULL_REQUEST_TEMPLATE/pull_request_template.md rename to .github/PULL_REQUEST_TEMPLATE.md index 12d7fd5a..d2b83f2d 100644 --- a/.github/PULL_REQUEST_TEMPLATE/pull_request_template.md +++ b/.github/PULL_REQUEST_TEMPLATE.md @@ -9,6 +9,9 @@ + + + ### 📷 Screenshots (DELETE IF UNAPPLICABLE) @@ -22,6 +25,7 @@ - [ ] Changes are limited to a single goal **AND** the title reflects this in a clear human readable format +- [ ] Issue that this PR solves has been selected in the Development section - [ ] I have read and agree to LME's [CONTRIBUTING.md](https://github.com/cisagov/LME/CONTRIBUTING.md) document. - [ ] The PR adheres to LME's requirements in [RELEASES.md](https://github.com/cisagov/LME/RELEASES.md#steps-to-submit-a-PR) - [ ] These code changes follow [cisagov code standards](https://github.com/cisagov/development-guide). @@ -31,9 +35,9 @@ - [ ] All tests pass - [ ] PR has been tested and the documentation for testing is above +- [ ] Squash and merge all commits into one PR level commit ## ✅ Post-merge Checklist -- [ ] Squash all commits into one PR level commit - [ ] Delete the branch to keep down number of branches diff --git a/.github/README-github.md b/.github/README-github.md new file mode 100644 index 00000000..3f313815 --- /dev/null +++ b/.github/README-github.md @@ -0,0 +1 @@ +See the readme in `testing/development` for more information about these workflows and how to develop for them.
\ No newline at end of file diff --git a/.github/changelog-configuration.json b/.github/changelog-configuration.json new file mode 100644 index 00000000..4cd4a598 --- /dev/null +++ b/.github/changelog-configuration.json @@ -0,0 +1,22 @@ +{ + "categories": [ + { + "title": "## What's Added", + "labels": ["feat"] + }, + { + "title": "## What's Fixed", + "labels": ["fix"] + }, + { + "title": "## What's Updated", + "labels": ["update"] + }, + { + "title": "## Uncategorized", + "labels": [] + } + ], + "template": "#{{CHANGELOG}}", + "pr_template": "* #{{TITLE}} by @#{{AUTHOR}} in ##{{NUMBER}}" +} diff --git a/.github/workflows/build_release.yaml b/.github/workflows/build_release.yaml new file mode 100644 index 00000000..22cb10cf --- /dev/null +++ b/.github/workflows/build_release.yaml @@ -0,0 +1,49 @@ +on: + workflow_dispatch: + inputs: + version: + description: "Release version (e.g., 1.1.0)" + required: true + type: string + +name: Build Release + +jobs: + build-release: + runs-on: ubuntu-latest + steps: + - name: Checkout + uses: actions/checkout@v4 + + - name: Get current date + id: date + run: | + echo "date=$(date +'%Y-%m-%d')" >> $GITHUB_ENV + + - name: Build Assets + run: git ls-files | zip LME-${{ inputs.version }}.zip -@ + + - name: Build Changelog + id: release + uses: mikepenz/release-changelog-builder-action@v4.1.1 + with: + toTag: "release-${{ inputs.version }}" + configuration: ".github/changelog-configuration.json" + failOnError: true + env: + GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }} + + - name: Create Draft Release + uses: softprops/action-gh-release@v0.1.15 + with: + name: LME v${{ inputs.version }} + tag_name: v${{ inputs.version }} + body: | + ## [${{ inputs.version }}] - Timberrrrr! - ${{ env.date }} + ${{ steps.release.outputs.changelog }} + files: LME-${{ inputs.version }}.zip + draft: true + prerelease: false + discussion_category_name: "Announcements" + generate_release_notes: false + fail_on_unmatched_files: true diff --git a/.github/workflows/burndown_chart.yml b/.github/workflows/burndown_chart.yml new file mode 100644 index 00000000..8eee3839 --- /dev/null +++ b/.github/workflows/burndown_chart.yml @@ -0,0 +1,100 @@ +name: Burndown Chart + +on: + workflow_dispatch: + inputs: + start_date: + description: 'Sprint start date (YYYY-MM-DD)' + required: true + default: '2024-05-09' + type: string + end_date: + description: 'Sprint end date (YYYY-MM-DD)' + required: true + default: '2024-05-25' + type: string + view: + description: 'View number' + required: true + default: '1' + type: string + pull_request: + branches: + - '*' + +jobs: + create_chart: + runs-on: ubuntu-latest + env: + UNIQUE_ID: + start_date: + end_date: + view: + + steps: + - name: Checkout repository + uses: actions/checkout@v4.1.1 + + - name: Setup environment variables + run: | + echo "UNIQUE_ID=$(openssl rand -hex 3 | head -c 6)" >> $GITHUB_ENV + + - name: Set default dates + if: github.event_name == 'pull_request' + run: | + echo "start_date=2024-05-09" >> $GITHUB_ENV + echo "end_date=2024-05-25" >> $GITHUB_ENV + echo "view=1" >> $GITHUB_ENV + + - name: Use dispatch inputs + if: github.event_name == 'workflow_dispatch' + run: | + echo "start_date=${{ github.event.inputs.start_date }}" >> $GITHUB_ENV + echo "end_date=${{ github.event.inputs.end_date }}" >> $GITHUB_ENV + echo "view=${{ github.event.inputs.view }}" >> $GITHUB_ENV + + - name: Run Docker Build + run: docker compose -p ${{ env.UNIQUE_ID }} -f testing/project_management/docker-compose.yml build burndown --no-cache + + - name: Run
Docker Compose + env: + BURNDOWN_TOKEN: ${{ secrets.BURNDOWN_TOKEN }} + run: docker compose -p ${{ env.UNIQUE_ID }} -f testing/project_management/docker-compose.yml up -d + + - name: List docker containers to wait for them to start + run: | + docker ps + + - name: Set up the burndown chart config + env: + BURNDOWN_TOKEN: ${{ secrets.BURNDOWN_TOKEN }} + UNIQUE_ID: ${{ env.UNIQUE_ID }} + START_DATE: ${{ env.start_date }} + END_DATE: ${{ env.end_date }} + VIEW: ${{ env.view }} + run: | + cd testing/project_management + docker compose -p ${{ env.UNIQUE_ID }} exec -T burndown bash -c ' + /lme/testing/project_management/setup_config.sh -s ${{ env.START_DATE }} -e ${{ env.END_DATE }} -v ${{ env.VIEW }} -f /github-projects-burndown-chart/src/github_projects_burndown_chart/config/config.json + sed -i "s/\"github_token\": \"\"/\"github_token\": \"$BURNDOWN_TOKEN\"/g" /github-projects-burndown-chart/src/github_projects_burndown_chart/config/secrets.json + cat /github-projects-burndown-chart/src/github_projects_burndown_chart/config/config.json + ' + + - name: Run the burndown chart script + run: | + cd testing/project_management + docker compose -p ${{ env.UNIQUE_ID }} exec -T burndown bash -c ' + python3 /github-projects-burndown-chart/src/github_projects_burndown_chart/main.py organization LME --filepath /lme/burndown.png + ' + - name: Upload chart artifact + uses: actions/upload-artifact@v4 + with: + name: burndown + path: burndown.png + + - name: Cleanup Docker Compose + if: always() + run: | + cd testing/project_management + docker compose -p ${{ env.UNIQUE_ID }} down + # docker system prune -a --force \ No newline at end of file diff --git a/.github/workflows/cluster.yml b/.github/workflows/cluster.yml new file mode 100644 index 00000000..c958f680 --- /dev/null +++ b/.github/workflows/cluster.yml @@ -0,0 +1,278 @@ +name: Cluster Run + +on: + workflow_dispatch: + # pull_request: + # branches: + # - '*' + +jobs: + build-and-test-cluster: + runs-on: self-hosted + env: + UNIQUE_ID: + IP_ADDRESS: + LS1_IP: + BRANCH_NAME: + elastic: + + steps: + - name: Checkout repository + uses: actions/checkout@v4.1.1 + + - name: Setup environment variables + run: | + PUBLIC_IP=$(curl -s https://api.ipify.org) + echo "IP_ADDRESS=$PUBLIC_IP" >> $GITHUB_ENV + echo "UNIQUE_ID=$(openssl rand -hex 3 | head -c 6)" >> $GITHUB_ENV + + - name: Get branch name + shell: bash + run: | + if [ "${{ github.event_name }}" == "pull_request" ]; then + echo "BRANCH_NAME=${{ github.head_ref }}" >> $GITHUB_ENV + else + echo "BRANCH_NAME=${GITHUB_REF##*/}" >> $GITHUB_ENV + fi + + - name: Set up Docker Compose + run: | + sudo curl -L "https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-$(uname -s)-$(uname -m)" \ + -o /usr/local/bin/docker-compose + sudo chmod +x /usr/local/bin/docker-compose + + - name: Set the environment for docker-compose + run: | + cd testing/development + # Get the UID and GID of the current user + echo "HOST_UID=$(id -u)" > .env + echo "HOST_GID=$(id -g)" >> .env + + # - name: Run Docker Compose Build to fix a user id issue in a prebuilt container + # run: | + # cd testing/development + # docker compose -p ${{ env.UNIQUE_ID }} build --no-cache + + - name: Run Docker Compose + run: docker compose -p ${{ env.UNIQUE_ID }} -f testing/development/docker-compose.yml up -d + + - name: List docker containers to wait for them to start + run: | + docker ps + + - name: List files in home directory + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c "pwd && ls 
-la" + + - name: Check powershell environment + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME; \ + ls -la; \ + exit \$LASTEXITCODE; + }" + EXIT_CODE=$? + echo "Exit code: $EXIT_CODE" + set -e + if [ "$EXIT_CODE" -ne 0 ]; then + exit $EXIT_CODE + fi + + - name: Build the cluster + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME/testing; \ + \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + \$env:IP_ADDRESS='${{ env.IP_ADDRESS }}'; \ + ./development/build_cluster.ps1 -IPAddress \$env:IP_ADDRESS; \ + exit \$LASTEXITCODE; + }" + EXIT_CODE=$? + echo "Exit code: $EXIT_CODE" + set -e + if [ "$EXIT_CODE" -ne 0 ]; then + exit $EXIT_CODE + fi + cd .. + . configure/lib/functions.sh + extract_ls1_ip 'LME-pipe-${{ env.UNIQUE_ID }}.cluster.output.log' + echo "LS1_IP=$LS1_IP" >> $GITHUB_ENV + + - name: Install lme on cluster + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME/testing; \ + \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + ./development/install_lme.ps1 -b '${{ env.BRANCH_NAME }}'; \ + exit \$LASTEXITCODE; + }" + EXIT_CODE=$? + echo "Exit code: $EXIT_CODE" + set -e + if [ "$EXIT_CODE" -ne 0 ]; then + exit $EXIT_CODE + fi + + - name: Set the environment passwords for other steps + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c " + cd /home/admin.ackbar/LME/testing \ + && . configure/lib/functions.sh \ + && extract_credentials 'LME-pipe-${{ env.UNIQUE_ID }}.password.txt' \ + && write_credentials_to_file '${{ env.UNIQUE_ID }}.github_env.sh' \ + " + . 
../${{ env.UNIQUE_ID }}.github_env.sh + rm ../${{ env.UNIQUE_ID }}.github_env.sh + echo "elastic=$elastic" >> $GITHUB_ENV + echo "kibana=$kibana" >> $GITHUB_ENV + echo "logstash_system=$logstash_system" >> $GITHUB_ENV + echo "logstash_writer=$logstash_writer" >> $GITHUB_ENV + echo "dashboard_update=$dashboard_update" >> $GITHUB_ENV + + - name: Check that the environment variables are set + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c " + if [ -z \"${{ env.elastic }}\" ]; then + echo 'Error: env.elastic variable is not set' >&2 + exit 1 + else + echo 'Elastic password is set' + fi + " + + # - name: Run a command on the domain controller + # run: | + # set +e + # cd testing/development + # docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + # cd /home/admin.ackbar/LME/testing; \ + # \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + # \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + # \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + # \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + # \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + # \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + # az login --service-principal -u \$env:AZURE_CLIENT_ID -p \$env:AZURE_SECRET --tenant \$env:AZURE_TENANT; \ + # az vm run-command invoke \ + # --command-id RunPowerShellScript \ + # --name DC1 \ + # --resource-group \$env:RESOURCE_GROUP \ + # --scripts 'ls C:\'; \ + # exit \$LASTEXITCODE; + # }" + # EXIT_CODE=$? + # echo "Exit code: $EXIT_CODE" + # set -e + # if [ "$EXIT_CODE" -ne 0 ]; then + # exit $EXIT_CODE + # fi + + - name: Run a command on the linux machine + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME/testing; \ + \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + az login --service-principal -u \$env:AZURE_CLIENT_ID -p \$env:AZURE_SECRET --tenant \$env:AZURE_TENANT; \ + az vm run-command invoke \ + --command-id RunShellScript \ + --name LS1 \ + --resource-group \$env:RESOURCE_GROUP \ + --scripts 'ls -lan'; \ + exit \$LASTEXITCODE; + }" + EXIT_CODE=$? + echo "Exit code: $EXIT_CODE" + set -e + if [ "$EXIT_CODE" -ne 0 ]; then + exit $EXIT_CODE + fi + + # This only passes when you do a full install + - name: Run api tests in container + run: | + set +e + cd testing/development + docker-compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c " cd testing/tests \ + && echo export elastic=${{ env.elastic }} > .env \ + && echo export ES_HOST=${{ env.LS1_IP }} >> .env \ + && python3 -m venv /home/admin.ackbar/venv_test \ + && . 
/home/admin.ackbar/venv_test/bin/activate \ + && pip install -r requirements.txt \ + && sudo chmod ugo+w /home/admin.ackbar/LME/ -R \ + && pytest -v api_tests/" + + - name: Run selenium tests in container + run: | + set +e + cd testing/development + docker-compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c " cd testing/tests \ + && echo export elastic=${{ env.elastic }} > .env \ + && echo export ES_HOST=${{ env.LS1_IP }} >> .env \ + && echo export KIBANA_HOST=${{ env.LS1_IP }} >> .env \ + && echo export KIBANA_PORT=443 >> .env \ + && echo export KIBANA_USER=elastic >> .env \ + && echo export SELENIUM_TIMEOUT=60 >> .env \ + && echo export SELENIUM_MODE=headless >> .env \ + && cat .env \ + && python3 -m venv /home/admin.ackbar/venv_test \ + && . /home/admin.ackbar/venv_test/bin/activate \ + && pip install -r requirements.txt \ + && sudo chmod ugo+w /home/admin.ackbar/LME/ -R \ + && pytest -v selenium_tests/" + + # - name: Run selenium tests in container + # run: | + # set +e + # cd testing/development + # docker compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c " cd testing/tests \ + # && echo export ELASTIC_PASSWORD=${{ env.elastic }} > .env \ + # && . .env \ + # && python3 -m venv /home/admin.ackbar/venv_test \ + # && . /home/admin.ackbar/venv_test/bin/activate \ + # && pip install -r requirements.txt \ + # && sudo chmod ugo+w /home/admin.ackbar/LME/ -R \ + # && python selenium_tests.py --domain ${{ env.LS1_IP }} -v" + + - name: Cleanup environment + if: always() + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME/testing; \ + \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + ./development/destroy_cluster.ps1; \ + exit \$LASTEXITCODE; + }" + docker compose -p ${{ env.UNIQUE_ID }} down + docker system prune --force diff --git a/.github/workflows/linux_only.yml b/.github/workflows/linux_only.yml new file mode 100644 index 00000000..c5dd7332 --- /dev/null +++ b/.github/workflows/linux_only.yml @@ -0,0 +1,123 @@ +name: Linux Only + +on: + workflow_dispatch: + pull_request: + branches: + - '*' + +jobs: + build-and-test-linux-only: + # runs-on: ubuntu-latest + runs-on: self-hosted + + env: + UNIQUE_ID: + BRANCH_NAME: + + steps: + - name: Checkout repository + uses: actions/checkout@v4.1.1 + + - name: Setup environment variables + run: | + echo "UNIQUE_ID=$(openssl rand -hex 3 | head -c 6)" >> $GITHUB_ENV + + - name: Setup environment variables + run: | + echo "AZURE_CLIENT_ID=${{ secrets.AZURE_CLIENT_ID }}" >> $GITHUB_ENV + echo "AZURE_SECRET=${{ secrets.AZURE_SECRET }}" >> $GITHUB_ENV + echo "AZURE_CLIENT_SECRET=${{ secrets.AZURE_SECRET }}" >> $GITHUB_ENV + echo "AZURE_TENANT=${{ secrets.AZURE_TENANT }}" >> $GITHUB_ENV + echo "AZURE_SUBSCRIPTION_ID=${{ secrets.AZURE_SUBSCRIPTION_ID }}" >> $GITHUB_ENV + + - name: Set Branch Name + shell: bash + env: + EVENT_NAME: ${{ github.event_name }} + HEAD_REF: ${{ github.head_ref }} + GITHUB_REF: ${{ github.ref }} + run: | + if [ "$EVENT_NAME" == "pull_request" ]; then + echo "BRANCH_NAME=$HEAD_REF" >> $GITHUB_ENV + else + BRANCH_REF="${GITHUB_REF##*/}" + echo "BRANCH_NAME=$BRANCH_REF" >> $GITHUB_ENV + fi + + - name: Set up Docker Compose + run: | + sudo curl -L
"https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-$(uname -s)-$(uname -m)" \ + -o /usr/local/bin/docker-compose + sudo chmod +x /usr/local/bin/docker-compose + + - name: Set the environment for docker-compose + run: | + cd testing/development + # Get the UID and GID of the current user + echo "HOST_UID=$(id -u)" > .env + echo "HOST_GID=$(id -g)" >> .env + + - name: Run Docker Build + run: docker compose -p ${{ env.UNIQUE_ID }} -f testing/development/docker-compose.yml build lme --no-cache + + - name: Run Docker Compose + run: docker compose -p ${{ env.UNIQUE_ID }} -f testing/development/docker-compose.yml up lme -d + + - name: List docker containers to wait for them to start + run: | + docker ps + + # We are not using the ubuntu container so no use waiting for it to start + # - name: Execute commands inside ubuntu container + # run: | + # cd testing/development + # docker compose -p ${{ env.UNIQUE_ID }} exec -T ubuntu bash -c "echo 'Ubuntu container built'" + + - name: Install LME in container + run: | + set -x + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c "./testing/development/build_docker_lme_install.sh -b ${{ env.BRANCH_NAME }} \ + && sudo chmod go+r /opt/lme/Chapter\ 3\ Files/output.log" + + - name: Run api tests in container + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c ". testing/configure/lib/functions.sh \ + && sudo cp /opt/lme/Chapter\ 3\ Files/output.log . \ + && extract_credentials output.log \ + && sudo rm output.log \ + && sudo docker ps \ + && . /home/admin.ackbar/venv_test/bin/activate \ + && sudo chmod ugo+w /home/admin.ackbar/LME/ \ + && pytest testing/tests/api_tests/linux_only/ " + + - name: Run selenium tests in container + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c " + . testing/configure/lib/functions.sh \ + && echo export ELASTIC_PASSWORD=${{ env.elastic }} > testing/tests/.env \ + && echo export KIBANA_HOST=localhost >> testing/tests/.env \ + && echo export KIBANA_PORT=443 >> testing/tests/.env \ + && echo export KIBANA_USER=elastic >> testing/tests/.env \ + && echo export SELENIUM_TIMEOUT=60 >> testing/tests/.env \ + && echo export SELENIUM_MODE=headless >> testing/tests/.env \ + && . testing/tests/.env \ + && sudo cp /opt/lme/Chapter\\ 3\\ Files/output.log . \ + && extract_credentials output.log \ + && sudo rm output.log \ + && sudo docker ps \ + && . /home/admin.ackbar/venv_test/bin/activate \ + && sudo chmod ugo+w /home/admin.ackbar/LME/ \ + && pytest testing/tests/selenium_tests/linux_only/ \ + " + + - name: Cleanup Docker Compose + if: always() + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T -u root lme bash -c "rm -rf /home/admin.ackbar/LME/.pytest_cache" + docker compose -p ${{ env.UNIQUE_ID }} down + docker system prune -a --force \ No newline at end of file diff --git a/.github/workflows/main.yml b/.github/workflows/main.yml index baea7ae2..f408faa2 100644 --- a/.github/workflows/main.yml +++ b/.github/workflows/main.yml @@ -5,8 +5,8 @@ on: - main tags: - 'v[0-9]+.[0-9]+.[0-9]+*' # match basic semver tags - pull_request: - branches: + pull_request: + branches: - main - 'release-*' @@ -62,25 +62,4 @@ jobs: run: | semgrep --config "p/r2c" . 
- release: - runs-on: ubuntu-latest - if: startsWith(github.ref, 'refs/tags/v') - needs: [lint, semgrep-scan] - steps: - - name: Checkout - uses: actions/checkout@3df4ab11eba7bda6032a0b82a6bb43b11571feac # v4.0.0 - - - name: Set up tag name - id: tag - run: echo "::set-output name=tag::${GITHUB_REF##*/}" - - - name: Build - run: git ls-files | zip release-${{ steps.tag.outputs.tag }}.zip -@ - - - name: Release - uses: softprops/action-gh-release@de2c0eb89ae2a093876385947365aca7b0e5f844 # v0.1.15 - with: - files: release-${{ steps.tag.outputs.tag }}.zip - draft: true - generate_release_notes: true - fail_on_unmatched_files: true + diff --git a/.github/workflows/upgrade.yml b/.github/workflows/upgrade.yml new file mode 100644 index 00000000..28592706 --- /dev/null +++ b/.github/workflows/upgrade.yml @@ -0,0 +1,300 @@ +name: Build an upgrade + +on: + workflow_dispatch: + # pull_request: + # branches: + # - '*' + +jobs: + + build-and-test-upgrade: + runs-on: self-hosted + env: + UNIQUE_ID: + IP_ADDRESS: + LS1_IP: + LATEST_BRANCH: + BRANCH_NAME: + elastic: + steps: + - name: Checkout repository + uses: actions/checkout@v4.1.1 + + - name: Setup environment variables + run: | + PUBLIC_IP=$(curl -s https://api.ipify.org) + echo "IP_ADDRESS=$PUBLIC_IP" >> $GITHUB_ENV + echo "UNIQUE_ID=$(openssl rand -hex 3 | head -c 6)" >> $GITHUB_ENV + LATEST_BRANCH_VAR=$(curl -s https://api.github.com/repos/cisagov/LME/tags | jq -r '.[].name | sub("^v"; "") | "release-" + .' | head -n 1) + echo "LATEST_BRANCH=$LATEST_BRANCH_VAR" + echo "LATEST_BRANCH=$LATEST_BRANCH_VAR" >> $GITHUB_ENV + + - name: Get branch name + shell: bash + run: | + if [ "${{ github.event_name }}" == "pull_request" ]; then + echo "BRANCH_NAME=${{ github.head_ref }}" >> $GITHUB_ENV + else + echo "BRANCH_NAME=${GITHUB_REF##*/}" >> $GITHUB_ENV + fi + + + - name: Set up Docker Compose + run: | + sudo curl -L "https://github.com/docker/compose/releases/download/v2.3.3/docker-compose-$(uname -s)-$(uname -m)" \ + -o /usr/local/bin/docker-compose + sudo chmod +x /usr/local/bin/docker-compose + + - name: Set the environment for docker-compose + run: | + cd testing/development + # Get the UID and GID of the current user + echo "HOST_UID=$(id -u)" > .env + echo "HOST_GID=$(id -g)" >> .env + + # - name: Run Docker Compose Build to fix a user id issue in a prebuilt container + # run: | + # cd testing/development + # docker-compose -p ${{ env.UNIQUE_ID }} build --no-cache + + - name: Run Docker Compose + run: docker compose -p ${{ env.UNIQUE_ID }} -f testing/development/docker-compose.yml up -d + + - name: List docker containers to wait for them to start + run: | + docker ps + + - name: List files in home directory + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c "pwd && ls -la" + + - name: Check powershell environment + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME; \ + ls -la; \ + exit \$LASTEXITCODE; + }" + EXIT_CODE=$? 
+ echo "Exit code: $EXIT_CODE" + set -e + if [ "$EXIT_CODE" -ne 0 ]; then + exit $EXIT_CODE + fi + + - name: Build the cluster + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME/testing; \ + \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + \$env:IP_ADDRESS='${{ env.IP_ADDRESS }}'; \ + ./development/build_cluster.ps1 -IPAddress \$env:IP_ADDRESS; \ + exit \$LASTEXITCODE; + }" + EXIT_CODE=$? + echo "Exit code: $EXIT_CODE" + set -e + if [ "$EXIT_CODE" -ne 0 ]; then + exit $EXIT_CODE + fi + cd .. + . configure/lib/functions.sh + extract_ls1_ip 'LME-pipe-${{ env.UNIQUE_ID }}.cluster.output.log' + echo "LS1_IP=$LS1_IP" >> $GITHUB_ENV + + - name: Install lme on cluster + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME/testing; \ + \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + ./development/install_lme.ps1 -b '${{ env.LATEST_BRANCH }}'; \ + exit \$LASTEXITCODE; + }" + EXIT_CODE=$? + echo "Exit code: $EXIT_CODE" + set -e + if [ "$EXIT_CODE" -ne 0 ]; then + exit $EXIT_CODE + fi + + - name: Set the environment passwords for other steps + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c " + cd /home/admin.ackbar/LME/testing \ + && . configure/lib/functions.sh \ + && extract_credentials 'LME-pipe-${{ env.UNIQUE_ID }}.password.txt' \ + && write_credentials_to_file '${{ env.UNIQUE_ID }}.github_env.sh' \ + " + . ../${{ env.UNIQUE_ID }}.github_env.sh + rm ../${{ env.UNIQUE_ID }}.github_env.sh + echo "elastic=$elastic" >> $GITHUB_ENV + echo "kibana=$kibana" >> $GITHUB_ENV + echo "logstash_system=$logstash_system" >> $GITHUB_ENV + echo "logstash_writer=$logstash_writer" >> $GITHUB_ENV + echo "dashboard_update=$dashboard_update" >> $GITHUB_ENV + + - name: Check that the environment variables are set + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme bash -c " + if [ -z \"${{ env.elastic }}\" ]; then + echo 'Error: env.elastic variable is not set' >&2 + exit 1 + else + echo 'Elastic password is set' + fi + " + + - name: Upgrade to the version being built + # This will check out the code in the /root directory so that it can use the latest version of the code. + # But it will also check out the branch in the /opt/lme directory so that upgrade_lme.sh script can use the branch. 
+ run: | + set +e + cd testing/development + output=$(docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "\ + cd /home/admin.ackbar/LME/testing; \ + \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + az login --service-principal -u \$env:AZURE_CLIENT_ID -p \$env:AZURE_SECRET --tenant \$env:AZURE_TENANT; \ + az vm run-command invoke \ + --command-id RunShellScript \ + --name LS1 \ + --resource-group \$env:RESOURCE_GROUP \ + --scripts 'export HOME=/root; pwd && whoami && cd ~ \ + && git clone https://github.com/cisagov/LME.git \ + && cd LME \ + && echo "Checking out current branch: ${{ env.BRANCH_NAME }}" \ + && git checkout ${{ env.BRANCH_NAME }} \ + && cd testing \ + && ./development/upgrade_lme.sh; exit \$?'") + echo "Output: $output" + if echo "$output" | grep -q "UPGRADE_SUCCESSFUL"; then + echo "Upgrade successful" + exit 0 + else + echo "Upgrade failed" + exit 1 + fi + + # - name: Run a command on the domain controller + # run: | + # set +e + # cd testing/development + # docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + # cd /home/admin.ackbar/LME/testing; \ + # \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + # \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + # \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + # \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + # \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + # \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + # az login --service-principal -u \$env:AZURE_CLIENT_ID -p \$env:AZURE_SECRET --tenant \$env:AZURE_TENANT; \ + # az vm run-command invoke \ + # --command-id RunPowerShellScript \ + # --name DC1 \ + # --resource-group \$env:RESOURCE_GROUP \ + # --scripts 'ls C:\'; \ + # exit \$LASTEXITCODE; + # }" + # EXIT_CODE=$? + # echo "Exit code: $EXIT_CODE" + # set -e + # if [ "$EXIT_CODE" -ne 0 ]; then + # exit $EXIT_CODE + # fi + + - name: Run a command on the linux machine + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME/testing; \ + \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + az login --service-principal -u \$env:AZURE_CLIENT_ID -p \$env:AZURE_SECRET --tenant \$env:AZURE_TENANT; \ + az vm run-command invoke \ + --command-id RunShellScript \ + --name LS1 \ + --resource-group \$env:RESOURCE_GROUP \ + --scripts 'ls -lan'; \ + exit \$LASTEXITCODE; + }" + EXIT_CODE=$? + echo "Exit code: $EXIT_CODE" + set -e + if [ "$EXIT_CODE" -ne 0 ]; then + exit $EXIT_CODE + fi + + # This only passes when you do a full install + - name: Run api tests in container + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c " cd testing/tests \ + && echo export elastic=${{ env.elastic }} > .env \ + && echo export ES_HOST=${{ env.LS1_IP }} >> .env \ + && cat .env \ + && python3 -m venv /home/admin.ackbar/venv_test \ + && . 
/home/admin.ackbar/venv_test/bin/activate \ + && pip install -r requirements.txt \ + && sudo chmod ugo+w /home/admin.ackbar/LME/ -R \ + && pytest -v api_tests/" + + - name: Run selenium tests in container + run: | + set +e + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T -u admin.ackbar lme bash -c " cd testing/tests \ + && echo export ELASTIC_PASSWORD=${{ env.elastic }} > .env \ + && . .env \ + && python3 -m venv /home/admin.ackbar/venv_test \ + && . /home/admin.ackbar/venv_test/bin/activate \ + && pip install -r requirements.txt \ + && sudo chmod ugo+w /home/admin.ackbar/LME/ -R \ + && python selenium_tests.py --domain ${{ env.LS1_IP }} -v" + + - name: Cleanup environment + if: always() + run: | + cd testing/development + docker compose -p ${{ env.UNIQUE_ID }} exec -T lme pwsh -Command "& { + cd /home/admin.ackbar/LME/testing; \ + \$env:AZURE_CLIENT_ID='${{ secrets.AZURE_CLIENT_ID }}'; \ + \$env:AZURE_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_CLIENT_SECRET='${{ secrets.AZURE_SECRET }}'; \ + \$env:AZURE_TENANT='${{ secrets.AZURE_TENANT }}'; \ + \$env:UNIQUE_ID='${{ env.UNIQUE_ID }}'; \ + \$env:RESOURCE_GROUP='LME-pipe-${{ env.UNIQUE_ID }}'; \ + ./development/destroy_cluster.ps1; \ + exit \$LASTEXITCODE; + }" + docker compose -p ${{ env.UNIQUE_ID }} down + docker system prune --force \ No newline at end of file diff --git a/.gitignore b/.gitignore index 5b650322..0f3bfc43 100644 --- a/.gitignore +++ b/.gitignore @@ -2,6 +2,7 @@ .DS_Store /.idea/ /.vscode/ +**/.env /Chapter 4 Files/*.dumped.ndjson /Chapter 4 Files/exported/ @@ -11,7 +12,21 @@ Chapter 3 Files/docker-compose-stack-live.yml Chapter 3 Files/logstash.edited.conf Chapter 3 Files/logstash_custom.conf LME/ -dashboard_update.sh files_for_windows.zip lme.conf -lme_update.sh +**/venv/ +/testing/tests/.env +**/.pytest_cache/ +**/__pycache__/ +/testing/*.password.txt +/testing/configure/azure_scripts/config.ps1 +/testing/configure.zip +/testing/*.output.log +/testing/tests/report.html +testing/tests/assets/style.css +.history/ +**/get-docker.sh +*.vim +**.password.txt +**.ip.txt +**.swp \ No newline at end of file diff --git a/Chapter 1 Files/Group Policy Objects/manifest.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/manifest.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/manifest.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/manifest.xml diff --git a/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/Backup.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/Backup.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/Backup.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/Backup.xml diff --git a/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml diff --git a/Chapter 1 Files/Group Policy 
Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/comment.cmtx b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/comment.cmtx similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/comment.cmtx rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/comment.cmtx diff --git a/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf diff --git a/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/registry.pol b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/registry.pol similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/registry.pol rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/DomainSysvol/GPO/Machine/registry.pol diff --git a/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/bkupInfo.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/bkupInfo.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/bkupInfo.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/bkupInfo.xml diff --git a/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/gpreport.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/gpreport.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/gpreport.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{36FE9489-FE2B-42DF-835C-DEA226B1AC72}/gpreport.xml diff --git a/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/Backup.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/Backup.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/Backup.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/Backup.xml diff --git a/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Groups/Groups.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Groups/Groups.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Groups/Groups.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy 
Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Groups/Groups.xml diff --git a/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/Preferences/Services/Services.xml diff --git a/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/comment.cmtx b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/comment.cmtx similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/comment.cmtx rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/comment.cmtx diff --git a/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/microsoft/windows nt/SecEdit/GptTmpl.inf diff --git a/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/registry.pol b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/registry.pol similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/registry.pol rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/DomainSysvol/GPO/Machine/registry.pol diff --git a/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/bkupInfo.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/bkupInfo.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/bkupInfo.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/bkupInfo.xml diff --git a/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/gpreport.xml b/OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/gpreport.xml similarity index 100% rename from Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/gpreport.xml rename to OLD_CHAPTERS/Chapter 1 Files/Group Policy Objects/{9C409013-05EC-4640-B27A-617EDE2FA837}/gpreport.xml diff --git a/Chapter 1 Files/lme_gpo_for_windows.zip b/OLD_CHAPTERS/Chapter 1 Files/lme_gpo_for_windows.zip similarity index 100% rename from Chapter 1 Files/lme_gpo_for_windows.zip rename to 
OLD_CHAPTERS/Chapter 1 Files/lme_gpo_for_windows.zip diff --git a/Chapter 1 Files/lme_wec_config.xml b/OLD_CHAPTERS/Chapter 1 Files/lme_wec_config.xml similarity index 100% rename from Chapter 1 Files/lme_wec_config.xml rename to OLD_CHAPTERS/Chapter 1 Files/lme_wec_config.xml diff --git a/Chapter 2 Files/GPO Deployment/Group Policy Objects/manifest.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/manifest.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/Group Policy Objects/manifest.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/manifest.xml diff --git a/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml diff --git a/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt similarity index 100% rename from Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt diff --git a/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml diff --git a/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml diff --git a/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml diff --git a/Chapter 2 Files/GPO Deployment/sysmon_gpo.zip b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo.zip similarity index 100% rename from 
Chapter 2 Files/GPO Deployment/sysmon_gpo.zip rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo.zip diff --git a/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/manifest.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/manifest.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/manifest.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/manifest.xml diff --git a/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/Backup.xml diff --git a/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt similarity index 100% rename from Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/GPO.cmt diff --git a/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/DomainSysvol/GPO/Machine/Preferences/ScheduledTasks/ScheduledTasks.xml diff --git a/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/bkupInfo.xml diff --git a/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml similarity index 100% rename from Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/sysmon_gpo/Group Policy Objects/{500D54E6-6409-4D75-BBA1-D101CD01216F}/gpreport.xml diff --git a/Chapter 2 Files/GPO 
Deployment/update.bat b/OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/update.bat similarity index 100% rename from Chapter 2 Files/GPO Deployment/update.bat rename to OLD_CHAPTERS/Chapter 2 Files/GPO Deployment/update.bat diff --git a/Chapter 2 Files/SCCM Deployment/Install_Sysmon64.ps1 b/OLD_CHAPTERS/Chapter 2 Files/SCCM Deployment/Install_Sysmon64.ps1 similarity index 100% rename from Chapter 2 Files/SCCM Deployment/Install_Sysmon64.ps1 rename to OLD_CHAPTERS/Chapter 2 Files/SCCM Deployment/Install_Sysmon64.ps1 diff --git a/Chapter 2 Files/SCCM Deployment/Uninstall_Sysmon64.ps1 b/OLD_CHAPTERS/Chapter 2 Files/SCCM Deployment/Uninstall_Sysmon64.ps1 similarity index 100% rename from Chapter 2 Files/SCCM Deployment/Uninstall_Sysmon64.ps1 rename to OLD_CHAPTERS/Chapter 2 Files/SCCM Deployment/Uninstall_Sysmon64.ps1 diff --git a/Chapter 3 Files/.gitignore b/OLD_CHAPTERS/Chapter 3 Files/.gitignore similarity index 100% rename from Chapter 3 Files/.gitignore rename to OLD_CHAPTERS/Chapter 3 Files/.gitignore diff --git a/Chapter 3 Files/dashboard_update.sh b/OLD_CHAPTERS/Chapter 3 Files/dashboard_update.sh similarity index 93% rename from Chapter 3 Files/dashboard_update.sh rename to OLD_CHAPTERS/Chapter 3 Files/dashboard_update.sh index 95462440..25b4322a 100644 --- a/Chapter 3 Files/dashboard_update.sh +++ b/OLD_CHAPTERS/Chapter 3 Files/dashboard_update.sh @@ -9,7 +9,7 @@ if [ -r /opt/lme/lme.conf ]; then #reference this file as a source . /opt/lme/lme.conf #check if the version number is equal to the one we want - if [ "$version" == "1.3.0" ]; then + if [ "$version" == "1.3.0" ] || [ "$FRESH_INSTALL" = "true" ]; then echo -e "\e[32m[X]\e[0m Updating from git repo" git -C /opt/lme/ pull #make sure the hostname variable is present diff --git a/Chapter 3 Files/deploy.sh b/OLD_CHAPTERS/Chapter 3 Files/deploy.sh similarity index 96% rename from Chapter 3 Files/deploy.sh rename to OLD_CHAPTERS/Chapter 3 Files/deploy.sh index a1d0ef9f..1cd1980c 100755 --- a/Chapter 3 Files/deploy.sh +++ b/OLD_CHAPTERS/Chapter 3 Files/deploy.sh @@ -152,7 +152,7 @@ function setpasswords() { temp="temp" echo -e "\e[32m[X]\e[0m Waiting for Elasticsearch to be ready" - max_attempts=25 + max_attempts=30 attempt=0 while [[ "$(curl -s -o /dev/null -w ''%{http_code}'' --cacert certs/root-ca.crt --user elastic:${temp} https://127.0.0.1:9200)" != "200" ]]; do printf '.' 
@@ -442,6 +442,9 @@ function installdocker() { echo -e "\e[32m[X]\e[0m Installing Docker" curl -fsSL https://get.docker.com -o get-docker.sh >/dev/null sh get-docker.sh >/dev/null + echo "Starting docker" + service docker start + sleep 5 } function initdockerswarm() { @@ -454,8 +457,8 @@ function initdockerswarm() { } function pulllme() { - info " Pulling ELK images" - docker compose -f /opt/lme/Chapter\ 3\ Files/docker-compose-stack-live.yml pull + echo "Pulling ELK images" + docker compose -f /opt/lme/Chapter\ 3\ Files/docker-compose-stack-live.yml pull --quiet } function deploylme() { @@ -534,35 +537,39 @@ function pipelineupdate() { } function data_retention() { - #show ext4 disk - DF_OUTPUT="$(df -h -l -t ext4 --output=source,size /var/lib/docker)" + # Show ext4 disk + DF_OUTPUT="$(df -BG -l --output=source,size /var/lib/docker)" - #pull dev name - DISK_DEV="$(echo "$DF_OUTPUT" | grep -Po '[0-9]+G')" + # Pull device name + DISK_DEV="$(echo "$DF_OUTPUT" | awk 'NR==2 {print $1}')" - #pull dev size - DISK_SIZE_ROUND="${DISK_DEV/G/}" + # Pull device size + DISK_SIZE="$(echo "$DF_OUTPUT" | awk 'NR==2 {print $2}' | sed 's/G//')" - #lets do math to get 75% (%80 is low watermark for ES but as curator uses this we want to delete data *before* the disk gets full) - DISK_80=$((DISK_SIZE_ROUND * 80 / 100)) + # Check if DISK_SIZE is empty or not a number + if ! [[ "$DISK_SIZE" =~ ^[0-9]+$ ]]; then + echo -e "\e[31m[!]\e[0m DISK_SIZE not an integer or is empty - exiting." + exit 1 + fi - echo -e "\e[32m[X]\e[0m We think your main disk is $DISK_DEV" + echo -e "\e[32m[X]\e[0m We think your main disk is $DISK_DEV and its size is $DISK_SIZE gigabytes" - if [ "$DISK_80" -lt 30 ]; then - echo -e "\e[31m[!]\e[0m LME Requires 128GB of space usable for log retention - exiting" - exit 1 - elif [ "$DISK_80" -ge 90 ] && [ "$DISK_80" -le 179 ]; then + if [ "$DISK_SIZE" -lt 128 ]; then + echo -e "\e[33m[!]\e[0m Warning: Disk size less than 128GB, recommend a larger disk for production environments. Install continuing..." + sleep 3 RETENTION="30" - elif [ "$DISK_80" -ge 180 ] && [ "$DISK_80" -le 359 ]; then + elif [ "$DISK_SIZE" -ge 128 ] && [ "$DISK_SIZE" -le 179 ]; then + RETENTION="45" + elif [ "$DISK_SIZE" -ge 180 ] && [ "$DISK_SIZE" -le 359 ]; then RETENTION="90" - elif [ "$DISK_80" -ge 360 ] && [ "$DISK_80" -le 539 ]; then + elif [ "$DISK_SIZE" -ge 360 ] && [ "$DISK_SIZE" -le 539 ]; then RETENTION="180" - elif [ "$DISK_80" -ge 540 ] && [ "$DISK_80" -le 719 ]; then + elif [ "$DISK_SIZE" -ge 540 ] && [ "$DISK_SIZE" -le 719 ]; then RETENTION="270" - elif [ "$DISK_80" -ge 720 ]; then + elif [ "$DISK_SIZE" -ge 720 ]; then RETENTION="365" else - echo -e "\e[31m[!]\e[0m Unable to determine retention policy - exiting" + echo -e "\e[31m[!]\e[0m Unable to determine disk size - exiting." exit 1 fi @@ -736,6 +743,7 @@ function fixreadability() { function install() { + export FRESH_INSTALL="true" echo -e "Will execute the following intrusive actions:\n\t- apt update & upgrade\n\t- install docker (please uninstall before proceeding, or indicate skipping the install)\n\t- initialize docker swarm (execute \`sudo docker swarm leave --force\` before proceeding if you are part of a swarm\n\t- automatic os updates via unattened-upgrades\n\t- checkout lme directory to latest version, and throw away local changes)" prompt "Proceed?" 
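For reference, the rewritten `data_retention` above now maps the raw size of the `/var/lib/docker` disk in gigabytes directly onto a retention window, rather than first computing 80% of the disk as the old code did. A minimal sketch of that tier mapping, with `disk_size_to_retention` as a hypothetical helper name and the size assumed to be already parsed out of `df -BG --output=source,size` as in the hunk:

```bash
# Sketch of the revised retention tiers; assumes the argument is the size of the
# /var/lib/docker filesystem in whole gigabytes (as parsed from `df -BG` above).
disk_size_to_retention() {
  local size="$1"
  # Same integer guard as the hunk: refuse anything empty or non-numeric.
  if ! [[ "$size" =~ ^[0-9]+$ ]]; then
    echo "DISK_SIZE not an integer or is empty" >&2
    return 1
  fi
  if [ "$size" -lt 128 ]; then
    echo 30    # below the recommended minimum; the installer warns and continues
  elif [ "$size" -le 179 ]; then
    echo 45
  elif [ "$size" -le 359 ]; then
    echo 90
  elif [ "$size" -le 539 ]; then
    echo 180
  elif [ "$size" -le 719 ]; then
    echo 270
  else
    echo 365
  fi
}

# Example: a 250 GB disk lands in the 180-359 GB tier, i.e. 90 days of retention.
disk_size_to_retention 250   # prints 90
```

Keying the tiers off the raw disk size keeps the thresholds aligned with the documented 128 GB minimum instead of an opaque watermark figure.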
@@ -748,10 +756,11 @@ fi echo -e "\e[32m[X]\e[0m Updating OS software" - apt update && apt upgrade -y + apt-get update + DEBIAN_FRONTEND=noninteractive NEEDRESTART_MODE=a apt-get upgrade -yq echo -e "\e[32m[X]\e[0m Installing prerequisites" - apt install ${REQUIRED_PACKS[*]} -y -q + DEBIAN_FRONTEND=noninteractive NEEDRESTART_MODE=a apt-get install ${REQUIRED_PACKS[*]} -yq if [ -f /var/run/reboot-required ]; then echo -e "\e[31m[!]\e[0m A reboot is required in order to proceed with the install." @@ -880,7 +889,8 @@ function install() { displaycredentials echo -e "If you prefer to set your own elastic user password, then refer to our troubleshooting documentation:" - echo -e "https://github.com/cisagov/LME/blob/main/docs/markdown/reference/troubleshooting.md#changing-elastic-username-password\n\n" + echo -e "https://github.com/cisagov/LME/blob/main/docs/markdown/reference/troubleshooting.md#changing-elastic-username-password\n\n" + return 0 } function displaycredentials() { @@ -1070,6 +1080,8 @@ function upgrade() { elif [ "$version" == $latest ]; then info "You're on the latest version!" + elif [[ "$version" > "1.3.0" ]]; then + info "No upgrade steps are needed for this version. The latest version is $latest." else error "Updating directly to LME 1.0 from versions prior to 0.5.1 is not supported. Update to 0.5.1 first." fi @@ -1169,7 +1181,7 @@ then ready "Will install the following packages: ${missing_pkgs[*]}. These are required for LME." sudo apt-get update #confirm install - sudo apt-get --yes install ${missing_pkgs[*]} + sudo DEBIAN_FRONTEND=noninteractive NEEDRESTART_MODE=a apt-get -yq install ${missing_pkgs[*]} fi #Change current working directory so relative filepaths work @@ -1183,6 +1195,7 @@ if [ "$1" == "" ]; then usage elif [ "$1" == "install" ]; then install + exit $?
# Exit with the status of the install function elif [ "$1" == "uninstall" ]; then uninstall elif [ "$1" == "upgrade" ]; then diff --git a/Chapter 3 Files/docker-compose-stack.yml b/OLD_CHAPTERS/Chapter 3 Files/docker-compose-stack.yml similarity index 100% rename from Chapter 3 Files/docker-compose-stack.yml rename to OLD_CHAPTERS/Chapter 3 Files/docker-compose-stack.yml diff --git a/Chapter 3 Files/lme_update.sh b/OLD_CHAPTERS/Chapter 3 Files/lme_update.sh similarity index 100% rename from Chapter 3 Files/lme_update.sh rename to OLD_CHAPTERS/Chapter 3 Files/lme_update.sh diff --git a/Chapter 3 Files/logstash.conf b/OLD_CHAPTERS/Chapter 3 Files/logstash.conf similarity index 100% rename from Chapter 3 Files/logstash.conf rename to OLD_CHAPTERS/Chapter 3 Files/logstash.conf diff --git a/Chapter 3 Files/winlog-index-mapping.json b/OLD_CHAPTERS/Chapter 3 Files/winlog-index-mapping.json similarity index 100% rename from Chapter 3 Files/winlog-index-mapping.json rename to OLD_CHAPTERS/Chapter 3 Files/winlog-index-mapping.json diff --git a/Chapter 3 Files/winlogbeat.yml b/OLD_CHAPTERS/Chapter 3 Files/winlogbeat.yml similarity index 100% rename from Chapter 3 Files/winlogbeat.yml rename to OLD_CHAPTERS/Chapter 3 Files/winlogbeat.yml diff --git a/Chapter 4 Files/dashboards/Healthcheckoverview_dashboard.ndjson b/OLD_CHAPTERS/Chapter 4 Files/dashboards/Healthcheckoverview_dashboard.ndjson similarity index 100% rename from Chapter 4 Files/dashboards/Healthcheckoverview_dashboard.ndjson rename to OLD_CHAPTERS/Chapter 4 Files/dashboards/Healthcheckoverview_dashboard.ndjson diff --git a/Chapter 4 Files/dashboards/Readme.md b/OLD_CHAPTERS/Chapter 4 Files/dashboards/Readme.md similarity index 100% rename from Chapter 4 Files/dashboards/Readme.md rename to OLD_CHAPTERS/Chapter 4 Files/dashboards/Readme.md diff --git a/Chapter 4 Files/dashboards/alerting_dashboard.ndjson b/OLD_CHAPTERS/Chapter 4 Files/dashboards/alerting_dashboard.ndjson similarity index 100% rename from Chapter 4 Files/dashboards/alerting_dashboard.ndjson rename to OLD_CHAPTERS/Chapter 4 Files/dashboards/alerting_dashboard.ndjson diff --git a/Chapter 4 Files/dashboards/computer_software_overview.ndjson b/OLD_CHAPTERS/Chapter 4 Files/dashboards/computer_software_overview.ndjson similarity index 100% rename from Chapter 4 Files/dashboards/computer_software_overview.ndjson rename to OLD_CHAPTERS/Chapter 4 Files/dashboards/computer_software_overview.ndjson diff --git a/Chapter 4 Files/dashboards/process_explorer.ndjson b/OLD_CHAPTERS/Chapter 4 Files/dashboards/process_explorer.ndjson similarity index 100% rename from Chapter 4 Files/dashboards/process_explorer.ndjson rename to OLD_CHAPTERS/Chapter 4 Files/dashboards/process_explorer.ndjson diff --git a/Chapter 4 Files/dashboards/security_dashboard_security_log.ndjson b/OLD_CHAPTERS/Chapter 4 Files/dashboards/security_dashboard_security_log.ndjson similarity index 100% rename from Chapter 4 Files/dashboards/security_dashboard_security_log.ndjson rename to OLD_CHAPTERS/Chapter 4 Files/dashboards/security_dashboard_security_log.ndjson diff --git a/Chapter 4 Files/dashboards/sysmon_summary.ndjson b/OLD_CHAPTERS/Chapter 4 Files/dashboards/sysmon_summary.ndjson similarity index 100% rename from Chapter 4 Files/dashboards/sysmon_summary.ndjson rename to OLD_CHAPTERS/Chapter 4 Files/dashboards/sysmon_summary.ndjson diff --git a/Chapter 4 Files/dashboards/user_hr.ndjson b/OLD_CHAPTERS/Chapter 4 Files/dashboards/user_hr.ndjson similarity index 100% rename from Chapter 4 Files/dashboards/user_hr.ndjson 
rename to OLD_CHAPTERS/Chapter 4 Files/dashboards/user_hr.ndjson
diff --git a/Chapter 4 Files/dashboards/user_security.ndjson b/OLD_CHAPTERS/Chapter 4 Files/dashboards/user_security.ndjson
similarity index 100%
rename from Chapter 4 Files/dashboards/user_security.ndjson
rename to OLD_CHAPTERS/Chapter 4 Files/dashboards/user_security.ndjson
diff --git a/Chapter 4 Files/export_dashboards.py b/OLD_CHAPTERS/Chapter 4 Files/export_dashboards.py
similarity index 100%
rename from Chapter 4 Files/export_dashboards.py
rename to OLD_CHAPTERS/Chapter 4 Files/export_dashboards.py
diff --git a/Chapter 4 Files/requirements.txt b/OLD_CHAPTERS/Chapter 4 Files/requirements.txt
similarity index 100%
rename from Chapter 4 Files/requirements.txt
rename to OLD_CHAPTERS/Chapter 4 Files/requirements.txt
diff --git a/OLD_CHAPTERS/README.md b/OLD_CHAPTERS/README.md
new file mode 100644
index 00000000..cdcc4d95
--- /dev/null
+++ b/OLD_CHAPTERS/README.md
@@ -0,0 +1,76 @@
+![N|Solid](/docs/imgs/cisa.png)
+
+[![Downloads](https://img.shields.io/github/downloads/cisagov/lme/total.svg)]()
+
+# Logging Made Easy
+Initially created by NCSC and now maintained by CISA, Logging Made Easy is a self-install tutorial for small organizations to gain a basic level of centralized security logging for Windows clients and provide functionality to detect attacks. It's the coming together of multiple open software platforms which come at no cost to users, where LME helps the reader integrate them to produce an end-to-end logging capability. We also provide some pre-made configuration files and scripts, although there is the option to do it on your own.
+
+Logging Made Easy can:
+- Show where administrative commands are being run on enrolled devices
+- See who is using which machine
+- Query, in conjunction with threat reports, for the presence of an attacker in the form of Tactics, Techniques and Procedures (TTPs)
+
+## Disclaimer
+
+**LME is currently still early in development.**
+
+***If you have an existing install of the LME Alpha (v0.5 or older), some manual intervention will be required in order to upgrade to the latest version; please see [Upgrading](/docs/markdown/maintenance/upgrading.md) for further information.***
+
+**This is not a professional tool, and should not be used as a [SIEM](https://en.wikipedia.org/wiki/Security_information_and_event_management).**
+
+**LME is a 'homebrew' way of gathering logs and querying for attacks.**
+
+We have done the hard work to make things simple. We will tell you what to download, which configurations to use, and have created convenient scripts to auto-configure wherever possible.
+
+The current architecture is based upon Windows Clients, Microsoft Sysmon, Windows Event Forwarding and the ELK stack.
+
+We are **not** able to comment on or troubleshoot individual installations. If you believe you have found an issue with the LME code or documentation, please submit a [GitHub issue](https://github.com/cisagov/lme/issues). If you have a question about your installation, please visit [GitHub Discussions](https://github.com/cisagov/lme/discussions) to see if your issue has been addressed before.
+
+## Who is Logging Made Easy for?
+
+From single IT administrators with a handful of devices in their network to larger organizations.
+
+LME is for you if:
+
+* You don’t have a [SOC](https://en.wikipedia.org/wiki/Information_security_operations_center), SIEM or any monitoring in place at the moment.
+* You lack the budget, time or understanding to set up your own logging system.
+* You recognize the need to begin gathering logs and monitoring your IT.
+* You understand that LME has limitations and is better than nothing - but no match for a professional tool.
+
+If any, or all, of these criteria fit, then LME is a step in the right direction for you.
+
+LME could also be useful for:
+
+* Small isolated networks where corporate monitoring doesn’t reach.
+
+## Overview
+The LME architecture consists of 3 groups of computers, as summarized in the following diagram:
+![High level overview](/docs/imgs/OverviewDiagram.png)
+
+

+Figure 1: The 3 primary groups of computers in the LME architecture, their descriptions and the operating systems / software run by each. +

+
+## Table of contents
+
+### Installation:
+ - [Prerequisites - Start deployment here](/docs/markdown/prerequisites.md)
+ - [Chapter 1 - Set up Windows Event Forwarding](/docs/markdown/chapter1/chapter1.md)
+ - [Chapter 2 – Sysmon Install](/docs/markdown/chapter2.md)
+ - [Chapter 3 – Database Install](/docs/markdown/chapter3/chapter3.md)
+ - [Chapter 4 - Post Install Actions](/docs/markdown/chapter4.md)
+
+### Logging Guidance
+ - [Log Retention](/docs/markdown/logging-guidance/retention.md)
+ - [Additional Log Types](/docs/markdown/logging-guidance/other-logging.md)
+
+### Reference:
+ - [FAQ](/docs/markdown/reference/faq.md)
+ - [Troubleshooting](/docs/markdown/reference/troubleshooting.md)
+ - [Dashboard Descriptions](/docs/markdown/reference/dashboard-descriptions.md)
+ - [Guide to Organizational Units](/docs/markdown/chapter1/guide_to_ous.md)
+
+### Maintenance:
+ - [Backups](/docs/markdown/maintenance/backups.md)
+ - [Upgrading](/docs/markdown/maintenance/upgrading.md)
+ - [Certificates](/docs/markdown/maintenance/certificates.md)
diff --git a/README.md b/README.md
index 9c89258b..24ed0cfc 100644
--- a/README.md
+++ b/README.md
@@ -1,75 +1,369 @@
+
 ![N|Solid](/docs/imgs/cisa.png)

 [![Downloads](https://img.shields.io/github/downloads/cisagov/lme/total.svg)]()

-# Logging Made Easy
-Initially created by NCSC and now maintained by CISA, Logging Made Easy is a self-install tutorial for small organizations to gain a basic level of centralized security logging for Windows clients and provide functionality to detect attacks. It's the coming together of multiple free and open software platforms, where LME helps the reader integrate them together to produce an end-to-end logging capability. We also provide some pre-made configuration files and scripts, although there is the option to do it on your own.
+# Logging Made Easy: Podmanized
+
+This will eventually be merged with the README at [LME-README](https://github.com/cisagov/LME).
+
+## TLDR:
+LME now executes its server stack via systemd through quadlets.
+All of the original compose functionality has been implemented and is working.
+
+## Architecture:
+An Ubuntu 22.04 server running Podman containers, set up as Podman quadlets controlled via systemd.
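For readers new to quadlets: Podman ships a systemd generator that turns each `*.container` file it finds (for this setup, under `~/.config/containers/systemd/`) into an ordinary systemd service at daemon-reload time. A minimal sketch using a hypothetical `demo.container` (not one of LME's shipped units):

```bash
# Sketch: how a quadlet file becomes a systemd service (demo.container is hypothetical).
mkdir -p ~/.config/containers/systemd
cat > ~/.config/containers/systemd/demo.container <<'EOF'
[Container]
ContainerName=demo
Image=docker.io/library/alpine:latest
Exec=sleep infinity

[Install]
WantedBy=default.target
EOF

systemctl --user daemon-reload      # the podman systemd generator emits demo.service
systemctl --user start demo.service
```

The LME `.container` files later in this patch follow exactly this pattern, with `lme.service` acting as the umbrella unit.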
+ +### Required Ports: +Ports required are as follows: + - Elasticsearch: *9200* + - Caddy: *443* + - Wazuh: *1514,1515,55000,514* + - Agent: *8220* + + +### Diagram: +A real diagram is coming, for now this poor man's flow chart is all that is available: (Created with [asciiflow](https://asciiflow.com/#/)) + +``` +# +---------------------------------------------------------------------+ +# # | | +# # | LME SERVER | +# # | | +# # | Podman Containers | +# # | | +# # | +-----------+ +-----------+ | +# # ------+------------------->| | | | | +# # +-----------------------------------+ ^ | | Wazuh +-------------+ | Kibana | | +# # | | | | +---------+ | Manager | | | | | +# # | CLIENT MACHINE | | | | | | | | +----+---^--+ | +# # | | | | | Caddy | +-----------+ | | | | +# # | | | | | | +----v-----+ | | | +# # | WINDOWS | | | | | | | | | | +# # | | | | +-----+--^+ +----------+ | Elastic <----+ | | +# # | +-----------------+ | | | | | | | | search | | | +# # | | | | | | | | | Fleet | | +--------+ | +# # | | Elastic Agent +--------+------------+-----+--------+--+---------> | +------^---+ | +# # | +-----------------+ | | | | | | Server | | | +# # | | | | +-v--+-------+ | +---------------+ | +# # | +-----------------+ | | | | LME | +----------+ | +# # | | | | | | | | | +# # | | Wazuh Agent +--------+------------+ | | FrontEnd | | +# # | | | | | | | | +# # | +-----------------+ | | +------------+ | +# # | | | | +# # +``` + +### why podman?: +Podman is more secure (by default) against container escape attacks than Docker. It also is far more debug and programmer friendly for making containers secure. + +### Containers: + - caddy: acts as a reverse proxy for the container architecture: + - routes traffic to the backend services + - hosts lme-front end + - helps access all services behind one pane of glass + - setup: runs `/config/setup/init-setup.sh` based on the configuration of dns defined in `/config/setup/instances.yml`. The script will create a CA, underlying certs for each service, and intialize the admin accounts for elasticsearch(user:`elastic`) and kibana(user:`kibana_system`). + - elasticsearch: runs the database for LME and indexes all logs + - kibana: the front end for querying logs, investigating via dashboards, and managing fleet agents... + - fleet-server: executes a [elastic agent ](https://github.com/elastic/elastic-agent) in fleet-server mode. It coordinates elastic agents to gather logs and status from clients. Configuration is inspired by the [elastic-container](https://github.com/peasead/elastic-container) project. + - Elastic agents provide integrations, have more features than winlogbeat. + - wazuh-manager: runs the wazuh manager so we can deploy and manage wazuh agents. + - Wazuh (open source) gives EDR (Endpoint Detection Response) with security dashboards to cover the security of all of the machines. + - lme-frontend: will host an api and gui that unifies the architecture behind one interface + +### Agents: +Wazuh agents will enable EDR capabilities, while Elastic agents will enable logging capabilities. + + - https://github.com/wazuh/wazuh-agent + - https://github.com/elastic/elastic-agent + +## Installation: + +### **Ubuntu 22.04**: +Important: Change appropriate variables in `$CLONE_DIRECTORY/example.env` Each variable is documented inside `example.env`. You'll want to change the default passwords! + +After changing those variables, you can run the automated install, or do a manual install. 
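A quick way to review and set those values before installing (a sketch; the variable names are the ones documented in `example.env`, and the sed pattern is just one way to do it):

```bash
# list every default credential that should be changed
grep -n 'PASSWORD' $CLONE_DIRECTORY/example.env

# then set each one, e.g.:
sed -i 's/^ELASTIC_PASSWORD=.*/ELASTIC_PASSWORD=<your-strong-password>/' $CLONE_DIRECTORY/example.env
```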
+
+#### **Automated Install**
+You can run this installer to perform the entire install with Ansible.
+```bash
+sudo apt update && sudo apt install -y ansible
+# cd ~/LME-PRIV/lme-2-arch # Or path to your clone of this repo
+ansible-playbook install_lme_local.yml
+```
+This assumes that you have the repo in `~/LME-PRIV/`.
+
+If you don't, you can pass the `CLONE_DIRECTORY` variable to the playbook.
+
+```
+ansible-playbook install_lme_local.yml -e "clone_dir=/path/to/clone/directory"
+```
+
+This also assumes your user can sudo without a password. If you need to input a password when you sudo, you can run it with the `-K` flag and it will prompt you for a password.
+
+**NOTE** [this script](/scripts/set_sysctl_limits.sh) is executed via Ansible AND will change unprivileged ports to start at 80, to allow Caddy to listen on 443 from a user-run container. If this is not desired, we will be publishing steps to set up firewall rules using ufw/iptables to manage the firewall on this host at a later time.
+
+#### **-- End Automated Install**
+
+#### **Manual Install** (optional, if not running the Ansible install):
+```
+export CLONE_DIRECTORY=~/LME-PRIV/lme-2-arch
+#systemd will set up nix:
+#Old way to set up nix if desired: sh <(curl -L https://nixos.org/nix/install) --daemon
+sudo apt install jq uidmap nix-bin nix-setup-systemd
+
+sudo nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs
+sudo nix-channel --update
+
+# Add user to nix group in /etc/group
+sudo usermod -aG nix-users $USER
+
+#install podman and podman-compose
+sudo nix-env -iA nixpkgs.podman
+
+# Set the path for root and lme-user
+#echo 'export PATH=$PATH:$HOME/.nix-profile/bin' >> ~/.bashrc
+echo 'export PATH=$PATH:/nix/var/nix/profiles/default/bin' >> ~/.bashrc
+sudo sh -c 'echo "export PATH=$PATH:/nix/var/nix/profiles/default/bin" >> /root/.bashrc'
+
+#to allow 443/80 bind and set up memory/limits
+sudo NON_ROOT_USER=$USER $CLONE_DIRECTORY/set_sysctl_limits.sh
+
+#TODO are these needed? we'll have to see, don't set them for now
+#export XDG_CONFIG_HOME="$HOME/.config"
+#export XDG_RUNTIME_DIR=/run/user/$(id -u)
+
+#set up user-generator on systemd:
+sudo $CLONE_DIRECTORY/link_latest_podman_quadlet.sh
+
+#set up loginctl
+sudo loginctl enable-linger $USER
+```
+
+### Configuration
+
+Configuration lives in `/config/`:
+ - in `setup`, find the configuration for certificate generation and password setting. `instances.yml` defines the certificates that will get created. The shell scripts initialize accounts and create certificates, and run from their respective quadlet definitions, `lme-setup-accts` and `lme-setup-certs` respectively.
+ - in `caddy` is the Caddyfile for the reverse proxy. Find more notes on its syntax and configuration here: [CADDY DOCS](https://caddyserver.com/docs/caddyfile)
+
+Quadlet configuration for containers is in `/quadlet/`.
+
+1. Set up `/opt/lme`; that's the running directory for LME:
+```bash
+sudo mkdir -p /opt/lme
+sudo chown -R $USER:$USER /opt/lme
+cp -r $CLONE_DIRECTORY/config/ /opt/lme/
+cp -r $CLONE_DIRECTORY/quadlet/ /opt/lme/
+
+#set up quadlets
+mkdir -p ~/.config/containers/
+ln -s /opt/lme/quadlet ~/.config/containers/systemd
+
+#set up service file
+mkdir -p ~/.config/systemd/user
+ln -s /opt/lme/quadlet/lme.service ~/.config/systemd/user/
+```
+
+#### **--- End Manual Install**
+
+### After install:
+
+Confirm setup:
+```
+systemctl --user daemon-reload
+systemctl --user list-unit-files lme\*
+```
+
+1.
Copy the file `example.env` to the running environment file: +```bash +cp $CLONE_DIRECTORY/example.env /opt/lme/lme-environment.env +``` + +3. Change appropriate variables in `/opt/lme/lme-environment.env` Each variable is documented inside `example.env`. You'll want to change the default passwords! + +## Run: + +### pull and tag all containers: +This will let us maintain the lme container versions using the `LME_LATEST` tag. Whenever we update, we change the local image to point to the newest update, and run `podman auto-update` to update the containers. + +**NOTE TO FUTURE SELVES: NEEDS TO BE `LOCALHOST` TO AVOID REMOTE TAGGING ATTACK** + +```bash +sudo mkdir -p /etc/containers +sudo tee /etc/containers/policy.json < quadlet +1. start the containers with compose +2. podlet generate from the containers created -If any, or all, of these criteria fit, then LME is a step in the right direction for you. +### compose: +running: +```shell +podman-compose up -d +``` -LME could also be useful for: +stopping: +```shell +podman-compose down --remove-orphans -* Small isolated networks where corporate monitoring doesnโ€™t reach. +#only run if you want to remove all volumes: +podman-compose down -v --remove-orphans +``` -## Overview -The LME architecture consists of 3 groups of computers, as summarized in the following diagram: -![High level overview](/docs/imgs/OverviewDiagram.png) +### install/get podlet: +``` +#https://github.com/containers/podlet/releases +wget https://github.com/containers/podlet/releases/download/v0.3.0/podlet-x86_64-unknown-linux-gnu.tar.xz +#add it to path: +cp ./podlet-x86_64-unknown-linux-gnu/podlet .local/bin/ +``` -

-Figure 1: The 3 primary groups of computers in the LME architecture, their descriptions and the operating systems / software run by each. -
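One note on the podlet download above: the release asset is a `.tar.xz` archive, so it needs to be unpacked before the `cp`, which expects an extracted `podlet-x86_64-unknown-linux-gnu/` directory (a sketch, assuming GNU tar and the same filename):

```bash
tar -xf podlet-x86_64-unknown-linux-gnu.tar.xz   # creates ./podlet-x86_64-unknown-linux-gnu/
```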

+### generate the quadlet files: +[DOCS](https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html), [BLOG](https://mo8it.com/blog/quadlet/) -## Table of contents +``` +cd ~/LME-PRIV/quadlet -### Installation: - - [Prerequisites - Start deployment here](/docs/markdown/prerequisites.md) - - [Chapter 1 - Set up Windows Event Forwarding](/docs/markdown/chapter1/chapter1.md) - - [Chapter 2 โ€“ Sysmon Install](/docs/markdown/chapter2.md) - - [Chapter 3 โ€“ Database Install](/docs/markdown/chapter3/chapter3.md) - - [Chapter 4 - Post Install Actions ](/docs/markdown/chapter4.md) +for x in $(podman ps --filter label=io.podman.compose.project=lme-2-arch -a --format "{{.Names}}");do echo $x; podlet generate container $x > $x.container;done +``` -### Logging Guidance - - [Log Retention](/docs/markdown/logging-guidance/retention.md) - - [Additional Log Types](/docs/markdown/logging-guidance/other-logging.md) +### dealing with journalctl logs: +https://unix.stackexchange.com/questions/638432/clear-failed-states-or-all-old-logs-from-systemctl-status-service +``` +#delete all logs: +sudo rm /var/log/journal/$STRING_OF_HEX/user-1000* +``` -### Reference: - - [FAQ](/docs/markdown/reference/faq.md) - - [Troubleshooting](/docs/markdown/reference/troubleshooting.md) - - [Guide to Organizational Units](/docs/markdown/chapter1/guide_to_ous.md) +### debugging commands: +``` +systemctl --user stop lme.service +systemctl --user status lme* +systemctl --user restart lme.service +journalctl --user -u lme-fleet-server.service +systemctl --user status lme* +cp -r $CLONE_DIRECTORY/config/ /opt/lme && cp -r $CLONE_DIRECTORY/quadlet /opt/lme +systemctl --user daemon-reload && systemctl --user list-unit-files lme\* +systemctl --user reset-failed +podman volume rm -a -### Maintenance: - - [Backups](/docs/markdown/maintenance/backups.md) - - [Upgrading](/docs/markdown/maintenance/upgrading.md) - - [Certificates](/docs/markdown/maintenance/certificates.md) +###make sure all ports are free as well: +sudo ss -tulpn +``` diff --git a/config/caddy/Caddyfile b/config/caddy/Caddyfile new file mode 100644 index 00000000..dd3cacfa --- /dev/null +++ b/config/caddy/Caddyfile @@ -0,0 +1,22 @@ +{ + # Global options + admin off # Disable admin API for security + log { + output file /var/log/caddy/access.log + format json + } +} + +:80 { + redir https://{host}{uri} permanent +} + +:443 { + tls /etc/caddy/certs/caddy/caddy.crt /etc/caddy/certs/caddy/caddy.key + reverse_proxy https://lme-kibana:5601 { + transport http { + tls_trusted_ca_certs /etc/caddy/certs/ca/ca.crt + tls_insecure_skip_verify + } + } +} diff --git a/config/containers.txt b/config/containers.txt new file mode 100644 index 00000000..facf6b2e --- /dev/null +++ b/config/containers.txt @@ -0,0 +1,5 @@ +docker.io/caddy:2-alpine +docker.elastic.co/elasticsearch/elasticsearch:8.12.2 +docker.elastic.co/beats/elastic-agent:8.12.2 +docker.elastic.co/kibana/kibana:8.12.2 +docker.io/wazuh/wazuh-manager:4.7.5 diff --git a/config/example.env b/config/example.env new file mode 100644 index 00000000..4a022be9 --- /dev/null +++ b/config/example.env @@ -0,0 +1,95 @@ +# environment file for docker-compose + +#TODO: set this via a script: +#IP of your host machine +IPVAR=127.0.0.1 + +# ElasticSearch settings +######################## + +#TODO: this will be needed for scaling, not needed right now +# the names of the OS nodes +#ES_NODE1=es01 +# uncomment to create a cluster (more nodes can be added also) +# !!! do not forget to also adjust the docker-compose.yml file !!! 
+# ES_NODE2=es02 + +# Local Kibana URL +LOCAL_KBN_URL=https://127.0.0.1:5601 +# Local ES URL +LOCAL_ES_URL=https://127.0.0.1:9200 + +# Elastic settings +################# + +# Version of Elastic products +STACK_VERSION=8.12.2 +# Testing pre-releases? Use the SNAPSHOT option below: +# STACK_VERSION=8.11.0-SNAPSHOT +# +# Set the cluster name +CLUSTER_NAME=LME + +#User info: +#####TODO: make these podman secrets +ELASTIC_USERNAME=elastic +# Password for the 'elastic' user (at least 6 characters) +ELASTIC_PASSWORD=password1 +#Username used by kibana +ELASTICSEARCH_USERNAME=kibana_system +# Password for the 'kibana_system' user (at least 6 characters) +ELASTICSEARCH_PASSWORD=password1 + +#Fleet: +KIBANA_PASSWORD=password1 +KIBANA_FLEET_USERNAME=elastic +KIBANA_FLEET_PASSWORD=password1 + +#Wazuh: +WAZUH_PASSWORD=MyP@ssw0rd1# +INDEXER_USERNAME=elastic +INDEXER_PASSWORD=password1 +API_USERNAME=wazuh-wui +API_PASSWORD=MyP@ssw0rd1# + +# Set to "basic" or "trial" to automatically start the 30-day trial +LICENSE=basic + +#TODO: support these, right now they're static +# Port to expose Elasticsearch HTTP API to the host +ES_PORT=9200 +#ES_PORT=127.0.0.1:9200 +# Port to expose Kibana to the host +KIBANA_PORT=5601 +# Port to expose Fleet to the host +FLEET_PORT=8220 + +# Increase or decrease based on the available host memory (in bytes) +MEM_LIMIT=2073741824 + + +# Detection Settings: +################# +#TODO: integrate this into the ansible script +# Bulk Enable Detection Rules by OS - change to "1" if you want to enable + +LinuxDR=0 +WindowsDR=0 +MacOSDR=0 + +# Proxy Settings: +# LEAVE BLANK IF NO PROXY! +################# + +# Standard certificate location for ubuntu +#PROXY_CA_LOCATION=/etc/ssl/certs/ca-certificates.crt +# Proxy Server URL +#PROXY_URL= +# IPs and host names you want the proxy to ignore. Typically want all private IP's and Docker network hostnames / IP's ignored +# Example config: +# 127.0.0.1,localhost,10.,172.16.,172.17.,192.168.,*.local,.local,169.254/16,lme-elasticsearch,lme-kibana,lme-fleet-server,lme-wazuh-manager +#PROXY_IGNORE= +#set these as well: +#HTTP_PROXY= +#HTTPS_PROXY= +#NO_PROXY= diff --git a/config/kibana.yml b/config/kibana.yml new file mode 100644 index 00000000..ee77df11 --- /dev/null +++ b/config/kibana.yml @@ -0,0 +1,17 @@ +xpack.encryptedSavedObjects.encryptionKey: "thirty-two-or-more-random-characters" +server.host: "0.0.0.0" +telemetry.enabled: "true" +xpack.fleet.packages: + - name: fleet_server + version: latest + - name: system + version: latest +xpack.fleet.agentPolicies: + - name: Fleet-Server-Policy + id: fleet-server-policy + namespace: default + package_policies: + - name: fleet_server-1 + package: + name: fleet_server + diff --git a/config/setup/acct-init.sh b/config/setup/acct-init.sh new file mode 100644 index 00000000..03792cc7 --- /dev/null +++ b/config/setup/acct-init.sh @@ -0,0 +1,17 @@ +#!/bin/bash +set -euo pipefail + +CONFIG_DIR="/usr/share/elasticsearch/config" +CERTS_DIR="${CONFIG_DIR}/certs" +INSTANCES_PATH="${CONFIG_DIR}/setup/instances.yml" + +if [ ! 
-f "${CERTS_DIR}/ACCOUNTS_CREATED" ]; then + echo "Waiting for Elasticsearch availability"; + until curl -s --cacert config/certs/ca/ca.crt https://lme-elasticsearch:9200 | grep -q "missing authentication credentials"; do echo "WAITING"; sleep 30; done; + + echo "Setting kibana_system password"; + until curl -s -X POST --cacert config/certs/ca/ca.crt -u elastic:${ELASTIC_PASSWORD} -H "Content-Type: application/json" https://lme-elasticsearch:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 2; done; + + echo "All done!" | tee "${CERTS_DIR}/ACCOUNTS_CREATED" ; +fi +echo "Accounts kibana_system Created!" diff --git a/config/setup/init-setup.sh b/config/setup/init-setup.sh new file mode 100644 index 00000000..41a7b34a --- /dev/null +++ b/config/setup/init-setup.sh @@ -0,0 +1,29 @@ +#!/bin/bash +set -euo pipefail + +if [[ -z "${ELASTIC_PASSWORD:-}" || -z "${KIBANA_PASSWORD:-}" ]]; then + echo "ERROR: ELASTIC_PASSWORD and/or KIBANA_PASSWORD are missing." + exit 1 +fi + +CONFIG_DIR="/usr/share/elasticsearch/config" +CERTS_DIR="${CONFIG_DIR}/certs" +INSTANCES_PATH="${CONFIG_DIR}/setup/instances.yml" + +if [ ! -f "${CERTS_DIR}/ca.zip" ]; then + echo "Creating CA..." + elasticsearch-certutil ca --silent --pem --out "${CERTS_DIR}/ca.zip" + unzip -o "${CERTS_DIR}/ca.zip" -d "${CERTS_DIR}" +fi + +if [ ! -f "${CERTS_DIR}/certs.zip" ]; then + echo "Creating certificates..." + elasticsearch-certutil cert --silent --pem --in "${INSTANCES_PATH}" --out "${CERTS_DIR}/certs.zip" --ca-cert "${CERTS_DIR}/ca/ca.crt" --ca-key "${CERTS_DIR}/ca/ca.key" + unzip -o "${CERTS_DIR}/certs.zip" -d "${CERTS_DIR}" + cat "${CERTS_DIR}/elasticsearch/elasticsearch.crt" "${CERTS_DIR}/ca/ca.crt" > "${CERTS_DIR}/elasticsearch/elasticsearch.chain.pem" +fi + +echo "Setting file permissions..." +chown -R root:root "${CERTS_DIR}" +find "${CERTS_DIR}" -type d -exec chmod 750 {} \; +find "${CERTS_DIR}" -type f -exec chmod 640 {} \; diff --git a/config/setup/instances.yml b/config/setup/instances.yml new file mode 100644 index 00000000..fb45133c --- /dev/null +++ b/config/setup/instances.yml @@ -0,0 +1,51 @@ +# Add host IP address / domain names as needed. 
+ +instances: + - name: "elasticsearch" + dns: + - "lme-elasticsearch" + - "localhost" + ip: + - "127.0.0.1" + + - name: "kibana" + dns: + - "lme-kibana" + - "localhost" + ip: + - "127.0.0.1" + + - name: "fleet-server" + dns: + - "lme-fleet-server" + - "localhost" + ip: + - "127.0.0.1" + + - name: "wazuh-manager" + dns: + - "lme-wazuh-manager" + - "localhost" + ip: + - "127.0.0.1" + + - name: "logstash" + dns: + - "logstash" + - "localhost" + ip: + - "127.0.0.1" + + - name: "curator" + dns: + - "curator" + - "localhost" + ip: + - "127.0.0.1" + + - name: "caddy" + dns: + - "lme-caddy" + - "localhost" + ip: + - "127.0.0.1" diff --git a/config/wazuh_cluster/wazuh_manager.conf b/config/wazuh_cluster/wazuh_manager.conf new file mode 100644 index 00000000..694213da --- /dev/null +++ b/config/wazuh_cluster/wazuh_manager.conf @@ -0,0 +1,385 @@ + + + + + yes + yes + no + no + no + smtp.example.wazuh.com + wazuh@example.wazuh.com + recipient@example.wazuh.com + 12 + alerts.log + 10m + 0 + + + + 3 + 12 + + + + + plain + + + + secure + 1514 + tcp + 131072 + + + + + no + yes + yes + yes + yes + yes + yes + yes + + + 43200 + + etc/rootcheck/rootkit_files.txt + etc/rootcheck/rootkit_trojans.txt + + yes + + + + yes + 1800 + 1d + yes + + wodles/java + wodles/ciscat + + + + + yes + yes + /var/log/osquery/osqueryd.results.log + /etc/osquery/osquery.conf + yes + + + + + no + 1h + yes + yes + yes + yes + yes + yes + yes + + + + 10 + + + + + yes + yes + 12h + yes + + + + yes + 5m + 6h + yes + + + + yes + trusty + xenial + bionic + focal + jammy + 1h + + + + + no + buster + bullseye + bookworm + 1h + + + + + no + 5 + 6 + 7 + 8 + 9 + 1h + + + + + no + amazon-linux + amazon-linux-2 + amazon-linux-2022 + 1h + + + + + no + 11-server + 11-desktop + 12-server + 12-desktop + 15-server + 15-desktop + 1h + + + + + no + 1h + + + + + no + 8 + 9 + 1h + + + + + yes + 1h + + + + + yes + 1h + + + + + + + no + + + 43200 + + yes + + + yes + + + no + + + /etc,/usr/bin,/usr/sbin + /bin,/sbin,/boot + + + /etc/mtab + /etc/hosts.deny + /etc/mail/statistics + /etc/random-seed + /etc/random.seed + /etc/adjtime + /etc/httpd/logs + /etc/utmpx + /etc/wtmpx + /etc/cups/certs + /etc/dumpdates + /etc/svc/volatile + + + .log$|.swp$ + + + /etc/ssl/private.key + + yes + yes + yes + yes + + + 10 + + + 50 + + + + yes + 5m + 10 + + + + + + 127.0.0.1 + ^localhost.localdomain$ + 172.31.0.2 + + + + disable-account + disable-account + yes + + + + restart-wazuh + restart-wazuh + + + + firewall-drop + firewall-drop + yes + + + + host-deny + host-deny + yes + + + + route-null + route-null + yes + + + + win_route-null + route-null.exe + yes + + + + netsh + netsh.exe + yes + + + + + + + command + df -P + 360 + + + + full_command + netstat -tulpn | sed 's/\([[:alnum:]]\+\)\ \+[[:digit:]]\+\ \+[[:digit:]]\+\ \+\(.*\):\([[:digit:]]*\)\ \+\([0-9\.\:\*]\+\).\+\ \([[:digit:]]*\/[[:alnum:]\-]*\).*/\1 \2 == \3 == \4 \5/' | sort -k 4 -g | sed 's/ == \(.*\) ==/:\1/' | sed 1,2d + netstat listening ports + 360 + + + + full_command + last -n 20 + 360 + + + + + ruleset/decoders + ruleset/rules + 0215-policy_rules.xml + etc/lists/audit-keys + etc/lists/amazon/aws-eventnames + etc/lists/security-eventchannel + + + etc/decoders + etc/rules + + + + yes + 1 + 64 + 15m + + + + + no + 1515 + no + yes + no + HIGH:!ADH:!EXP:!MD5:!RC4:!3DES:!CAMELLIA:@STRENGTH + + no + etc/sslmanager.cert + etc/sslmanager.key + no + + + + wazuh + node01 + master + + 1516 + 0.0.0.0 + + NODE_IP + + no + yes + + + + + + + syslog + /var/ossec/logs/active-responses.log + + + + syslog + 
/var/log/dpkg.log + + + diff --git a/docs/markdown/chapter3/chapter3.md b/docs/markdown/chapter3/chapter3.md index a62ddcc0..c963ca22 100644 --- a/docs/markdown/chapter3/chapter3.md +++ b/docs/markdown/chapter3/chapter3.md @@ -15,7 +15,7 @@ In this chapter you will: This section covers the installation and configuration of the Database and search functionality on a Linux server. We will install the โ€˜ELKโ€™ Stack from Elasticsearch for this portion. What is the ELK Stack? -"ELK" is the acronym for three free and open projects: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a serverโ€‘side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch. +"ELK" is the acronym for three open projects which come at no cost to users: Elasticsearch, Logstash, and Kibana. Elasticsearch is a search and analytics engine. Logstash is a serverโ€‘side data processing pipeline that ingests data from multiple sources simultaneously, transforms it, and then sends it to a "stash" like Elasticsearch. Kibana lets users visualize data with charts and graphs in Elasticsearch. ![Elkstack components](/docs/imgs/elkstack.jpg)
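To make the ingest/transform/stash flow concrete, here is a minimal hypothetical Logstash pipeline (illustrative only; not the `logstash.conf` shipped with LME):

```
input {
  beats { port => 5044 }                  # receive events from Beats shippers such as Winlogbeat
}
filter {
  mutate { add_tag => ["lme-example"] }   # a trivial transform step
}
output {
  elasticsearch { hosts => ["https://localhost:9200"] }   # the "stash"
}
```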

diff --git a/docs/markdown/maintenance/upgrading.md b/docs/markdown/maintenance/upgrading.md
index 78ac8242..5f48ea70 100644
--- a/docs/markdown/maintenance/upgrading.md
+++ b/docs/markdown/maintenance/upgrading.md
@@ -6,8 +6,6 @@ Below you can find the upgrade paths that are currently supported and what steps
 Applying these changes is automated for any new installations. But, if you have an existing installation, you need to conduct some extra steps. **Before performing any of these steps it is advised to take a backup of the current installation using the method described [here](/docs/markdown/maintenance/backups.md).**

-To upgrade to the latest version from Release 1.2.0 to Release 1.3.0 [go here](#6-upgrade-from-120-to-130).
-
 ## 1. Finding your LME version (and the components versions)

 When reporting an issue or suggesting improvements, it is important to include the versions of all the components, where possible. This ensures that the issue has not already been fixed!
@@ -26,10 +24,10 @@ When reporting an issue or suggesting improvements, it is important to include t

 ## 2. Upgrade from versions prior to v0.5

-LME does not support upgrading directly from versions prior to 0.5 to 1.0. Prior to switching to CISA's repo, first upgrade to the latest version of LME published by the NCSC (v0.5.1). Then follow the instructions above to upgrade to v1.0.
+LME does not support upgrading directly from versions prior to v0.5 to v1.0. Prior to switching to CISA's repo, first upgrade to the latest version of LME published by the NCSC (v0.5.1). Then follow the instructions above to upgrade to v1.0.

-## 3. Upgrade from v0.5 to 1.0.0
+## 3. Upgrade from v0.5 to v1.0.0

 Since LME's transition from the NCSC to CISA, the location of the LME repository has changed from `https://github.com/ukncsc/lme` to `https://github.com/cisagov/lme`. To obtain any further updates to LME on the ELK server, you will need to transition to the new git repository. Because vital configuration files are stored within the same folder as the git repo, it's simpler to copy the old LME folder to a different location, clone the new repo, copy the files and folders unique to your system, and then optionally delete the old folder. You can do this by running the following commands:
@@ -111,55 +109,40 @@ LME v1.0 made a minor change to the file structure used in the SYSVOL folder, so
 3. Is the LME folder inside SYSVOL properly structured? Refer to the checklist listed at the end of chapter 2.
 4. Are the events from all clients visible inside elastic? Refer to [4.1.2 Check you are receiving logs](/docs/markdown/chapter4.md#412-check-you-are-receiving-logs).

+## 4. Upgrade to v1.3.1

-## 4. Upgrade from 1.0.0 to 1.1.0
-To fetch the latest changes, on the Linux server, run the following commands as root:
-```
-cd /opt/lme
-git pull
-```
+This is a hotfix to the install script, together with additional troubleshooting documentation on space management. Unless you're encountering problems with your current installation or your logs are running out of space, there's no need to upgrade to v1.3.1; it contains no other functional changes.

-To manually update the dashboards, see [How to update dashboards](/Chapter%204%20Files/dashboards#how-to-update-dashboards).

+## 5. Upgrade to v1.3.2

-Additionally, to fix a potential file permission issue present in v1.0.0, run the following command on the Linux server:
-```
-sudo chown -R 1000:1000 /opt/lme/backups
-```
+This is a hotfix to address dashboards that failed to load on a fresh install of v1.3.1. If you are currently running v1.3.0, you do not need to upgrade at this time. If you are running versions **before** 1.3.0 or are running v1.3.1, we recommend you upgrade to the latest version.

-See [Directory permission issues](/docs/markdown/reference/troubleshooting.md#directory-permission-issues) for more details.

+Please refer to the [Upgrading to latest version](/docs/markdown/maintenance/upgrading.md#upgrading-to-latest-version) section to apply the hotfix.

+## 6. v1.3.3 - Update on data retention failure during LME install

-## 5. Upgrade from 1.1.0 to 1.2.0
-To fetch the latest changes, on the Linux server, run the following commands as root:
+This is a hotfix to address a data retention failure in the deploy.sh script during a fresh LME install. We recommend you upgrade to the latest version if you require disk sizes of 1TB or greater.
+
+If you've tried to install LME before, then run the following commands as root:
 ```
+git pull
+git checkout main
 cd /opt/lme/Chapter\ 3\ Files/
 sudo ./deploy.sh uninstall
-cd /opt/lme
-git pull
-cd Chapter\ 3\ Files/
-sudo ./deploy.sh install
+sudo docker volume rm lme-esdata
+sudo docker volume rm lme-logstashdata
+sudo ./deploy.sh install
 ```

-The deploy.sh script should have now created new files on the Linux server at location /opt/lme/files_for_windows.zip . This file needs to be copied across and used on the Windows Event Collector server like it was explained in Chapter 3 sections [3.2.4 & 3.3 ](/docs/markdown/chapter3/chapter3.md#324-download-files-for-windows-event-collector).
-
-Then reboot your Client computers & Windows Event Collector. On Windows Event Collector open services.msc as an administrator and make sure the winlogbeat service is set to start automatically, and is running.
-
-## 6. Upgrade from 1.2.0 to 1.3.0
-To fetch the latest changes, run the following commands as root on the Linux server:
+## 7. Upgrade to latest version
+To fetch the latest changes, on the Linux server, run the following commands as root:
 ```
+git pull
+git checkout main
 cd /opt/lme/Chapter\ 3\ Files/
 sudo ./deploy.sh uninstall
-cd /opt/lme
-git pull
-cd Chapter\ 3\ Files/
 sudo ./deploy.sh install
 ```

 The deploy.sh script should have now created new files on the Linux server at /opt/lme/files_for_windows.zip. This file needs to be copied across and used on the Windows Event Collector server as explained in Chapter 3 sections [3.2.4 & 3.3](/docs/markdown/chapter3/chapter3.md#324-download-files-for-windows-event-collector).

-Then reboot your Client computers & Windows Event Collector. On Windows Event Collector open services.msc as an administrator and make sure the winlogbeat service is set to start automatically, and is running.
-
-
-
-
-
diff --git a/docs/markdown/prerequisites.md b/docs/markdown/prerequisites.md
index fc54e515..f34e9ed0 100644
--- a/docs/markdown/prerequisites.md
+++ b/docs/markdown/prerequisites.md
@@ -29,7 +29,7 @@ Figure 1: High level overview, linking to documentation chapters

 The portions of this package developed by the United States government are distributed under the Creative Commons 0 ("CC0") license.
Portions created by government contractors at the behest of CISA are provided with the explicit grant of right to use, modify, and redistribute the code subject to this statement and the existing license structure. All other portions, including new submissions from all others, are subject to the Apache License, Version 2.0. This project (scripts, documentation, and so on) is licensed under the [Apache License 2.0 and Creative Commons 0](../../LICENSE).

-The design uses free and open software, we will maintain a pledge to ensure that no paid software licenses are needed above standard infrastructure costs (With the exception of Windows Operating system Licensing).
+The design uses open software which comes at no cost to the user; we will maintain a pledge to ensure that no paid software licenses are needed above standard infrastructure costs (with the exception of Windows operating system licensing).

 You will need to pay for hosting, bandwidth and time; for an estimate of server specs that might be needed see this [blog post from Elasticsearch](https://www.elastic.co/blog/benchmarking-and-sizing-your-elasticsearch-cluster-for-logs-and-metrics). Then use your estimated server specs to determine a price for an on-prem or cloud deployment.
diff --git a/docs/markdown/reference/dashboard-descriptions.md b/docs/markdown/reference/dashboard-descriptions.md
new file mode 100644
index 00000000..0848b9a5
--- /dev/null
+++ b/docs/markdown/reference/dashboard-descriptions.md
@@ -0,0 +1,40 @@
+# Dashboard Descriptions
+
+## Purpose
+Logging Made Easy (LME) releases new dashboards on GitHub periodically. The sections below describe each dashboard.
+
+## User Human Resources
+
+The User Human Resources Dashboard provides a comprehensive overview of network activity and displays domains, users, workstations, activity times and days of the week. It includes details on general logon events and logoff events, and distinguishes between in-person and remote logons. Analogous to a security guard monitoring a camera, the dashboard facilitates network monitoring by revealing overall network traffic, user locations, peak hours and the ratio of remote-to-in-person logons. Users can filter and analyze individual or specific computer activity logs.
+
+## Computer Software Overview
+
+The Computer Software Overview Dashboard displays application usage on host computers, logging events for application failures, hangs and external connection attempts. Monitoring application usage is crucial for assessing network health, as frequent crashes may indicate larger issues, and applications making frequent external requests could signal malicious activity.
+
+## Security Log
+
+The Security Log Dashboard actively presents forwarded security log events, tallies failed logon attempts, identifies computers with failed logon events, specifies reasons for failed logons, distinguishes types of logons, and reports on credential status (clear text or cached). It also discloses whether the event log or Windows Security audit log is cleared, highlights user account changes and notes the assignment of special privileges to a logon session. Users can quickly detect unusual events, prompting further investigation and remediation actions.
+
+## Process Explorer
+
+The Process Explorer Dashboard thoroughly monitors networks, tracking processes, users, processes per user, files, filenames in the download directory, Sysmon process creation and registry events. It offers user-friendly filtering for process names and process identifiers (PIDs).
The download directory is often targeted for initial malware installations due to its lenient write privileges. Use this dashboard to investigate unusual registry changes and to closely examine spikes in processes created by specific users, as these could indicate potential malicious activity.
+
+## Sysmon Summary
+
+The Sysmon Summary Dashboard highlights Sysmon events and features event count, event types, the percentage breakdown by event code and top hosts generating Sysmon data. Vigilance towards any deviations or shifts in activity levels helps administrators promptly identify both desired and undesired activities.
+
+## User Security
+
+The User Security Dashboard provides a comprehensive view of network activity and showcases logon attempts, user logon/logoff events, logged-on computers and detailed network connections by country and protocol. Additionally, it highlights critical information such as PowerShell events, references to temporary files and Windows Defender alerts for malware detection and actions taken. The dashboard supports effective monitoring by allowing users to filter events based on users, domains and hosts. Understanding the nature and origin of network connections is vital, and the dashboard facilitates the identification of suspicious activities, enabling operators to target their inquiries for enhanced network health assessment.
+
+## Alert
+
+The Alert Dashboard enables users to define rules that detect complex conditions within networks/environments and to trigger actions when suspicious activity is detected. It ships with pre-built rules that detect suspicious activities, along with options for scheduling how often those conditions are checked and which actions are taken when they are met.
+
+## Healthcheck
+
+The HealthCheck Dashboard lets users view events such as unexpected shutdowns, events by each machine, total hosts and the total number of logged-in admins, with data based on a selected date range. Users can verify the health of their system by watching for events such as more admin users than expected or an unexpected shutdown.
+
+
+
+For more information or to seek additional help, [Click Here](https://github.com/cisagov/LME)
diff --git a/docs/markdown/reference/troubleshooting.md b/docs/markdown/reference/troubleshooting.md
index 45c597ac..140d9d87 100644
--- a/docs/markdown/reference/troubleshooting.md
+++ b/docs/markdown/reference/troubleshooting.md
@@ -327,3 +327,62 @@ sudo curl -X POST "https://127.0.0.1:9200/_security/user/elastic/_password" -H "
 Replace 'currentpassword' with your current password and 'newpassword' with the password you would like to change it to. Utilize environment variables in place of currentpassword and newpassword to avoid saving your password to console history.

 If not, we recommend you clear your history after changing the password with ```history -c```
+
+## Index Management
+
+If your hard disk is filling up too quickly, you can use these steps to delete logs earlier than your current settings specify.
+
+1. **Log in to Elastic**
+   - Access the Elastic platform and log in with your credentials.
+
+2. **Navigate to Management Section**
+   - In the main menu, scroll down to "Management."
+
+3. **Access Stack Management**
+   - Within the Management section, select "Stack Management."
+
+4. **Select Index Lifecycle Policies**
+   - In Stack Management, find and choose "Index Lifecycle Policies."
+
+5. **Choose the Relevant ILM Policy**
+   - From the list, select `lme_ilm_policy` for editing.
+
+6. **Adjust the Hot Phase Settings**
+   - Navigate to the 'Hot Phase' section.
+   - Expand 'Advanced settings'.
+   - Uncheck "Use recommended defaults."
+   - Change the "Maximum age" setting to match your desired delete phase duration.
+
+   > **Note:** Aligning the maximum age in the hot phase with the delete phase ensures consistency in data retention.
+
+7. **Adjust the Delete Phase Settings**
+   - Scroll to the 'Delete Phase' section.
+   - Find and adjust the "Move data into phase when:" setting.
+   - Ensure the delete phase duration matches the maximum age set in the hot phase.
+
+   > **Note:** This setting determines the deletion timing of your logs. Be sure to back up necessary data before making changes.
+
+8. **Save Changes**
+   - Save the adjustments you've made.
+
+9. **Verify the Changes**
+   - Review and ensure that the changes are functioning as intended. Indices may not delete immediately; allow time for the job to run.
+
+10. **Document the Changes**
+   - Record the modifications for future reference.
+
+You can also manually delete an index from the GUI under Management > Index Management, or by using the following command:
+
+```
+curl -X DELETE "https://127.0.0.1:9200/your_index_name" -H "Content-Type: application/json" --cacert /opt/lme/Chapter\ 3\ Files/certs/root-ca.crt -u elastic:yourpassword
+```
+> **Note:** Ensure this is not your current winlogbeat index in use. You should only delete indices that have already rolled over; e.g., if you have indices winlogbeat-00001 and winlogbeat-00002, do NOT delete winlogbeat-00002.
+
+If you only have one index, you can manually force a rollover with the following command:
+
+```
+curl -X POST "https://127.0.0.1:9200/winlogbeat-alias/_rollover" -H "Content-Type: application/json" --cacert /opt/lme/Chapter\ 3\ Files/certs/root-ca.crt -u elastic:yourpassword
+```
+
+This will roll over winlogbeat-00001 and create winlogbeat-00002. You can now manually delete 00001.
+
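Before deleting anything, it can help to list the winlogbeat indices so you can confirm which one is the current write index (a sketch reusing the same certificate path and credentials as the commands above):

```
curl -s "https://127.0.0.1:9200/_cat/indices/winlogbeat-*?v" --cacert /opt/lme/Chapter\ 3\ Files/certs/root-ca.crt -u elastic:yourpassword
```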
+ diff --git a/quadlet/lme-caddy.container b/quadlet/lme-caddy.container new file mode 100644 index 00000000..822fb08a --- /dev/null +++ b/quadlet/lme-caddy.container @@ -0,0 +1,22 @@ +# lme-caddy.container +[Unit] +Description=Caddy Container +Requires=lme-setup-certs.service +After=lme-setup-certs.service +PartOf=lme.service + +[Install] +WantedBy=default.target lme.service + +[Service] +Restart=always + +[Container] +ContainerName=lme-caddy +Image=localhost/caddy:LME_LATEST +Network=lme +PodmanArgs=--network-alias lme-caddy +PublishPort=80:80 +PublishPort=443:443 +Volume=/opt/lme/config/caddy:/etc/caddy/ +Volume=lme_certs:/etc/caddy/certs diff --git a/quadlet/lme-elasticsearch.container b/quadlet/lme-elasticsearch.container new file mode 100644 index 00000000..31b66689 --- /dev/null +++ b/quadlet/lme-elasticsearch.container @@ -0,0 +1,28 @@ +# lme-elasticsearch.container +[Unit] +Description=Elasticsearch Container Service +Requires=lme-network.service lme-setup-certs.service +After=lme-network.service lme-setup-certs.service +PartOf=lme.service + +[Service] +Restart=always + +[Install] +WantedBy=default.target lme.service + +[Container] +ContainerName=lme-elasticsearch +#TODO: set discovery mode/cluster.name via environment +Environment=node.name=lme-elasticsearch cluster.name=LME bootstrap.memory_lock=true discovery.type=single-node xpack.security.enabled=true xpack.security.http.ssl.enabled=true xpack.security.http.ssl.key=certs/elasticsearch/elasticsearch.key xpack.security.http.ssl.certificate=certs/elasticsearch/elasticsearch.chain.pem xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt xpack.security.http.ssl.verification_mode=certificate xpack.security.http.ssl.client_authentication=optional xpack.security.transport.ssl.enabled=true xpack.security.transport.ssl.key=certs/elasticsearch/elasticsearch.key xpack.security.transport.ssl.certificate=certs/elasticsearch/elasticsearch.crt xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt xpack.security.transport.ssl.verification_mode=certificate xpack.security.transport.ssl.client_authentication=optional xpack.license.self_generated.type=basic +#TODO: set password in here via script AND load via credential +EnvironmentFile=/opt/lme/lme-environment.env +Image=localhost/elasticsearch:LME_LATEST +Network=lme +PodmanArgs=--memory 8gb --network-alias lme-elasticsearch --health-interval=2s +PublishPort=9200:9200 +Ulimit=memlock=-1:-1 +Volume=lme_certs:/usr/share/elasticsearch/config/certs +Volume=lme_esdata01:/usr/share/elasticsearch/data +Notify=healthy +HealthCmd=CMD-SHELL curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials' diff --git a/quadlet/lme-fleet-server.container b/quadlet/lme-fleet-server.container new file mode 100644 index 00000000..b28761af --- /dev/null +++ b/quadlet/lme-fleet-server.container @@ -0,0 +1,25 @@ +# lme-fleet-server.container +[Unit] +Description=Fleet Container Service +Requires=lme-elasticsearch.service lme-kibana.service +After=lme-elasticsearch.service lme-kibana.service +PartOf=lme.service + +[Service] +Restart=always + +[Install] +WantedBy=default.target lme.service + +[Container] +ContainerName=lme-fleet-server +Environment=FLEET_ENROLL=1 FLEET_SERVER_POLICY_ID=fleet-server-policy FLEET_SERVER_ENABLE=1 KIBANA_FLEET_SETUP=1 KIBANA_HOST=https://lme-kibana:5601 FLEET_URL=https://lme-fleet-server:8220 FLEET_SERVER_ELASTICSEARCH_HOST=https://lme-elasticsearch:9200 FLEET_CA=/certs/ca/ca.crt 
FLEET_SERVER_CERT=/certs/fleet-server/fleet-server.crt FLEET_SERVER_CERT_KEY=/certs/fleet-server/fleet-server.key FLEET_SERVER_ELASTICSEARCH_CA=/certs/ca/ca.crt KIBANA_FLEET_CA=/certs/ca/ca.crt NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt +#TODO: set password in here via script AND load via credential +EnvironmentFile=/opt/lme/lme-environment.env +Image=localhost/elastic-agent:LME_LATEST +Network=lme +HostName=lme-fleet-server +PodmanArgs=--network-alias lme-fleet-server --requires 'lme-elasticsearch,lme-kibana' +PublishPort=8220:8220 +User=root +Volume=lme_certs:/certs:z diff --git a/quadlet/lme-kibana.container b/quadlet/lme-kibana.container new file mode 100644 index 00000000..2267c5d1 --- /dev/null +++ b/quadlet/lme-kibana.container @@ -0,0 +1,29 @@ +# lme-kibana.container +[Unit] +Description=Kibana Container Service +Requires=lme-setup-accts.service lme-elasticsearch.service +After=lme-setup-accts.service lme-elasticsearch.service +PartOf=lme.service + +[Install] +WantedBy=default.target lme.service + +[Service] +Restart=always +TimeoutStartSec=900 #5 minutes, kibana can be slow + +[Container] +ContainerName=lme-kibana +Environment=SERVER_NAME=lme-kibana ELASTICSEARCH_HOSTS=https://lme-elasticsearch:9200 ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt SERVER_SSL_ENABLED=true SERVER_SSL_CERTIFICATE=config/certs/kibana/kibana.crt SERVER_SSL_KEY=config/certs/kibana/kibana.key SERVER_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt NODE_EXTRA_CA_CERTS=/etc/ssl/certs/ca-certificates.crt NODE_OPTIONS=--max-old-space-size=4096 +#TODO: set password in here via script AND load via credential +EnvironmentFile=/opt/lme/lme-environment.env +Image=localhost/kibana:LME_LATEST +Network=lme +PodmanArgs=--memory 4gb --network-alias lme-kibana --requires lme-elasticsearch --health-interval=2s +#PublishPort=5601:5601 +Volume=lme_certs:/usr/share/kibana/config/certs:z +Volume=lme_kibanadata:/usr/share/kibana/data +Volume=/opt/lme/config/kibana.yml:/usr/share/kibana/config/kibana.yml:Z +Volume=/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro +HealthCmd=CMD-SHELL curl -I -s --cacert config/certs/ca/ca.crt https://localhost:5601 | grep -q 'HTTP/1.1 302 Found' +Notify=healthy diff --git a/quadlet/lme-setup-accts.container b/quadlet/lme-setup-accts.container new file mode 100644 index 00000000..33f536b2 --- /dev/null +++ b/quadlet/lme-setup-accts.container @@ -0,0 +1,24 @@ +# lme-elasticsearch-security-setup.container +[Unit] +Requires=lme-network.service lme-setup-certs.service +After=lme-network.service lme-setup-certs.service +PartOf=lme.service + +[Service] +Type=oneshot +RemainAfterExit=yes + +[Install] +WantedBy=default.target + +[Container] +ContainerName=lme-setup-accts +EnvironmentFile=/opt/lme/lme-environment.env +Exec=/bin/bash /usr/share/elasticsearch/config/setup/acct-init.sh +Image=localhost/elasticsearch:LME_LATEST +Network=lme +PodmanArgs=--network-alias lme-setup --health-interval=2s +User=0 +Volume=lme_certs:/usr/share/elasticsearch/config/certs +Volume=/opt/lme/config/setup:/usr/share/elasticsearch/config/setup + diff --git a/quadlet/lme-setup-certs.container b/quadlet/lme-setup-certs.container new file mode 100644 index 00000000..3c03e726 --- /dev/null +++ b/quadlet/lme-setup-certs.container @@ -0,0 +1,24 @@ +# lme-elasticsearch-security-setup.container +[Unit] +Requires=lme-network.service +After=lme.service lme-network.service +PartOf=lme.service + +[Service] +Type=oneshot +RemainAfterExit=yes + +[Install] +WantedBy=default.target 
lme.service + +[Container] +ContainerName=lme-setup-certs +EnvironmentFile=/opt/lme/lme-environment.env +Exec=/bin/bash /usr/share/elasticsearch/config/setup/init-setup.sh +Image=localhost/elasticsearch:LME_LATEST +Network=lme +PodmanArgs=--network-alias lme-setup --health-interval=2s +User=0 +Volume=lme_certs:/usr/share/elasticsearch/config/certs +Volume=/opt/lme/config/setup:/usr/share/elasticsearch/config/setup + diff --git a/quadlet/lme-wazuh-manager.container b/quadlet/lme-wazuh-manager.container new file mode 100644 index 00000000..14263d5c --- /dev/null +++ b/quadlet/lme-wazuh-manager.container @@ -0,0 +1,47 @@ +# lme-wazuh-manager.container +[Unit] +Description=Wazuh Container Service +After=lme-elasticsearch.service lme-kibana.service +Requires=lme-elasticsearch.service +PartOf=lme.service + +[Service] +Restart=always +LimitNOFILE=655360 + + +[Install] +WantedBy=default.target lme.service + +[Container] +ContainerName=lme-wazuh-manager +Environment=INDEXER_URL=https://lme-elasticsearch:9200 FILEBEAT_SSL_VERIFICATION_MODE=full SSL_CERTIFICATE_AUTHORITIES=/etc/wazuh-manager/certs/ca/ca.crt SSL_CERTIFICATE=/etc/wazuh-manager/certs/wazuh-manager/wazuh-manager.crt SSL_KEY=/etc/wazuh-manager/certs/wazuh-manager/wazuh-manager.key +#TODO: set password in here via script AND load via credential +EnvironmentFile=/opt/lme/lme-environment.env +HostName=wazuh-manager +Image=localhost/wazuh-manager:LME_LATEST +Network=lme +PodmanArgs=--network-alias lme-wazuh-manager +PublishPort=1514:1514 +PublishPort=1515:1515 +PublishPort=514:514/udp +PublishPort=55000:55000 +Ulimit=memlock=-1:-1 +#Set above, leaving here for posterity, systemctl doesn't allow containers to set ulimits +#Ulimit=nofile=655360:655360 +Volume=lme_wazuh_api_configuration:/var/ossec/api/configuration +Volume=lme_wazuh_etc:/var/ossec/etc +Volume=lme_wazuh_logs:/var/ossec/logs +Volume=lme_wazuh_queue:/var/ossec/queue +Volume=lme_wazuh_logs:/var/ossec/logs +Volume=lme_wazuh_var_multigroups:/var/ossec/var/multigroups +Volume=lme_wazuh_integrations:/var/ossec/integrations +Volume=lme_wazuh_active_response:/var/ossec/active-response/bin +Volume=lme_wazuh_agentless:/var/ossec/agentless +Volume=lme_wazuh_wodles:/var/ossec/wodles +Volume=lme_filebeat_etc:/etc/filebeat +Volume=lme_filebeat_var:/var/lib/filebeat +Volume=/opt/lme/config/wazuh_cluster/wazuh_manager.conf:/wazuh-config-mount/etc/ossec.conf +Volume=lme_certs:/etc/wazuh-manager/certs:ro +Volume=/etc/ssl/certs/ca-certificates.crt:/etc/ssl/certs/ca-certificates.crt:ro + diff --git a/quadlet/lme.network b/quadlet/lme.network new file mode 100644 index 00000000..12780e02 --- /dev/null +++ b/quadlet/lme.network @@ -0,0 +1,7 @@ +# lme.network +[Network] +Driver=bridge +Gateway=10.89.4.1 +IPAMDriver=host-local +NetworkName=lme +Subnet=10.89.4.0/24 diff --git a/quadlet/lme.service b/quadlet/lme.service new file mode 100644 index 00000000..aa44c424 --- /dev/null +++ b/quadlet/lme.service @@ -0,0 +1,16 @@ +[Unit] +Description=LME service orchestrator runs all the service files + +[Install] +WantedBy=default.target + +[Service] +# Exits after it starts the service +Type=oneshot +# Execute dummy program +ExecStart=/bin/true +# This service shall be considered active after start +RemainAfterExit=yes + + + diff --git a/scripts/download.sh b/scripts/download.sh new file mode 100755 index 00000000..416cecb5 --- /dev/null +++ b/scripts/download.sh @@ -0,0 +1,36 @@ +#!/usr/bin/env bash +source .env +USER=elastic +PASSWORD=${ELASTIC_PASSWORD_ESCAPED} +PROTO=https +REMOTE=10.20.0.174:9200 + +#TODO: 
make this a cli flag +#------------ edit this----------- +#assumes files are INDEX_mapping.json + INDEX.json +# mapping + logs +DIR=/data/logs/ +INDICES=$(ls ${DIR} | cut -f -3 -d '.' | grep -v "_mapping"| grep -v "template"| sort | uniq) +#INDICES=$("elastalert_status" "elastalert_status_error" "elastalert_status_past" "elastalert_status_silence" "elastalert_status") + + +#------------ edit this ----------- + +echo -e "\n\ncheck \`podman logs -f CONTAINER_NAME\` for verbose output\n\n" +echo -e "\n--Uploading: --\n" +for x in ${INDICES}; +do + echo "podman runs for $x:" + podman run -it -d -v ${DIR}${x}_mapping.json:/tmp/data.json -e NODE_TLS_REJECT_UNAUTHORIZED=0 --userns="" --network=host elasticdump/elasticsearch-dump --output=/tmp/data.json --input=${PROTO}://${USER}:${PASSWORD}@localhost:9200/${x} --type=mapping + + podman run -v ${DIR}${x}:/tmp/ -e NODE_TLS_REJECT_UNAUTHORIZED=0 --userns="" --network=host --rm -ti elasticdump/elasticsearch-dump --input=http://${REMOTE}/${x} --output=/tmp/${x}.json --limit 5000 + echo "" +done + +## cleanup: +echo "--to cleanup when done:--" +echo "podman ps -a --format \"{{.Image}} {{.Names}}\" | grep -i "elasticdump" | awk \'{print $2}\' | xargs podman rm" + +tot=$(wc -l $(ls ${DIR} | grep -v "_mapping" | xargs -I{} echo ${DIR}{})) +echo -e "\n--Expected Log #:\n $tot--" + diff --git a/scripts/gen_cert.sh b/scripts/gen_cert.sh new file mode 100755 index 00000000..bec3fb8d --- /dev/null +++ b/scripts/gen_cert.sh @@ -0,0 +1,29 @@ +#!/usr/bin/env bash +source .env + +#set via cli arg +CERT_DIR=${1:-caddy/certs} + +## generate CA: +echo "creating CA CRT" +export CERT_STRING='/C=US/ST=DC/L=Washington/O=CISA' +openssl genrsa -out ${CERT_DIR}/root-ca.key 4096 +openssl req -new -key ${CERT_DIR}/root-ca.key -out ${CERT_DIR}/root-ca.csr -sha256 -subj "$CERT_STRING/CN=LME" +openssl x509 -req -days 3650 -in ${CERT_DIR}/root-ca.csr -signkey ${CERT_DIR}/root-ca.key -sha256 -out ${CERT_DIR}/root-ca.crt + +echo "creating caddy CRT" +openssl genrsa -out ${CERT_DIR}/caddy.key 4096 +openssl req -new -key ${CERT_DIR}/caddy.key -out ${CERT_DIR}/caddy.csr -sha256 -subj "$CERT_STRING/CN=caddy" + +#set openssl so that this cert can only perform server auth and cannot sign certs +{ + echo "[server]" + echo "authorityKeyIdentifier=keyid,issuer" + echo "basicConstraints = critical,CA:FALSE" + echo "extendedKeyUsage=serverAuth,clientAuth" + echo "keyUsage = critical, digitalSignature, keyEncipherment" + #echo "subjectAltName = DNS:elasticsearch, IP:127.0.0.1" + echo "subjectAltName = DNS:ls1, IP:127.0.0.1" + echo "subjectKeyIdentifier=hash" +} >${CERT_DIR}/caddy.cnf +openssl x509 -req -days 3650 -in ${CERT_DIR}/caddy.csr -sha256 -CA ${CERT_DIR}/root-ca.crt -CAkey ${CERT_DIR}/root-ca.key -CAcreateserial -out ${CERT_DIR}/caddy.crt -extfile ${CERT_DIR}/caddy.cnf -extensions server diff --git a/scripts/install_lme_local.yml b/scripts/install_lme_local.yml new file mode 100644 index 00000000..17f0461d --- /dev/null +++ b/scripts/install_lme_local.yml @@ -0,0 +1,216 @@ +--- +- name: Install LME on localhost + hosts: localhost + connection: local + become: no # Default to no privilege escalation + vars: + clone_directory: "{{ clone_dir | default('~/LME') }}" + install_user: "{{ ansible_user_id }}" + + tasks: + - name: Expand clone_directory path + set_fact: + clone_directory: "{{ clone_directory | expanduser }}" + + - name: Ensure /opt/lme directory exists + file: + path: /opt/lme + state: directory + owner: "{{ install_user }}" + group: "{{ install_user }}" + mode: '0700' + become: 
yes + + - name: Check if lme-environment.env exists + stat: + path: "{{ clone_directory }}/config/lme-environment.env" + register: env_file + + - name: Fail if lme-environment.env doesn't exist + fail: + msg: "lme-environment.env file not found in {{ clone_directory }}/config/. Please copy example.env to lme-environment.env in the config directory and edit it before running this playbook." + when: not env_file.stat.exists + + - name: Move lme-environment.env to /opt/lme + command: "mv {{ clone_directory }}/config/lme-environment.env /opt/lme/lme-environment.env" + become: yes + + - name: Set correct permissions for lme-environment.env + file: + path: /opt/lme/lme-environment.env + owner: "{{ install_user }}" + group: "{{ install_user }}" + mode: '0600' + become: yes + + - name: Check sudo setup + command: sudo -n true + register: sudo_check + ignore_errors: yes + changed_when: false + + - name: Display sudo information + debug: + msg: "{{ 'Passwordless sudo is available.' if sudo_check.rc == 0 else 'Sudo will require a password for privileged operations.' }}" + + - name: Ensure sudo access + command: sudo -n true + changed_when: false + + - name: Update apt cache + apt: + update_cache: yes + become: yes + + - name: Install required packages + apt: + name: + - jq + - uidmap + - nix-bin + - nix-setup-systemd + state: present + become: yes + + - name: Add Nix channel + command: nix-channel --add https://nixos.org/channels/nixpkgs-unstable nixpkgs + become: yes + + - name: Update Nix channel + command: nix-channel --update + become: yes + + - name: Add user to nix-users group + user: + name: "{{ install_user }}" + groups: nix-users + append: yes + become: yes + + - name: Restart Nix daemon + command: systemctl restart nix-daemon + become: yes + + - name: Update PATH for Ansible execution + set_fact: + ansible_env: "{{ ansible_env | combine({'PATH': ansible_env.PATH ~ ':/nix/var/nix/profiles/default/bin'}) }}" + + - name: Install Podman using Nix + command: nix-env -iA nixpkgs.podman + become: yes + environment: + PATH: "{{ ansible_env.PATH }}" + + - name: Update PATH in user's bashrc + lineinfile: + path: "~/.bashrc" + line: 'export PATH=$PATH:/nix/var/nix/profiles/default/bin' + create: yes + + - name: Update PATH in root's bashrc + lineinfile: + path: "/root/.bashrc" + line: 'export PATH=$PATH:/nix/var/nix/profiles/default/bin' + create: yes + become: yes + + - name: Set sysctl limits + command: "{{ clone_directory }}/scripts/set_sysctl_limits.sh" + environment: + NON_ROOT_USER: "{{ install_user }}" + become: yes + + - name: Link latest podman quadlet + command: "{{ clone_directory }}/scripts/link_latest_podman_quadlet.sh" + become: yes + + - name: Enable linger for user + command: "loginctl enable-linger {{ install_user }}" + become: yes + + - name: Copy config files + copy: + src: "{{ clone_directory }}/config/" + dest: /opt/lme/config/ + owner: "{{ install_user }}" + group: "{{ install_user }}" + mode: '0644' + become: yes + + - name: Copy quadlet files + copy: + src: "{{ clone_directory }}/quadlet/" + dest: /opt/lme/quadlet/ + owner: "{{ install_user }}" + group: "{{ install_user }}" + mode: '0644' + become: yes + + - name: Create containers config directory + file: + path: "~/.config/containers" + state: directory + + - name: Link quadlet to systemd + file: + src: /opt/lme/quadlet + dest: "~/.config/containers/systemd" + state: link + + - name: Create systemd user directory + file: + path: "~/.config/systemd/user" + state: directory + + - name: Link lme.service + file: + src: 
/opt/lme/quadlet/lme.service + dest: "~/.config/systemd/user/lme.service" + state: link + + - name: Reload systemd daemon + systemd: + daemon_reload: yes + scope: user + + - name: Create containers directory + file: + path: /etc/containers + state: directory + become: yes + + - name: Create policy.json + copy: + content: | + { + "default": [ + { + "type": "insecureAcceptAnything" + } + ] + } + dest: /etc/containers/policy.json + become: yes + + - name: Pull containers + command: "podman pull {{ item }}" + loop: "{{ lookup('file', clone_directory + '/config/containers.txt').splitlines() }}" + environment: + PATH: "{{ ansible_env.PATH }}" + + - name: Tag containers + command: "podman image tag {{ item }} {{ item.split('/')[-1].split(':')[0] }}:LME_LATEST" + loop: "{{ lookup('file', clone_directory + '/config/containers.txt').splitlines() }}" + environment: + PATH: "{{ ansible_env.PATH }}" + + - name: Reload systemd daemon (user) + systemd: + daemon_reload: yes + scope: user + + - name: Start LME service + systemd: + name: lme.service + state: started + scope: user \ No newline at end of file diff --git a/scripts/link_latest_podman_quadlet.sh b/scripts/link_latest_podman_quadlet.sh new file mode 100755 index 00000000..e8050730 --- /dev/null +++ b/scripts/link_latest_podman_quadlet.sh @@ -0,0 +1,29 @@ +#!/bin/bash + +# Find the latest podman version in the Nix store +latest_podman=$(find /nix/store -maxdepth 1 -name '*-podman-*' | + sed -n 's/.*-podman-\([0-9.]*\)$/\1/p' | + sort -V | + tail -n1) + +if [ -n "$latest_podman" ]; then + # Find the full path of the latest version + podman_path=$(find /nix/store -maxdepth 1 -name "*-podman-${latest_podman}") + + # Assign the result to a variable + LATEST_PODMAN_PATH="$podman_path" + + echo "Latest Podman version found: $latest_podman" + echo "Path: $LATEST_PODMAN_PATH" +else + echo "No Podman installation found in the Nix store." 
+    exit 1
+fi
+
+
+# $LATEST_PODMAN_PATH is already a full /nix/store path, so link from it directly
+sudo ln -sf "$LATEST_PODMAN_PATH/lib/systemd/system-generators/podman-system-generator" /usr/lib/systemd/system-generators/podman-system-generator
+sudo ln -sf "$LATEST_PODMAN_PATH/lib/systemd/user-generators/podman-user-generator" /usr/lib/systemd/user-generators/
+sudo ln -sf -t /usr/lib/systemd/system/ "$LATEST_PODMAN_PATH"/lib/systemd/system/*
+sudo ln -sf -t /usr/lib/systemd/user/ "$LATEST_PODMAN_PATH"/lib/systemd/user/*
+
+echo "Linked the files in systemd"
+
diff --git a/scripts/set-fleet.sh b/scripts/set-fleet.sh
new file mode 100755
index 00000000..a32528a1
--- /dev/null
+++ b/scripts/set-fleet.sh
@@ -0,0 +1,22 @@
+#!/usr/bin/env bash
+
+HEADERS=(
+  -H "kbn-version: 8.12.2"
+  -H "kbn-xsrf: kibana"
+  -H 'Content-Type: application/json'
+)
+
+set_fleet_values() {
+  # grab the SHA-256 fingerprint of the CA cert from the elasticsearch container
+  fingerprint=$(podman exec -w /usr/share/elasticsearch/config/certs/ca lme-elasticsearch cat ca.crt | openssl x509 -noout -fingerprint -sha256 | cut -d "=" -f 2 | tr -d : | head -n1)
+  printf '{"fleet_server_hosts": ["%s"]}' "https://${IPVAR}:${FLEET_PORT}" | curl -k --silent --user "${ELASTIC_USERNAME}:${ELASTICSEARCH_PASSWORD}" -XPUT "${HEADERS[@]}" "${LOCAL_KBN_URL}/api/fleet/settings" -d @- | jq
+  printf '{"hosts": ["%s"]}' "https://${IPVAR}:9200" | curl -k --silent --user "${ELASTIC_USERNAME}:${ELASTICSEARCH_PASSWORD}" -XPUT "${HEADERS[@]}" "${LOCAL_KBN_URL}/api/fleet/outputs/fleet-default-output" -d @- | jq
+  printf '{"ca_trusted_fingerprint": "%s"}' "${fingerprint}" | curl -k --silent --user "${ELASTIC_USERNAME}:${ELASTICSEARCH_PASSWORD}" -XPUT "${HEADERS[@]}" "${LOCAL_KBN_URL}/api/fleet/outputs/fleet-default-output" -d @- | jq
+  printf '{"config_yaml": "%s"}' "ssl.verification_mode: certificate" | curl -k --silent --user "${ELASTIC_USERNAME}:${ELASTICSEARCH_PASSWORD}" -XPUT "${HEADERS[@]}" "${LOCAL_KBN_URL}/api/fleet/outputs/fleet-default-output" -d @- | jq
+  policy_id=$(printf '{"name": "%s", "description": "%s", "namespace": "%s", "monitoring_enabled": ["logs","metrics"], "inactivity_timeout": 1209600}' "Endpoint Policy" "" "default" | curl -k --silent --user "${ELASTIC_USERNAME}:${ELASTICSEARCH_PASSWORD}" -XPOST "${HEADERS[@]}" "${LOCAL_KBN_URL}/api/fleet/agent_policies?sys_monitoring=true" -d @- | jq -r '.item.id')
+  pkg_version=$(curl -k --user "${ELASTIC_USERNAME}:${ELASTICSEARCH_PASSWORD}" -XGET "${HEADERS[@]}" "${LOCAL_KBN_URL}/api/fleet/epm/packages/endpoint" -d : | jq -r '.item.version')
+  printf "{\"name\": \"%s\", \"description\": \"%s\", \"namespace\": \"%s\", \"policy_id\": \"%s\", \"enabled\": %s, \"inputs\": [{\"enabled\": true, \"streams\": [], \"type\": \"ENDPOINT_INTEGRATION_CONFIG\", \"config\": {\"_config\": {\"value\": {\"type\": \"endpoint\", \"endpointConfig\": {\"preset\": \"EDRComplete\"}}}}}], \"package\": {\"name\": \"endpoint\", \"title\": \"Elastic Defend\", \"version\": \"${pkg_version}\"}}" "Elastic Defend" "" "default" "${policy_id}" "true" | curl -k --silent --user "${ELASTIC_USERNAME}:${ELASTICSEARCH_PASSWORD}" -XPOST "${HEADERS[@]}" "${LOCAL_KBN_URL}/api/fleet/package_policies" -d @- | jq
+}
+
+#main:
+source /opt/lme/lme-environment.env
+set_fleet_values
diff --git a/scripts/set_sysctl_limits.sh b/scripts/set_sysctl_limits.sh
new file mode 100755
index 00000000..cd0b87fe
--- /dev/null
+++ b/scripts/set_sysctl_limits.sh
@@ -0,0 +1,63 @@
+#!/bin/bash
+
+# Check if the script is run as root
+if [[ $EUID -ne 0 ]]; then
+    echo "This script must be run as root"
+    exit 1
+fi
+
+# Check if NON_ROOT_USER is set
+if [ -z ${NON_ROOT_USER+x} ]; then
+    echo "var NON_ROOT_USER is unset"
NON_ROOT_USER is unset" + exit 1 +else + echo "NON_ROOT_USER='$NON_ROOT_USER'" +fi + +# Function to update or add a sysctl setting +update_sysctl() { + local key=$1 + local value=$2 + local file="/etc/sysctl.conf" + + if grep -qE "^$key\s*=" "$file"; then + sed -i "s/^$key\s*=.*/$key = $value/" "$file" + echo "Updated $key in $file" + elif grep -qE "^#\s*$key\s*=" "$file"; then + sed -i "s/^#\s*$key\s*=.*/$key = $value/" "$file" + echo "Uncommented and updated $key in $file" + else + echo "$key = $value" >> "$file" + echo "Added $key to $file" + fi +} + +# Update sysctl settings +update_sysctl "net.ipv4.ip_unprivileged_port_start" "80" +update_sysctl "vm.max_map_count" "262144" +update_sysctl "net.core.rmem_max" "7500000" +update_sysctl "net.core.wmem_max" "7500000" + +# Apply sysctl changes +sysctl -p + +# Update limits.conf +limits_file="/etc/security/limits.conf" +limits_entry="$NON_ROOT_USER soft nofile 655360 +$NON_ROOT_USER hard nofile 655360" + +if grep -qE "^$NON_ROOT_USER\s+soft\s+nofile" "$limits_file"; then + echo "$limits_file already configured for $NON_ROOT_USER. No changes needed." +else + echo "$limits_entry" >> "$limits_file" + echo "Updated $limits_file for $NON_ROOT_USER" +fi + +# Display current values +echo "Current sysctl values:" +sysctl net.ipv4.ip_unprivileged_port_start +sysctl vm.max_map_count +sysctl net.core.rmem_max +sysctl net.core.wmem_max + +echo "Script execution completed." \ No newline at end of file diff --git a/scripts/upload.sh b/scripts/upload.sh new file mode 100755 index 00000000..895320db --- /dev/null +++ b/scripts/upload.sh @@ -0,0 +1,33 @@ +#!/usr/bin/env bash +source .env +USER=elastic +PASSWORD=${ELASTIC_PASSWORD_ESCAPED} +PROTO=https + +#TODO: make this a cli flag +#------------ edit this----------- +#assumes files are INDEX_mapping.json + INDEX.json +# mapping + logs +DIR=/data/alerts/ +INDICES=$(ls ${DIR} | cut -f -3 -d '.' | grep -v "_mapping"| grep -v "template"| sort | uniq) + +#------------ edit this ----------- + +echo -e "\n\ncheck \`podman logs -f CONTAINER_NAME\` for verbose output\n\n" +echo -e "\n--Uploading: --\n" +for x in ${INDICES}; +do + echo "podman runs for $x:" + podman run -it -d -v ${DIR}${x}_mapping.json:/tmp/data.json -e NODE_TLS_REJECT_UNAUTHORIZED=0 --userns="" --network=host elasticdump/elasticsearch-dump --input=/tmp/data.json --output=${PROTO}://${USER}:${PASSWORD}@localhost:9200/${x} --type=mapping + + podman run -it -d -v ${DIR}${x}.json:/tmp/data.json -e NODE_TLS_REJECT_UNAUTHORIZED=0 --userns="" --network=host elasticdump/elasticsearch-dump --input=/tmp/data.json --output=${PROTO}://${USER}:${PASSWORD}@localhost:9200/${x} --limit=5000 + echo "" +done + +## cleanup: +echo "--to cleanup when done:--" +echo "podman ps -a --format \"{{.Image}} {{.Names}}\" | grep -i "elasticdump" | awk \'{print $2}\' | xargs podman rm" + +tot=$(wc -l $(ls ${DIR} | grep -v "_mapping" | xargs -I{} echo ${DIR}{})) +echo -e "\n--Expected Log #:\n $tot--" + diff --git a/testing/InstallTestbed.ps1 b/testing/InstallTestbed.ps1 new file mode 100644 index 00000000..6e7b8be0 --- /dev/null +++ b/testing/InstallTestbed.ps1 @@ -0,0 +1,402 @@ +param ( + [Alias("g")] + [Parameter(Mandatory = $true)] + [string]$ResourceGroup, + + [Alias("w")] + [string]$DomainController = "DC1", + + [Alias("l")] + [string]$LinuxVM = "LS1", + + [Alias("n")] + [int]$NumClients = 2, + + [Alias("m")] + [Parameter( + HelpMessage = "(minimal) Only install the linux server. 
Useful for testing the linux server without the windows clients" + )] + [switch]$LinuxOnly, + + [Alias("v")] + [string]$Version = $false, + + [Alias("b")] + [string]$Branch = $false +) + +# If you were to need the password from the SetupTestbed.ps1 script, you could use this: +# $Password = Get-Content "${ResourceGroup}.password.txt" + + +$ProcessSeparator = "`n----------------------------------------`n" + +# Define our library path +$LibraryPath = Join-Path -Path $PSScriptRoot -ChildPath "configure\azure_scripts\lib\utilityFunctions.ps1" + +# Check if the library file exists +if (Test-Path -Path $LibraryPath) { + # Dot-source the library script + . $LibraryPath +} +else { + Write-Error "Library script not found at path: $LibraryPath" +} + +if ($Version -ne $false -and -not ($Version -match '^[0-9]+\.[0-9]+\.[0-9]+$')) { + Write-Host "Invalid version format: $Version. Expected format: X.Y.Z (e.g., 1.3.0)" + exit 1 +} + +# Create a container to keep files for the VM +Write-Output "Creating a container to keep files for the VM..." +$createBlobResponse = ./configure/azure_scripts/create_blob_container.ps1 ` + -ResourceGroup $ResourceGroup +Write-Output $createBlobResponse +Write-Output $ProcessSeparator + +# Source the variables from the file +Write-Output "`nSourcing the variables from the file..." +. ./configure/azure_scripts/config.ps1 + +# Remove old code if it exists +if (Test-Path ./configure.zip) { + Remove-Item ./configure.zip -Force -Confirm:$false -ErrorAction SilentlyContinue +} + +Write-Output $ProcessSeparator + +# Zip up the installer scripts for the VM +Write-Output "`nZipping up the installer scripts for the VMs..." +./configure/azure_scripts/zip_my_parents_parent.ps1 +Write-Output $ProcessSeparator + +# Upload the zip file to the container and get a key to download it +Write-Output "`nUploading the zip file to the container and getting a key to download it..." +$FileDownloadUrl = ./configure/azure_scripts/copy_file_to_container.ps1 ` + -LocalFilePath "configure.zip" ` + -ContainerName $ContainerName ` + -StorageAccountName $StorageAccountName ` + -StorageAccountKey $StorageAccountKey + +Write-Output "File download URL: $FileDownloadUrl" +Write-Output $ProcessSeparator + +Write-Output "`nChanging directory to the azure scripts..." +Set-Location configure/azure_scripts +Write-Output $ProcessSeparator + +if (-Not $LinuxOnly) { + Write-Output "`nInstalling on the windows clients..." + # Make our directory on the VM + Write-Output "`nMaking our directory on the VM..." + $createDirResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --name $DomainController ` + --resource-group $ResourceGroup ` + --scripts "if (-not (Test-Path -Path 'C:\lme')) { New-Item -Path 'C:\lme' -ItemType Directory }" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$createDirResponse") + Write-Output $ProcessSeparator + + # Download the zip file to the VM + Write-Output "`nDownloading the zip file to the VM..." + $downloadZipFileResponse = .\download_in_container.ps1 ` + -VMName $DomainController ` + -ResourceGroup $ResourceGroup ` + -FileDownloadUrl "$FileDownloadUrl" ` + -DestinationFilePath "configure.zip" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$downloadZipFileResponse") + Write-Output $ProcessSeparator + + # Extract the zip file + Write-Output "`nExtracting the zip file..." 
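+    # extract_archive.ps1 is assumed to expand the uploaded zip on the VM through
+    # 'az vm run-command invoke'; a rough manual equivalent (paths assumed) would be:
+    #   az vm run-command invoke --command-id RunPowerShellScript --name $DomainController `
+    #     --resource-group $ResourceGroup --scripts "Expand-Archive -Path C:\lme\configure.zip -DestinationPath C:\lme -Force"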
+ $extractArchiveResponse = .\extract_archive.ps1 ` + -VMName $DomainController ` + -ResourceGroup $ResourceGroup ` + -FileName "configure.zip" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$extractArchiveResponse") + Write-Output $ProcessSeparator + + # Run the install script for chapter 1 + Write-Output "`nRunning the install script for chapter 1..." + $installChapter1Response = .\run_script_in_container.ps1 ` + -ResourceGroup $ResourceGroup ` + -VMName $DomainController ` + -ScriptPathOnVM "C:\lme\configure\install_chapter_1.ps1" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$installChapter1Response") + Write-Output $ProcessSeparator + + # Update the group policy on the remote machines + Write-Output "`nUpdating the group policy on the remote machines..." + Invoke-GPUpdateOnVMs -ResourceGroup $ResourceGroup -numberOfClients $NumClients + Write-Output $ProcessSeparator + + # Wait for the services to start + Write-Output "`nWaiting for the services to start..." + Start-Sleep 10 + + # See if we can see the forwarding computers in the DC + write-host "`nChecking if we can see the forwarding computers in the DC..." + $listForwardingComputersResponse = .\run_script_in_container.ps1 ` + -ResourceGroup $ResourceGroup ` + -VMName $DomainController ` + -ScriptPathOnVM "C:\lme\configure\list_computers_forwarding_events.ps1" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$listForwardingComputersResponse") + Write-Output $ProcessSeparator + + # Install the sysmon service on DC1 from chapter 2 + Write-Output "`nInstalling the sysmon service on DC1 from chapter 2..." + $installChapter2Response = .\run_script_in_container.ps1 ` + -ResourceGroup $ResourceGroup ` + -VMName $DomainController ` + -ScriptPathOnVM "C:\lme\configure\install_chapter_2.ps1" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$installChapter2Response") + Write-Output $ProcessSeparator + + # Update the group policy on the remote machines + Write-Output "`nUpdating the group policy on the remote machines..." + Invoke-GPUpdateOnVMs -ResourceGroup $ResourceGroup -numberOfClients $NumClients + Write-Output $ProcessSeparator + + # Wait for the services to start + Write-Output "`nWaiting for the services to start. Generally they don't show..." + Start-Sleep 10 + + # See if you can see sysmon running on the machine + Write-Output "`nSeeing if you can see sysmon running on a machine..." + $showSysmonResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --name "C1" ` + --resource-group $ResourceGroup ` + --scripts 'Get-Service | Where-Object { $_.DisplayName -like "*Sysmon*" }' + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$showSysmonResponse") + Write-Output $ProcessSeparator +} + +Write-Output "`nInstalling on the linux server..." +# Download the installers on LS1 +Write-Output "`nDownloading the installers on LS1..." +$downloadLinuxZipFileResponse = .\download_in_container.ps1 ` + -VMName $LinuxVM ` + -ResourceGroup $ResourceGroup ` + -FileDownloadUrl "$FileDownloadUrl" ` + -DestinationFilePath "configure.zip" ` + -os "linux" +Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$downloadLinuxZipFileResponse") +Write-Output $ProcessSeparator + +# Install unzip on LS1 +Write-Output "`nInstalling unzip on LS1..." 
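+# Install unzip first; the extract step below presumably relies on it to unpack configure.zip on LS1.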
+$installUnzipResponse = az vm run-command invoke ` + --command-id RunShellScript ` + --name $LinuxVM ` + --resource-group $ResourceGroup ` + --scripts 'apt-get install unzip -y' +Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$installUnzipResponse") +Write-Output $ProcessSeparator + +# Unzip the file on LS1 +Write-Output "`nUnzipping the file on LS1..." +$extractLinuxArchiveResponse = .\extract_archive.ps1 ` + -VMName $LinuxVM ` + -ResourceGroup $ResourceGroup ` + -FileName "configure.zip" ` + -Os "Linux" +Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$extractLinuxArchiveResponse") +Write-Output $ProcessSeparator + +Write-Output "`nMaking the installer files executable and updating the system packages on LS1..." +$updateLinuxResponse = az vm run-command invoke ` + --command-id RunShellScript ` + --name $LinuxVM ` + --resource-group $ResourceGroup ` + --scripts 'chmod +x /home/admin.ackbar/lme/configure/* && /home/admin.ackbar/lme/configure/linux_update_system.sh' +Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$updateLinuxResponse") +Write-Output $ProcessSeparator + +$versionArgument = "" +if ($Branch -ne $false) { + $versionArgument = " -b '$($Branch)'" +} elseif ($Version -ne $false) { + $versionArgument = " -v $Version" +} +Write-Output "`nRunning the lme installer on LS1..." +$installLmeResponse = az vm run-command invoke ` + --command-id RunShellScript ` + --name $LinuxVM ` + --resource-group $ResourceGroup ` + --scripts "/home/admin.ackbar/lme/configure/linux_install_lme.sh $versionArgument" +Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$installLmeResponse") +Write-Output $ProcessSeparator + +# Check if the response contains the need to reboot +$rebootCheckstring = $installLmeResponse | Out-String +if ($rebootCheckstring -match "reboot is required in order to proceed with the install") { + # Have to check for the reboot thing here + Write-Output "`nRebooting ${LinuxVM}..." + az vm restart ` + --resource-group $ResourceGroup ` + --name $LinuxVM + Write-Output $ProcessSeparator + + Write-Output "`nRunning the lme installer on LS1..." + $installLmeResponse = az vm run-command invoke ` + --command-id RunShellScript ` + --name $LinuxVM ` + --resource-group $ResourceGroup ` + --scripts "/home/admin.ackbar/lme/configure/linux_install_lme.sh $versionArgument" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$installLmeResponse") + Write-Output $ProcessSeparator +} + +# Capture the output of the install script +Write-Output "`nCapturing the output of the install script for ES passwords..." +$getElasticsearchPasswordsResponse = az vm run-command invoke ` + --command-id RunShellScript ` + --name $LinuxVM ` + --resource-group $ResourceGroup ` + --scripts 'sed -n "/^## elastic/,/^####################/p" "/opt/lme/Chapter 3 Files/output.log"' + +Write-Output $ProcessSeparator + +if (-Not $LinuxOnly){ + # Generate key using expect on linux + Write-Output "`nGenerating key using expect on linux..." 
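+    # linux_make_private_key.exp is assumed to generate the admin user's SSH key pair
+    # non-interactively, roughly equivalent to running this on LS1:
+    #   ssh-keygen -t rsa -N "" -f /home/admin.ackbar/.ssh/id_rsa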
+ $generateKeyResponse = az vm run-command invoke ` + --command-id RunShellScript ` + --name $LinuxVM ` + --resource-group $ResourceGroup ` + --scripts '/home/admin.ackbar/lme/configure/linux_make_private_key.exp' + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$generateKeyResponse") + Write-Output $ProcessSeparator + + # Add the public key to the authorized_keys file on LS1 + Write-Output "`nAdding the public key to the authorized_keys file on LS1..." + $authorizePrivateKeyResponse = az vm run-command invoke ` + --command-id RunShellScript ` + --name $LinuxVM ` + --resource-group $ResourceGroup ` + --scripts '/home/admin.ackbar/lme/configure/linux_authorize_private_key.sh' + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$authorizePrivateKeyResponse") + Write-Output $ProcessSeparator + + # Cat the private key and capture that to the azure shell + Write-Output "`nCat the private key and capture that to the azure shell..." + $jsonResponse = az vm run-command invoke ` + --command-id RunShellScript ` + --name $LinuxVM ` + --resource-group $ResourceGroup ` + --scripts 'cat /home/admin.ackbar/.ssh/id_rsa' + $privateKey = Get-PrivateKeyFromJson -jsonResponse "$jsonResponse" + + # Save the private key to a file + Write-Output "`nSaving the private key to a file..." + $privateKeyPath = ".\id_rsa" + Set-Content -Path $privateKeyPath -Value $privateKey + Write-Output $ProcessSeparator + + # Upload the private key to the container and get a key to download it + Write-Output "`nUploading the private key to the container and getting a key to download it..." + $KeyDownloadUrl = ./copy_file_to_container.ps1 ` + -LocalFilePath "id_rsa" ` + -ContainerName $ContainerName ` + -StorageAccountName $StorageAccountName ` + -StorageAccountKey $StorageAccountKey + + # Download the private key to DC1 + Write-Output "`nDownloading the private key to DC1..." + $downloadPrivateKeyResponse = .\download_in_container.ps1 ` + -VMName $DomainController ` + -ResourceGroup $ResourceGroup ` + -FileDownloadUrl "$KeyDownloadUrl" ` + -DestinationFilePath "id_rsa" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$downloadPrivateKeyResponse") + Write-Output $ProcessSeparator + + # Change the ownership of the private key file on DC1 + Write-Output "`nChanging the ownership of the private key file on DC1..." + $chownPrivateKeyResponse = .\run_script_in_container.ps1 ` + -ResourceGroup $ResourceGroup ` + -VMName $DomainController ` + -ScriptPathOnVM "C:\lme\configure\chown_dc1_private_key.ps1" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$chownPrivateKeyResponse") + Write-Output $ProcessSeparator + + # Remove the private key from the local machine + Remove-Item -Path $privateKeyPath + + # Use the azure shell to run scp on DC1 to copy the files from LS1 to DC1 + Write-Output "`nUsing the azure shell to run scp on DC1 to copy the files from LS1 to DC1..." + $scpResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --name $DomainController ` + --resource-group $ResourceGroup ` + --scripts 'scp -o StrictHostKeyChecking=no -i "C:\lme\id_rsa" admin.ackbar@ls1:/home/admin.ackbar/files_for_windows.zip "C:\lme\"' + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$scpResponse") + Write-Output $ProcessSeparator + + # Extract the files on DC1 + Write-Output "`nExtracting the files on DC1..." 
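+    # files_for_windows.zip is expected to contain the winlogbeat configuration and
+    # TLS certificates that the LME install produced on LS1 for the Windows side.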
+ $extractFilesForWindowsResponse = .\extract_archive.ps1 ` + -VMName $DomainController ` + -ResourceGroup $ResourceGroup ` + -FileName "files_for_windows.zip" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$extractFilesForWindowsResponse") + Write-Output $ProcessSeparator + + # Install winlogbeat on DC1 + Write-Output "`nInstalling winlogbeat on DC1..." + $installWinlogbeatResponse = .\run_script_in_container.ps1 ` + -ResourceGroup $ResourceGroup ` + -VMName $DomainController ` + -ScriptPathOnVM "C:\lme\configure\winlogbeat_install.ps1" + + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$installWinlogbeatResponse") + Write-Output $ProcessSeparator +} + + +Write-Output "`nRunning the tests for lme on LS1..." +$runTestResponse = az vm run-command invoke ` + --command-id RunShellScript ` + --name $LinuxVM ` + --resource-group $ResourceGroup ` + --scripts '/home/admin.ackbar/lme/configure/linux_test_install.sh' | ConvertFrom-Json + +$message = $runTestResponse.value[0].message +Write-Host "$message`n" +Write-Host "--------------------------------------------" + +# Check if there is stderr content in the message field +if ($message -match '\[stderr\]\n(.+)$') { + Write-Host "Tests failed" + exit 1 +} else { + Write-Host "Tests succeeded" +} + +Write-Output "`nInstall completed." + +$EsPasswords = (Format-AzVmRunCommandOutput -JsonResponse "$getElasticsearchPasswordsResponse")[0].StdOut +# Output the passwords +$EsPasswords + +# Write the passwords to a file +$PasswordPath = "..\..\${ResourceGroup}.password.txt" +$EsPasswords | Out-File -Append -FilePath $PasswordPath + +# Constructing a string that will hold all the command-line parameters to be written to the file +$paramsToWrite = @" +ResourceGroup: $ResourceGroup +DomainController: $DomainController +LinuxVM: $LinuxVM +NumClients: $NumClients +LinuxOnly: $($LinuxOnly.IsPresent) +Version: $Version +Branch: $Branch +"@ + +# Output the parameters to the end of the password file +$paramsToWrite | Out-File -Append -FilePath $PasswordPath + +Get-Content -Path $PasswordPath \ No newline at end of file diff --git a/testing/Readme.md b/testing/Readme.md index 45301981..8577bf09 100644 --- a/testing/Readme.md +++ b/testing/Readme.md @@ -13,14 +13,16 @@ Using the Azure CLI, it creates the following: This script does not install LME; it simply creates a fresh environment that's ready to have LME installed. ## Usage -| **Parameter** | **Alias** | **Description** | **Required** | -|------------------------|-----------|----------------------------------------------------------------------------------------|---------------------------------------| -| $ResourceGroup | -g | The name of the resource group that will be created for storing all testbed resources. | Yes | -| $NumClients | -n | The number of Windows clients to create; maximum 16; defaults to 1 | No | -| $AutoShutdownTime | | The auto-shutdown time in UTC (HHMM, e.g. 2230, 0000, 1900); auto-shutdown not configured if not provided | No | -| $AutoShutdownEmail | | An email to be notified if a VM is auto-shutdown. | No | -| $AllowedSources | -s | Comma-Separated list of CIDR prefixes or IP ranges, e.g. XX.XX.XX.XX/YY,XX.XX.XX.XX/YY,etc..., that are allowed to connect to the VMs via RDP and ssh. | Yes | -| $NoPrompt | -y | Switch, run the script with no prompt (useful for automated runs). By default, the script will prompt the user to review paramters and confirm before continuing. 
| No           |
+| **Parameter**      | **Alias** | **Description**                                                                                                                                                     | **Required** |
+|--------------------|-----------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------|
+| $ResourceGroup     | -g        | The name of the resource group that will be created for storing all testbed resources.                                                                             | Yes          |
+| $NumClients        | -n        | The number of Windows clients to create; maximum 16; defaults to 2                                                                                                 | No           |
+| $AutoShutdownTime  |           | The auto-shutdown time in UTC (HHMM, e.g. 2230, 0000, 1900); auto-shutdown not configured if not provided                                                          | No           |
+| $AutoShutdownEmail |           | An email to be notified if a VM is auto-shutdown.                                                                                                                  | No           |
+| $AllowedSources    | -s        | Comma-Separated list of CIDR prefixes or IP ranges, e.g. XX.XX.XX.XX/YY,XX.XX.XX.XX/YY,etc..., that are allowed to connect to the VMs via RDP and ssh.             | Yes          |
+| $Location          | -l        | The region you would like to build the assets in. Defaults to westus                                                                                               | No           |
+| $NoPrompt          | -y        | Switch, run the script with no prompt (useful for automated runs). By default, the script will prompt the user to review parameters and confirm before continuing. | No           |
+| $LinuxOnly         | -m        | Run a minimal install of only the linux server                                                                                                                     | No           |
 
 Example:
 ```
@@ -28,14 +30,14 @@ Example:
 ```
 
 ## Running Using Azure Shell
-| **#** | **Step**                                                                                                                                                                   | **Screenshot**                                         |
-|-------|----------------------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------|
-| 1     | Open a cloud shell by navigating to portal.azure.com and clicking the shell icon.                                                                                         | ![image](/docs/imgs/testing-screenshots/shell.png)     |
-| 2     | Select PowerShell.                                                                                                                                                         | ![image](/docs/imgs/testing-secreenshots/shell2.png)   |
-| 3     | Upload `SetupTestbed.ps1` by clicking the "Upload/Download files" icon                                                                                                    | ![image](/docs/imgs/testing-screenshots/shell3.png)    |
-| 4     | Run the script, providing values for the parameters when promoted (see [Usage](#usage)). The script will take ~20 minutes to run to completion.                          | ![image](/docs/imgs/testing-screenshots/shell4.png)    |
-| 5     | Save the login credentials printed to the terminal at the end. At this point you can login to each VM using RDP (for the Windows servers) or SSH (for the Linux server).  | ![image](/docs/imgs/testing-screenshots/shell5.png)    |
-| 6     | When you're done testing, simply delete the resource group to clean up all resources created.                                                                             | ![image](/docs/imgs/testing-screenshots/delete.png)    |
+| **#** | **Step**                                                                                                                                                                                                                                        | **Screenshot**                                           |
+|-------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------|----------------------------------------------------------|
+| 1     | Open a cloud shell by navigating to portal.azure.com and clicking the shell icon.                                                                                                                                                               | ![image](/docs/imgs/testing-screenshots/shell.png)       |
+| 2     | Select PowerShell.                                                                                                                                                                                                                              | ![image](/docs/imgs/testing-screenshots/shell2.png)      |
+| 3     | Clone the repo `git clone https://github.com/cisagov/LME.git` and then `cd LME\testing`                                                                                                                                                         |                                                          |
+| 4     | Run the script, providing values for the parameters when prompted (see [Usage](#usage)). The script will take ~20 minutes to run to completion.                                                                                                | ![image](/docs/imgs/testing-screenshots/shell4.png)      |
+| 5     | Save the login credentials printed to the terminal at the end (they will also be in a file called `<$ResourceGroup>.password.txt`). At this point you can login to each VM using RDP (for the Windows servers) or SSH (for the Linux server). | ![image](/docs/imgs/testing-screenshots/shell5.png)      |
+| 6     | When you're done testing, simply delete the resource group to clean up all resources created.                                                                                                                                                 | ![image](/docs/imgs/testing-screenshots/delete.png)      |
 
 
 # Extra Functionality:
@@ -55,3 +57,36 @@ Flags:
 - enable: deletes the DENYINTERNET/DENYLOADBALANCER rules
 - NSG: sets NSG to a custom NSG if desired [NSG1 default]
 
+## Install LME on the cluster:
+### InstallTestbed.ps1
+## Usage
+| **Parameter**     | **Alias** | **Description**                                                                         | **Required** |
+|-------------------|-----------|-------------------------------------------------------------------------------------------|--------------|
+| $ResourceGroup    | -g        | The name of the resource group that will be created for storing all testbed resources. | Yes          |
+| $NumClients       | -n        | The number of Windows clients you have created; defaults to 2                          | No           |
+| $DomainController | -w        | The name of the domain controller in the cluster; defaults to "DC1"                    | No           |
+| $LinuxVM          | -l        | The name of the linux server in the cluster; defaults to "LS1"                         | No           |
+| $LinuxOnly        | -m        | Run a minimal install of only the linux server                                         | No           |
+| $Version          | -v        | Optionally provide a version to install if you want a specific one. `-v 1.3.2`         | No           |
+| $Branch           | -b        | Optionally provide a branch to install if you want a specific one `-b your_branch`     | No           |
+
+Example:
+```
+./InstallTestbed.ps1 -ResourceGroup YourResourceGroup
+# Or if you want to save the output to a file
+./InstallTestbed.ps1 -ResourceGroup YourResourceGroup | Tee-Object -FilePath "./YourResourceGroup.output.log"
+```
+| **#** | **Step**                                                                                                                                                    | **Screenshot**                                         |
+|-------|---------------------------------------------------------------------------------------------------------------------------------------------------------------|--------------------------------------------------------|
+| 1     | Open a cloud shell by navigating to portal.azure.com and clicking the shell icon.                                                                           | ![image](/docs/imgs/testing-screenshots/shell.png)     |
+| 2     | Select PowerShell.                                                                                                                                          | ![image](/docs/imgs/testing-screenshots/shell2.png)    |
+| 3.a   | If you have already cloned the LME repo, run `git pull` from within the repo to pick up the latest changes, and then change to the `LME\testing` directory. |                                                        |
+| 3.b   | If you haven't cloned it, clone the github repo in the home directory. `git clone https://github.com/cisagov/LME.git` and then `cd LME\testing`.            |                                                        |
+| 4     | Now you can run one of the commands from the Examples above.                                                                                                |                                                        |
+| 5     | Save the login credentials printed to the terminal at the end. *See note*                                                                                   |                                                        |
+| 6     | When you're done testing, simply delete the resource group to clean up all resources created.                                                               |                                                        |
+
+Note: When the script finishes you will be in the azure_scripts directory, and you should see the elasticsearch credentials printed to the terminal.
+You will need to `cd ../../` to get back to the LME directory. All the passwords should also be in the `<$ResourceGroup>.password.txt` file.
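+
+For example, to review the saved credentials later from the `testing` directory:
+```
+Get-Content .\YourResourceGroup.password.txt
+```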
+
+
diff --git a/testing/SetupTestbed.ps1 b/testing/SetupTestbed.ps1
index 4c5a347b..59dc856b 100644
--- a/testing/SetupTestbed.ps1
+++ b/testing/SetupTestbed.ps1
@@ -4,7 +4,7 @@
 Creates the following:
 - A resource group
 - A virtual network, subnet, and network security group
-  - 2 VMs: "DC1," a Windows server, and "LS1," a Linux server
+  - 2 VMs: "DC1," a Windows server, and "LS1," a Linux server. You can use -m for only the linux server
 - Client VMs: Windows clients "C1", "C2", etc. up to 16 based on user input
 - Promotes DC1 to a domain controller
 - Adds "C" clients to the managed domain
@@ -18,45 +18,66 @@
 #>
 param (
-    [Parameter(
-        HelpMessage="Auto-Shutdown time in UTC (HHMM, e.g. 2230, 0000, 1900). Convert timezone as necesary: (e.g. 05:30 pm ET -> 9:30 pm UTC -> 21:30 -> 2130)"
-    )]
-    $AutoShutdownTime=$null,
-
-    [Parameter(
-        HelpMessage="Auto-shutdown notification email"
-    )]
-    $AutoShutdownEmail=$null,
-
-    [Alias("l")]
-    [Parameter(
-        HelpMessage="Location where the cluster will be built. Default westus"
-    )]
-    [string]$Location="westus",
-
-    [Alias("g")]
-    [Parameter(Mandatory=$true)]
-    [string]$ResourceGroup,
-
-    [Alias("n")]
-    [Parameter(
-        HelpMessage="Number of clients to create (Max: 16)"
-    )]
-    [int]$NumClients=1,
-
-    [Alias("s")]
-    [Parameter(Mandatory=$true,
-        HelpMessage="XX.XX.XX.XX/YY,XX.XX.XX.XX/YY,etc... Comma-Separated list of CIDR prefixes or IP ranges"
-    )]
-    [string]$AllowedSources,
-
-    [Alias("y")]
-    [Parameter(
-        HelpMessage="Run the script with no prompt (useful for automated runs)"
-    )]
-    [switch]$NoPrompt
+    [Parameter(
+        HelpMessage = "Auto-Shutdown time in UTC (HHMM, e.g. 2230, 0000, 1900). Convert timezone as necessary: (e.g. 05:30 pm ET -> 9:30 pm UTC -> 21:30 -> 2130)"
+    )]
+    $AutoShutdownTime = $null,
+
+    [Parameter(
+        HelpMessage = "Auto-shutdown notification email"
+    )]
+    $AutoShutdownEmail = $null,
+
+    [Alias("l")]
+    [Parameter(
+        HelpMessage = "Location where the cluster will be built. Default westus"
+    )]
+    [string]$Location = "westus",
+
+    [Alias("g")]
+    [Parameter(Mandatory = $true)]
+    [string]$ResourceGroup,
+
+    [Alias("n")]
+    [Parameter(
+        HelpMessage = "Number of clients to create (Max: 16)"
+    )]
+    [int]$NumClients = 2,
+
+    [Alias("s")]
+    [Parameter(Mandatory = $true,
+        HelpMessage = "XX.XX.XX.XX/YY,XX.XX.XX.XX/YY,etc... Comma-Separated list of CIDR prefixes or IP ranges"
+    )]
+    [string]$AllowedSources,
+
+    [Alias("y")]
+    [Parameter(
+        HelpMessage = "Run the script with no prompt (useful for automated runs)"
+    )]
+    [switch]$NoPrompt,
+
+    [Alias("m")]
+    [Parameter(
+        HelpMessage = "(minimal) Only install the linux server. Useful for testing the linux server without the windows clients"
+    )]
+    [switch]$LinuxOnly
 )
 
+$ProcessSeparator = "`n----------------------------------------`n"
+
+# Define our library path
+$libraryPath = Join-Path -Path $PSScriptRoot -ChildPath "configure\azure_scripts\lib\utilityFunctions.ps1"
+
+# Check if the library file exists
+if (Test-Path -Path $libraryPath) {
+    # Dot-source the library script
+    . $libraryPath
+}
+else {
+    Write-Error "Library script not found at path: $libraryPath"
+} + + #DEFAULTS: #Desired Netowrk Mapping: $VNetPrefix = "10.1.0.0/16" @@ -72,10 +93,14 @@ $VMAdmin = "admin.ackbar" $DomainName = "lme.local" #Port options: https://learn.microsoft.com/en-us/cli/azure/network/nsg/rule?view=azure-cli-latest#az-network-nsg-rule-create -$Ports = 22,3389 -$Priorities = 1001,1002 -$Protocols = "Tcp","Tcp" +$Ports = 22, 3389, 443, 9200, 5044 +$Priorities = 1001, 1002, 1003, 1004, 1005 +$Protocols = "Tcp", "Tcp", "Tcp", "Tcp", "Tcp" +# Variables used for Azure tags +$CurrentUser = $(az account show | ConvertFrom-Json).user.name +$Today = $(Get-Date).ToString("yyyy-MM-dd") +$Project = "LME" function Get-RandomPassword { param ( @@ -105,53 +130,60 @@ function Set-AutoShutdown { Write-Output "`nCreating Auto-Shutdown Rule for $VMName at time $AutoShutdownTime..." if ($null -ne $AutoShutdownEmail) { - az vm auto-shutdown ` - -g $ResourceGroup ` - -n $VMName ` - --time $AutoShutdownTime ` - --email $AutoShutdownEmail + $autoShutdownResponse = az vm auto-shutdown ` + -g $ResourceGroup ` + -n $VMName ` + --time $AutoShutdownTime ` + --email $AutoShutdownEmail + Write-Output $autoShutdownResponse } else { - az vm auto-shutdown ` - -g $ResourceGroup ` - -n $VMName ` - --time $AutoShutdownTime + $autoShutdownResponse = az vm auto-shutdown ` + -g $ResourceGroup ` + -n $VMName ` + --time $AutoShutdownTime + Write-Output $autoShutdownResponse } } function Set-NetworkRules { - param ( - [Parameter(Mandatory)] - $AllowedSourcesList - ) - - if ($Ports.length -ne $Priorities.length){ - Write-Output "Priorities and Ports length should be equal!" - exit -1 - } - if ($Ports.length -ne $Protocols.length){ - Write-Output "Protocols and Ports length should be equal!" - exit -1 - } - - for ($i = 0; $i -le $Ports.length - 1 ; $i++) { - $port=$Ports[$i] - $priority=$Priorities[$i] - $protocol=$Protocols[$i] - Write-Output "`nCreating Network Port $port rule..." - - az network nsg rule create --name Network_Port_Rule_$port ` - --resource-group $ResourceGroup ` - --nsg-name NSG1 ` - --priority $priority ` - --direction Inbound ` - --access Allow ` - --protocol $protocol ` - --source-address-prefixes $AllowedSourcesList ` - --destination-address-prefixes '*' ` - --destination-port-ranges $port ` - --description "Allow inbound from $sources on $port via $protocol connections." - } + param ( + [Parameter(Mandatory)] + $AllowedSourcesList + ) + + if ($Ports.length -ne $Priorities.length) { + Write-Output "Priorities and Ports length should be equal!" + Exit 1 + } + if ($Ports.length -ne $Protocols.length) { + Write-Output "Protocols and Ports length should be equal!" + Exit 1 + } + + for ($i = 0; $i -le $Ports.length - 1; $i++) { + $port = $Ports[$i] + $priority = $Priorities[$i] + $protocol = $Protocols[$i] + Write-Output "`nCreating Network Port $port rule..." + $command = "az network nsg rule create --name Network_Port_Rule_$port " + + "--resource-group $ResourceGroup " + + "--nsg-name NSG1 " + + "--priority $priority " + + "--direction Inbound " + + "--access Allow " + + "--protocol $protocol " + + "--source-address-prefixes $AllowedSourcesList " + + "--destination-address-prefixes '*' " + + "--destination-port-ranges $port " + + "--description 'Allow inbound from $sources on $port via $protocol connections.' 
" + + Write-Output "Running command: $command" + + $networkRuleResponse = Invoke-Expression $command + Write-Output $networkRuleResponse + + } } @@ -159,23 +191,23 @@ function Set-NetworkRules { # Validation of Globals # ######################## $AllowedSourcesList = $AllowedSources -Split "," -if ($AllowedSourcesList.length -lt 1){ - Write-Output "**ERROR**: Variable AllowedSources must be set (set with -AllowedSources or -s)" - exit -1 +if ($AllowedSourcesList.length -lt 1) { + Write-Output "**ERROR**: Variable AllowedSources must be set (set with -AllowedSources or -s)" + Exit 1 } if ($null -ne $AutoShutdownTime) { - if ( -not ( $AutoShutdownTime -match '^([01][0-9]|2[0-3])[0-5][0-9]$' ) ){ + if (-not ( $AutoShutdownTime -match '^([01][0-9]|2[0-3])[0-5][0-9]$')) { Write-Output "**ERROR** Invalid time" Write-Output "Enter the Auto-Shutdown time in UTC (HHMM, e.g. 2230, 0000, 1900), `n`tConvert timezone as necesary: (e.g. 05:30 pm ET -> 9:30 pm UTC -> 21:30 -> 2130)" - exit -1 - } + Exit 1 + } } -if ($NumClients -lt 1 -or $NumClients -gt 16) { - Write-Output "The number of clients must be at least 1 and no more than 16." +if (($NumClients -lt 1 -or $NumClients -gt 16) -and -Not $LinuxOnly) { + Write-Output "The number of clients must be at least 1 and no more than 16." $NumClients = $NumClients -as [int] - exit -1 + Exit 1 } ################ @@ -189,39 +221,49 @@ Write-Output "Number of clients: $NumClients" Write-Output "Allowed sources (IP's): $AllowedSourcesList" Write-Output "Auto-shutdown time: $AutoShutdownTime" Write-Output "Auto-shutdown e-mail: $AutoShutdownEmail" +if ($LinuxOnly) { + Write-Output "Creating a linux server only" +} if (-Not $NoPrompt) { - do { - $Proceed = Read-Host "`nProceed? (Y/n)" - } until ($Proceed -eq "y" -or $Proceed -eq "Y" -or $Proceed -eq "n" -or $Proceed -eq "N") - - if ($Proceed -eq "n" -or $Proceed -eq "N") { - Write-Output "Setup canceled" - exit - } + do { + $Proceed = Read-Host "`nProceed? (Y/n)" + } until ($Proceed -eq "y" -or $Proceed -eq "Y" -or $Proceed -eq "n" -or $Proceed -eq "N") + + if ($Proceed -eq "n" -or $Proceed -eq "N") { + Write-Output "Setup canceled" + Exit + } } ######################## # Setup resource group # ######################## Write-Output "`nCreating resource group..." -az group create --name $ResourceGroup --location $Location +$createResourceGroupResponse = az group create --name $ResourceGroup ` + --location $Location ` + --tags project=$Project created=$Today createdBy=$CurrentUser +Write-Output $createResourceGroupResponse ################# # Setup network # ################# Write-Output "`nCreating virtual network..." -az network vnet create --resource-group $ResourceGroup ` +$createVirtualNetworkResponse = az network vnet create --resource-group $ResourceGroup ` --name VNet1 ` --address-prefix $VNetPrefix ` --subnet-name SNet1 ` - --subnet-prefix $SubnetPrefix + --subnet-prefix $SubnetPrefix ` + --tags project=$Project created=$Today createdBy=$CurrentUser +Write-Output $createVirtualNetworkResponse Write-Output "`nCreating nsg..." 
-az network nsg create --name NSG1 ` +$createNsgResponse = az network nsg create --name NSG1 ` --resource-group $ResourceGroup ` - --location $Location + --location $Location ` + --tags project=$Project created=$Today createdBy=$CurrentUser +Write-Output $createNsgResponse Set-NetworkRules -AllowedSourcesList $AllowedSourcesList @@ -229,24 +271,12 @@ Set-NetworkRules -AllowedSourcesList $AllowedSourcesList # Create the VMs # ################## $VMPassword = Get-RandomPassword 12 -Write-Output "`nWriting $VMAdmin password to password.txt" -echo $VMPassword > password.txt +Write-Output "`nWriting $VMAdmin password to ${ResourceGroup}.password.txt" +$VMPassword | Out-File -FilePath "${ResourceGroup}.password.txt" -Encoding UTF8 -Write-Output "`nCreating DC1..." -az vm create ` - --name DC1 ` - --resource-group $ResourceGroup ` - --nsg NSG1 ` - --image Win2019Datacenter ` - --admin-username $VMAdmin ` - --admin-password $VMPassword ` - --vnet-name VNet1 ` - --subnet SNet1 ` - --public-ip-sku Standard ` - --private-ip-address $DcIP Write-Output "`nCreating LS1..." -az vm create ` +$createLs1Response = az vm create ` --name LS1 ` --resource-group $ResourceGroup ` --nsg NSG1 ` @@ -258,12 +288,14 @@ az vm create ` --public-ip-sku Standard ` --size Standard_E2d_v4 ` --os-disk-size-gb 128 ` - --private-ip-address $LsIP - -for ($i = 1; $i -le $NumClients; $i++) { - Write-Output "`nCreating C$i..." - az vm create ` - --name C$i ` + --private-ip-address $LsIP ` + --tags project=$Project created=$Today createdBy=$CurrentUser +Write-Output $createLs1Response + +if (-Not $LinuxOnly){ + Write-Output "`nCreating DC1..." + $createDc1Response = az vm create ` + --name DC1 ` --resource-group $ResourceGroup ` --nsg NSG1 ` --image Win2019Datacenter ` @@ -271,7 +303,25 @@ for ($i = 1; $i -le $NumClients; $i++) { --admin-password $VMPassword ` --vnet-name VNet1 ` --subnet SNet1 ` - --public-ip-sku Standard + --public-ip-sku Standard ` + --private-ip-address $DcIP ` + --tags project=$Project created=$Today createdBy=$CurrentUser + Write-Output $createDc1Response + for ($i = 1; $i -le $NumClients; $i++) { + Write-Output "`nCreating C$i..." + $createClientResponse = az vm create ` + --name C$i ` + --resource-group $ResourceGroup ` + --nsg NSG1 ` + --image Win2019Datacenter ` + --admin-username $VMAdmin ` + --admin-password $VMPassword ` + --vnet-name VNet1 ` + --subnet SNet1 ` + --public-ip-sku Standard ` + --tags project=$Project created=$Today createdBy=$CurrentUser + Write-Output $createClientResponse + } } ########################### @@ -279,100 +329,166 @@ for ($i = 1; $i -le $NumClients; $i++) { ########################### if ($null -ne $AutoShutdownTime) { - Set-AutoShutdown "DC1" Set-AutoShutdown "LS1" - for ($i = 1; $i -le $NumClients; $i++) { - Set-AutoShutdown "C$i" + if (-Not $LinuxOnly){ + Set-AutoShutdown "DC1" + for ($i = 1; $i -le $NumClients; $i++) { + Set-AutoShutdown "C$i" + } } } #################### # Setup the domain # #################### -Write-Output "`nInstalling AD Domain services on DC1..." -az vm run-command invoke ` - --command-id RunPowerShellScript ` - --resource-group $ResourceGroup ` - --name DC1 ` - --scripts "Add-WindowsFeature AD-Domain-Services -IncludeManagementTools" - -Write-Output "`nRestarting DC1..." -az vm restart ` - --resource-group $ResourceGroup ` - --name DC1 ` - -Write-Output "`nCreating the ADDS forest..." 
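+    # Install-ADDSForest reuses the generated VM password for safe mode and reboots
+    # DC1 when it finishes; one way to confirm the promotion afterwards would be:
+    #   az vm run-command invoke --command-id RunPowerShellScript --resource-group $ResourceGroup --name DC1 --scripts "Get-ADDomain"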
-az vm run-command invoke ` - --command-id RunPowerShellScript ` - --resource-group $ResourceGroup ` - --name DC1 ` - --scripts "`$Password = ConvertTo-SecureString `"$VMPassword`" -AsPlainText -Force; ` -Install-ADDSForest -DomainName $DomainName -Force -SafeModeAdministratorPassword `$Password" - -Write-Output "`nRestarting DC1..." -az vm restart ` - --resource-group $ResourceGroup ` - --name DC1 ` - -for ($i = 1; $i -le $NumClients; $i++) { - Write-Output "`nAdding DC IP address to C$i host file..." - az vm run-command invoke ` +if (-Not $LinuxOnly){ + Write-Output "`nInstalling AD Domain services on DC1..." + $addDomainServicesResponse = az vm run-command invoke ` --command-id RunPowerShellScript ` --resource-group $ResourceGroup ` - --name C$i ` - --scripts "Add-Content -Path `$env:windir\System32\drivers\etc\hosts -Value `"`n$DcIP`t$DomainName`" -Force" + --name DC1 ` + --scripts "Add-WindowsFeature AD-Domain-Services -IncludeManagementTools" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$addDomainServicesResponse") - Write-Output "`nSetting C$i DNS server to DC1..." - az vm run-command invoke ` +# Write-Output "`nRestarting DC1..." +# az vm restart ` +# --resource-group $ResourceGroup ` +# --name DC1 ` + + Write-Output "`nCreating the ADDS forest..." + $installAddsForestResponse = az vm run-command invoke ` --command-id RunPowerShellScript ` --resource-group $ResourceGroup ` - --name C$i ` - --scripts "Get-Netadapter | Set-DnsClientServerAddress -ServerAddresses $DcIP" + --name DC1 ` + --scripts "`$Password = ConvertTo-SecureString `"$VMPassword`" -AsPlainText -Force; ` + Install-ADDSForest -DomainName $DomainName -Force -SafeModeAdministratorPassword `$Password" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$installAddsForestResponse") - Write-Output "`nRestarting C$i..." + Write-Output "`nRestarting DC1..." az vm restart ` --resource-group $ResourceGroup ` - --name C$i ` + --name DC1 ` - Write-Output "`nAdding C$i to the domain..." - az vm run-command invoke ` - --command-id RunPowerShellScript ` + for ($i = 1; $i -le $NumClients; $i++) { + Write-Output "`nAdding DC IP address to C$i host file..." + $addIpResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --resource-group $ResourceGroup ` + --name C$i ` + --scripts "Add-Content -Path `$env:windir\System32\drivers\etc\hosts -Value `"`n$DcIP`t$DomainName`" -Force" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$addIpResponse") + + Write-Output "`nSetting C$i DNS server to DC1..." + $setDnsResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --resource-group $ResourceGroup ` + --name C$i ` + --scripts "Get-Netadapter | Set-DnsClientServerAddress -ServerAddresses $DcIP" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$setDnsResponse") + + Write-Output "`nRestarting C$i..." + az vm restart ` --resource-group $ResourceGroup ` --name C$i ` - --scripts "`$Password = ConvertTo-SecureString `"$VMPassword`" -AsPlainText -Force; ` - `$Credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $DomainName\$VMAdmin, `$Password; ` - Add-Computer -DomainName $DomainName -Credential `$Credential -Restart" - # The following command fixes this issue: - # https://serverfault.com/questions/754012/windows-10-unable-to-access-sysvol-and-netlogon - Write-Output "`nModifying C$i register to allow access to sysvol..." 
- az vm run-command invoke ` - --command-id RunPowerShellScript ` - --resource-group $ResourceGroup ` - --name C$i ` - --scripts "cmd.exe /c `"%COMSPEC% /C reg add HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths /v \\*\SYSVOL /d RequireMutualAuthentication=0 /t REG_SZ`"" + Write-Output "`nAdding C$i to the domain..." + $addToDomainResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --resource-group $ResourceGroup ` + --name C$i ` + --scripts "`$Password = ConvertTo-SecureString `"$VMPassword`" -AsPlainText -Force; ` + `$Credential = New-Object -TypeName System.Management.Automation.PSCredential -ArgumentList $DomainName\$VMAdmin, `$Password; ` + Add-Computer -DomainName $DomainName -Credential `$Credential -Restart" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$addToDomainResponse") + + # The following command fixes this issue: + # https://serverfault.com/questions/754012/windows-10-unable-to-access-sysvol-and-netlogon + Write-Output "`nModifying C$i register to allow access to sysvol..." + $addToSysvolResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --resource-group $ResourceGroup ` + --name C$i ` + --scripts "cmd.exe /c `"%COMSPEC% /C reg add HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\NetworkProvider\HardenedPaths /v \\*\SYSVOL /d RequireMutualAuthentication=0 /t REG_SZ`"" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$addToSysvolResponse") + } } +Write-Output $ProcessSeparator Write-Output "`nVM login info:" -Write-Output "Username: $($VMAdmin)" -Write-Output "Password: $($VMPassword)" +Write-Output "ResourceGroup: $( $ResourceGroup )" +Write-Output "Username: $( $VMAdmin )" +Write-Output "Password: $( $VMPassword )" Write-Output "SAVE THE ABOVE INFO`n" +Write-Output $ProcessSeparator + +if (-Not $LinuxOnly){ + Write-Output "`nAdding DNS entry for Linux server..." + Write-Warning "NOTE: To verify, log on to DC1 and run 'Resolve-DnsName ls1' in PowerShell. + If it returns NXDOMAIN, you'll need to add it manually." + Write-Output "The time is $( Get-Date )." + # Define the PowerShell script with the DomainName variable interpolated + $scriptContent = @" +`$scriptBlock = { + Add-DnsServerResourceRecordA -Name LS1 -ZoneName $DomainName. -AllowUpdateAny -IPv4Address $LsIP -TimeToLive 01:00:00 -AsJob +} +`$job = Start-Job -ScriptBlock `$scriptBlock +`$timeout = 120 +if (Wait-Job -Job `$job -Timeout `$timeout) { + Receive-Job -Job `$job + Write-Host 'The script completed within the timeout period.' +} else { + Stop-Job -Job `$job + Remove-Job -Job `$job + Write-Host 'The script timed out after `$timeout seconds.' +} +"@ + + # Convert the script to a Base64-encoded string + $bytes = [System.Text.Encoding]::Unicode.GetBytes($scriptContent) + $encodedScript = [Convert]::ToBase64String($bytes) + + + # Run the encoded script on the Azure VM + Write-Output "`nAdding script to add DNS entry for Linux server. No output expected..." + $createDnsScriptResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --name DC1 ` + --resource-group $ResourceGroup ` + --scripts "Set-Content -Path 'C:\AddDnsRecord.ps1' -Value ([System.Text.Encoding]::Unicode.GetString([System.Convert]::FromBase64String('$encodedScript')))" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$createDnsScriptResponse") -Write-Output "`nAdding DNS entry for Linux server..." 
-Write-Warning "NOTE: Sometimes this final call hangs indefinitely. -Haven't figured out why. If it doesn't finish after a few minutes, -hit ctrl+c to kill the process. Even if it didn't exit normally, -it is likely that the DNS entry was still successfully added. To -verify, log on to DC1 and run 'Resolve-DnsName ls1' in PowerShell. -If it returns NXDOMAIN, you'll need to add it manually." -Write-Output "The time is $(Get-Date)." -az vm run-command create ` + Write-Output "`nRunning script to add DNS entry for Linux server. It could time out or not. Check output of the next command..." + $addDnsRecordResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --name DC1 ` + --resource-group $ResourceGroup ` + --scripts "C:\AddDnsRecord.ps1" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$addDnsRecordResponse") + + Write-Output "`nAdding ls1 to hosts file..." + $writeToHostsFileResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --name DC1 ` --resource-group $ResourceGroup ` - --location $Location ` - --run-as-user $DomainName\$VMAdmin ` - --run-as-password $VMPassword ` - --run-command-name "addDNSRecord" ` - --vm-name DC1 ` - --script "Add-DnsServerResourceRecordA -Name `"LS1`" -ZoneName $DomainName -AllowUpdateAny -IPv4Address $LsIP -TimeToLive 01:00:00" + --scripts "Add-Content -Path 'C:\windows\system32\drivers\etc\hosts' -Value '$LsIP ls1.$DomainName ls1'" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$writeToHostsFileResponse") + + Write-Host "Checking if ls1 resolves. This should resolve to ls1.lme.local->${LsIP}, not another domain..." + $resolveLs1Response = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --resource-group $ResourceGroup ` + --name DC1 ` + --scripts "Resolve-DnsName ls1" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$resolveLs1Response") + + Write-Host "Removing the Dns script. No output expected..." + $removeDnsRecordScriptResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --name DC1 ` + --resource-group $ResourceGroup ` + --scripts "Remove-Item -Path 'C:\AddDnsRecord.ps1' -Force" + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$removeDnsRecordScriptResponse") + +} Write-Output "Done." diff --git a/testing/configure/azure_scripts/copy_file_to_container.ps1 b/testing/configure/azure_scripts/copy_file_to_container.ps1 new file mode 100644 index 00000000..bac7ec33 --- /dev/null +++ b/testing/configure/azure_scripts/copy_file_to_container.ps1 @@ -0,0 +1,81 @@ +<# +.SYNOPSIS +Uploads a file to an Azure Blob Storage container and outputs the SAS URL. + +.DESCRIPTION +This script uploads a specified file to a given Azure Blob Storage container and generates a Shared Access Signature (SAS) URL for the uploaded item. +It requires the local file path, container name, storage account name, and storage account key as mandatory parameters. +This script is useful for automating the process of uploading files to Azure Blob Storage and obtaining a SAS URL for accessing the uploaded file. + +.PARAMETER LocalFilePath +The full local file path of the file to be uploaded. + +.PARAMETER ContainerName +The name of the Azure Blob Storage container where the file will be uploaded. + +.PARAMETER StorageAccountName +The name of the Azure Storage account. + +.PARAMETER StorageAccountKey +The key for the Azure Storage account. 
+ +.OUTPUTS +Shared Access Signature (SAS) URL of the uploaded file. + +.EXAMPLE +.\copy_file_to_container.ps1 -LocalFilePath "C:\path\to\file.txt" -ContainerName "examplecontainer" -StorageAccountName "examplestorageaccount" -StorageAccountKey "examplekey" + +This example uploads 'file.txt' from the local path to 'examplecontainer' in the Azure Storage account named 'examplestorageaccount' and outputs the SAS URL for the uploaded file. + +.NOTES +- Ensure that the Azure CLI is installed and configured with the necessary permissions to access the specified Azure Storage account and container. +- The SAS URL provides access to the file with read permissions and is valid for 1 day. +#> + +param( + [Parameter(Mandatory = $true)] + [string]$LocalFilePath, + + [Parameter(Mandatory = $true)] + [string]$ContainerName, + + [Parameter(Mandatory = $true)] + [string]$StorageAccountName, + + [Parameter(Mandatory = $true)] + [string]$StorageAccountKey +) + +# Upload file to the blob container +$UploadResponse = az storage blob upload ` + --container-name $ContainerName ` + --file $LocalFilePath ` + --name (Split-Path $LocalFilePath -Leaf) ` + --account-name $StorageAccountName ` + --account-key $StorageAccountKey ` + --overwrite + +# Write the upload response to the standard output stream +Write-Host $UploadResponse + +$BlobName = (Split-Path $LocalFilePath -Leaf) +$ExpiryTime = (Get-Date).AddDays(1).ToString('yyyy-MM-ddTHH:mm:ssZ') + +# Generate SAS URL for the blob +$SasUrl = az storage blob generate-sas ` + --account-name $StorageAccountName ` + --account-key $StorageAccountKey ` + --container-name $ContainerName ` + --name $BlobName ` + --permissions r ` + --expiry $ExpiryTime ` + --output tsv + +# Write the SAS URL generation response to the standard output stream +Write-Host "SAS URL generated successfully." + +# Set the full url var for returning to the user for use in the next script +$FullUrl = "https://${StorageAccountName}.blob.core.windows.net/${ContainerName}/${BlobName}?${SasUrl}" + +# Output the FullUrl to the success output stream +Write-Output $FullUrl diff --git a/testing/configure/azure_scripts/create_blob_container.ps1 b/testing/configure/azure_scripts/create_blob_container.ps1 new file mode 100644 index 00000000..1590a702 --- /dev/null +++ b/testing/configure/azure_scripts/create_blob_container.ps1 @@ -0,0 +1,101 @@ +<# +.SYNOPSIS +This script creates a new Azure Storage Account and Blob Container within a specified Azure Resource Group. + +.DESCRIPTION +Automates the creation of a unique Azure Storage Account and Blob Container. +Requires the Azure Resource Group name as a mandatory argument. +Generates unique names for the storage account and container, creates the storage account, retrieves the storage account key, +creates a blob container, and saves the configuration to a 'config.ps1' file in the script's directory. + +.PARAMETER ResourceGroup +The name of the Azure Resource Group for the storage account and blob container. + +.EXAMPLE +.\create_blob_container.ps1 -ResourceGroup "YourResourceGroupName" + +Replace "YourResourceGroupName" with the name of your Azure Resource Group. + +.NOTES +- Requires Azure CLI and Azure account login. +- Ensure appropriate permissions in Azure. +- Handle the generated 'config.ps1' file securely. 
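+- A typical way for a follow-on script to consume the generated file (illustrative; adjust the path as needed) is to dot-source it:
+    . .\config.ps1
+    Write-Output "Using container '$ContainerName' in account '$StorageAccountName'"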
+ +#> + + +param( + [Parameter(Mandatory=$true)] + [string]$ResourceGroup +) + +function New-AzureName { + param ( + [Parameter(Mandatory=$true)] + [string]$Prefix + ) + + # Ensuring the prefix is lowercase as Azure Storage Account names must be all lowercase + $Prefix = $Prefix.ToLower() + + # Generate a string of random lowercase letters and numbers + $randomCharacters = -join ((48..57) + (97..122) | Get-Random -Count (24 - $Prefix.Length) | ForEach-Object { [char]$_ }) + + return $Prefix + $randomCharacters +} + +# Get the location of the resource group +$Location = (az group show --name $ResourceGroup --query location --output tsv) + +# Generate a unique storage account name +$StorageAccountName = New-AzureName -Prefix "st" + +# Generate a container name +$ContainerName = New-AzureName -Prefix "container" + +# Variables used for Azure tags +$CurrentUser = $(az account show | ConvertFrom-Json).user.name +$Today = $(Get-Date).ToString("yyyy-MM-dd") +$Project = "LME" + +# Create a new storage account +az storage account create ` + --name $StorageAccountName ` + --resource-group $ResourceGroup ` + --location $Location ` + --sku Standard_LRS ` + --tags project=$Project created=$Today createdBy=$CurrentUser + +# Wait for a moment to ensure the storage account is available +Start-Sleep -Seconds 10 + +# Get the storage account key +$StorageAccountKey = (az storage account keys list ` + --resource-group $ResourceGroup ` + --account-name $StorageAccountName ` + --query '[0].value' ` + --output tsv) + +# Create a blob container +az storage container create ` + --name $ContainerName ` + --account-name $StorageAccountName ` + --account-key $StorageAccountKey + +# Output the created resources' details +Write-Output "Created Storage Account: $StorageAccountName" +Write-Output "StorageAccountKey: $StorageAccountKey" +Write-Output "Created Container: $ContainerName" + +# Define the file path in the same directory as the running script +$filePath = Join-Path -Path $PSScriptRoot -ChildPath "config.ps1" + +# Write the variables as PowerShell script to the file +@" +`$StorageAccountName = '$StorageAccountName' +`$StorageAccountKey = '$StorageAccountKey' +`$ContainerName = '$ContainerName' +"@ | Set-Content -Path $filePath + + + diff --git a/testing/configure/azure_scripts/download_in_container.ps1 b/testing/configure/azure_scripts/download_in_container.ps1 new file mode 100644 index 00000000..33926357 --- /dev/null +++ b/testing/configure/azure_scripts/download_in_container.ps1 @@ -0,0 +1,106 @@ +<# +.SYNOPSIS +This script automates the file download process on a specified VM based on its OS type. + +.DESCRIPTION +The script takes parameters for VM name, resource group, file URL, destination file path, username, and OS type. It processes these parameters to download a file to a VM, either running Windows or Linux. The script determines the appropriate command to create a directory (if necessary) and download the file to the specified VM, handling differences in command syntax and file path conventions based on the OS. + +.PARAMETER VMName +The name of the Virtual Machine where the file will be downloaded. + +.PARAMETER ResourceGroup +The name of the Azure resource group where the VM is located. + +.PARAMETER FileDownloadUrl +The URL of the file to be downloaded. + +.PARAMETER DestinationFilePath +The complete path where the file should be downloaded on the VM. This path is processed to extract just the filename. 
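+For example, "C:\path\to\file.zip" is reduced to 'file.zip', which then lands in C:\lme (Windows) or /home/<UserName>/lme (Linux).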
+
+.PARAMETER UserName
+The username for the VM, used in constructing the file path for Linux systems. Default is 'admin.ackbar'.
+
+.PARAMETER Os
+The operating system type of the VM. Accepts 'Windows', 'Linux', or 'linux'. Default is 'Windows'.
+
+.EXAMPLE
+.\download_in_container.ps1 `
+    -VMName "MyVM" `
+    -ResourceGroup "MyResourceGroup" `
+    -FileDownloadUrl "http://example.com/file.zip" `
+    -DestinationFilePath "C:\path\to\file.zip" `
+    -UserName "admin.ackbar" `
+    -Os "Windows"
+
+This example downloads the file from 'http://example.com/file.zip' to 'C:\lme\file.zip'
+on the VM named 'MyVM' in the 'MyResourceGroup' resource group (the destination path is reduced to its filename).
+
+.NOTES
+- Ensure that the Azure CLI is installed and configured with the necessary permissions to access and run commands on the specified Azure VM.
+- The VM must be able to reach the download URL and have permission to write to the destination directory.
+#>
+
+param(
+    [Parameter(Mandatory=$true)]
+    [string]$VMName,
+
+    [Parameter(Mandatory=$true)]
+    [string]$ResourceGroup,
+
+    [Parameter(Mandatory=$true)]
+    [string]$FileDownloadUrl,
+
+    [Parameter(Mandatory=$true)]
+    [string]$DestinationFilePath, # This will be stripped to only the filename
+
+    [Parameter()]
+    [string]$UserName = "admin.ackbar",
+
+    [Parameter()]
+    [ValidateSet("Windows","Linux","linux")]
+    [string]$Os = "Windows"
+)
+
+# Convert the OS parameter to lowercase for consistent comparison
+$Os = $Os.ToLower()
+
+# Extract just the filename from the destination file path
+$DestinationFileName = Split-Path -Leaf $DestinationFilePath
+
+# Set the destination path depending on the OS
+if ($Os -eq "linux") {
+    $DestinationPath = "/home/$UserName/lme/$DestinationFileName"
+    # Create the lme directory if it doesn't exist
+    $DirectoryCreationScript = "mkdir -p '/home/$UserName/lme'"
+    # TODO: We don't want to output this until we fix it so we can put all of the output from the whole script into one JSON object
+    # We are just ignoring the output for now
+    $CreateDirectoryResponse = az vm run-command invoke `
+        --command-id RunShellScript `
+        --resource-group $ResourceGroup `
+        --name $VMName `
+        --scripts $DirectoryCreationScript
+} else {
+    $DestinationPath = "C:\lme\$DestinationFileName"
+}
+
+# The download script
+$DownloadScript = if ($Os -eq "linux") {
+    "curl -o '$DestinationPath' '$FileDownloadUrl'"
+} else {
+    "Invoke-WebRequest -Uri '$FileDownloadUrl' -OutFile '$DestinationPath'"
+}
+
+# Execute the download script with the appropriate command based on OS
+if ($Os -eq "linux") {
+    az vm run-command invoke `
+        --command-id RunShellScript `
+        --resource-group $ResourceGroup `
+        --name $VMName `
+        --scripts $DownloadScript
+} else {
+    az vm run-command invoke `
+        --command-id RunPowerShellScript `
+        --resource-group $ResourceGroup `
+        --name $VMName `
+        --scripts $DownloadScript
+}
diff --git a/testing/configure/azure_scripts/extract_archive.ps1 b/testing/configure/azure_scripts/extract_archive.ps1
new file mode 100644
index 00000000..4deabcb0
--- /dev/null
+++ b/testing/configure/azure_scripts/extract_archive.ps1
@@ -0,0 +1,90 @@
+<#
+.SYNOPSIS
+Unzips a file on a specified Azure Virtual Machine.
+
+.DESCRIPTION
+This script unzips a specified zip file on an Azure Virtual Machine (VM). It takes the VM's username and a filename (with optional path),
+strips the path, constructs the full paths in the VM's 'lme' directory, strips the extension from the filename for the extraction path,
+and unzips the file.
The script requires the VM name, resource group name, username on the VM, and the filename of the zip file. + +.PARAMETER VMName +The name of the Azure Virtual Machine where the file will be unzipped. + +.PARAMETER ResourceGroup +The name of the Azure Resource Group that contains the VM. + +.PARAMETER Filename +The name (and optional path) of the zip file to be unzipped. + +.EXAMPLE +.\extract_archive.ps1 ` + -VMName "DC1" ` + -ResourceGroup "YourResourceGroupName" ` + -Filename "filename.zip" ` + -UserName "admin.ackbar" ` + -Os "Windows" + +This example unzips 'filename.zip' from the 'Downloads' directory of the user 'username' on the VM "DC1" in the resource group "YourResourceGroupName", and extracts it to a subdirectory named 'filename'. + +.NOTES +- Ensure that the Azure CLI is installed and configured with the necessary permissions to access and run commands on the specified Azure VM. +- The VM should have the necessary permissions to read the zip file and write to the extraction directory. +#> + +param( + [Parameter(Mandatory=$true)] + [string]$VMName, + + [Parameter(Mandatory=$true)] + [string]$ResourceGroup, + + [Parameter(Mandatory=$true)] + [string]$Filename, + + [Parameter()] + [string]$UserName = "admin.ackbar", + + [Parameter()] + [ValidateSet("Windows","Linux","linux")] + [string]$Os = "Windows" +) + +# Convert the OS parameter to lowercase for consistent comparison +$Os = $Os.ToLower() + +# Extract just the filename (ignoring any provided path) +$JustFilename = Split-Path -Leaf $Filename + +# Set paths depending on the OS +if ($Os -eq "linux") { + $ZipFilePath = "/home/$UserName/lme/$JustFilename" + $FileBaseName = [System.IO.Path]::GetFileNameWithoutExtension($JustFilename) + $ExtractToPath = "/home/$UserName/lme/$FileBaseName" # Extract to a subdirectory + + $UnzipScript = @" + unzip '$ZipFilePath' -d '$ExtractToPath' +"@ +} else { + $ZipFilePath = "C:\lme\$JustFilename" + $FileBaseName = [System.IO.Path]::GetFileNameWithoutExtension($JustFilename) + $ExtractToPath = "C:\lme\$FileBaseName" # Extract to a subdirectory + + $UnzipScript = @" + Expand-Archive -Path '$ZipFilePath' -DestinationPath '$ExtractToPath' +"@ +} + +# Execute the unzip script with the appropriate command based on OS +if ($Os -eq "linux") { + az vm run-command invoke ` + --command-id RunShellScript ` + --resource-group $ResourceGroup ` + --name $VMName ` + --scripts $UnzipScript +} else { + az vm run-command invoke ` + --command-id RunPowerShellScript ` + --resource-group $ResourceGroup ` + --name $VMName ` + --scripts $UnzipScript +} diff --git a/testing/configure/azure_scripts/lib/utilityFunctions.ps1 b/testing/configure/azure_scripts/lib/utilityFunctions.ps1 new file mode 100644 index 00000000..838d157c --- /dev/null +++ b/testing/configure/azure_scripts/lib/utilityFunctions.ps1 @@ -0,0 +1,143 @@ +function Format-AzVmRunCommandOutput { + param ( + [Parameter(Mandatory = $true)] + [string]$JsonResponse + ) + + $results = @() + + try { + $responseObj = $JsonResponse | ConvertFrom-Json +# Write-Output "Converted JSON object: $responseObj" + + if ($responseObj -and $responseObj.value) { + $stdout = "" + $stderr = "" + + foreach ($item in $responseObj.value) { +# Write-Output "Processing item: $($item.code)" + + # Check for StdOut and StdErr + if ($item.code -like "ComponentStatus/StdOut/*") { + $stdout += $item.message + "`n" + } elseif ($item.code -like "ComponentStatus/StdErr/*") { + $stderr += $item.message + "`n" + } + + # Additional case to handle other types of 'code' + # This ensures that all 
messages are captured + else { + $stdout += $item.message + "`n" + } + } + + if ($stdout -or $stderr) { + $results += New-Object PSObject -Property @{ + StdOut = $stdout + StdErr = $stderr + } + } + } + } catch { + $errorMessage = $_.Exception.Message + Write-Output "Error: $errorMessage" + $results += New-Object PSObject -Property @{ + StdOut = "Error: $errorMessage" + StdErr = "" + } + } + + if (-not $results) { + $results += New-Object PSObject -Property @{ + StdOut = "No data or invalid data received." + StdErr = "" + } + } + + return $results +} + +function Show-FormattedOutput { + param ( + [Parameter(Mandatory = $true)] + [Object[]]$FormattedOutput + ) + + foreach ($item in $FormattedOutput) { + if ($item -is [string]) { + # Handle string messages (like error or informational messages) + Write-Output $item + } + elseif ($item -is [PSCustomObject]) { + # Handle custom objects with StdOut and StdErr + if (![string]::IsNullOrWhiteSpace($item.StdOut)) { + Write-Output "Output (stdout):" + Write-Output $item.StdOut + } + if (![string]::IsNullOrWhiteSpace($item.StdErr)) { + Write-Output "Error (stderr):" + Write-Output $item.StdErr + } + } + } +} + +function Get-PrivateKeyFromJson { + param ( + [Parameter(Mandatory = $true)] + [string]$jsonResponse + ) + + try { + # Convert the JSON string to a PowerShell object + $responseObj = $jsonResponse | ConvertFrom-Json + + # Extract the 'message' field + $message = $responseObj.value[0].message + + # Define the start and end markers for the private key + $startMarker = "-----BEGIN OPENSSH PRIVATE KEY-----" + $endMarker = "-----END OPENSSH PRIVATE KEY-----" + + # Find the positions of the start and end markers + $startPosition = $message.IndexOf($startMarker) + $endPosition = $message.IndexOf($endMarker) + + if ($startPosition -lt 0 -or $endPosition -lt 0) { + Write-Error "Private key markers not found in the JSON response." + return $null + } + + # Extract the private key, including the markers + $privateKey = $message.Substring($startPosition, $endPosition - $startPosition + $endMarker.Length) + + # Return the private key + return $privateKey + } + catch { + Write-Error "An error occurred while extracting the private key: $_" + return $null + } +} + +function Invoke-GPUpdateOnVMs { + param( + [Parameter(Mandatory = $true)] + [string]$ResourceGroup, + [int]$numberOfClients = 2 + ) + + for ($i = 1; $i -le $numberOfClients; $i++) { + $vmName = "C$i" # Dynamically create VM name + + # Invoke the command on the VM + $gpupdateResponse = az vm run-command invoke ` + --command-id RunPowerShellScript ` + --name $vmName ` + --resource-group $ResourceGroup ` + --scripts "gpupdate /force" + + # Call the existing Show-FormattedOutput function + Show-FormattedOutput -FormattedOutput (Format-AzVmRunCommandOutput -JsonResponse "$gpupdateResponse") + } +} diff --git a/testing/configure/azure_scripts/run_script_in_container.ps1 b/testing/configure/azure_scripts/run_script_in_container.ps1 new file mode 100644 index 00000000..67d15c5f --- /dev/null +++ b/testing/configure/azure_scripts/run_script_in_container.ps1 @@ -0,0 +1,59 @@ +<# +.SYNOPSIS +Executes a specified PowerShell script with arguments on an Azure Virtual Machine. + +.DESCRIPTION +This script remotely executes a PowerShell script that is already present on an Azure Virtual Machine (VM), +passing specified arguments to it. It uses Azure's 'az vm run-command invoke' to run the specified script +located on the VM. 
The script requires the VM name, resource group name, the full path of the script on the VM, +and a string of arguments to pass to the script. + +.PARAMETER ResourceGroup +The name of the Azure Resource Group that contains the VM. + +.PARAMETER VMName +The name of the Azure Virtual Machine where the script will be executed. + +.PARAMETER ScriptPathOnVM +The full path of the PowerShell script on the Azure VM that needs to be executed. + +.PARAMETER ScriptArguments +A string of arguments that will be passed to the script. + +.EXAMPLE +.\run_script_in_container.ps1 ` + -ResourceGroup "YourResourceGroupName" ` + -VMName "VMName" ` + -ScriptPathOnVM "C:\path\to\your\script.ps1" ` + -ScriptArguments "-Arg1 value1 -Arg2 value2" + +This example executes a script located at 'C:\path\to\your\script.ps1' on the VM named "VMName" + in the resource group "YourResourceGroup", passing it the arguments "-Arg1 value1 -Arg2 value2". + +.NOTES +- Ensure that the Azure CLI is installed and configured with the necessary permissions to access and run commands on the specified Azure VM. +- The specified script must exist on the VM and the VM should have the necessary permissions to execute it. +#> + +param( + [Parameter(Mandatory=$true)] + [string]$ResourceGroup, + + [Parameter(Mandatory=$true)] + [string]$VMName, + + [Parameter(Mandatory=$true)] + [string]$ScriptPathOnVM, # The full path of the script on the VM + + [string]$ScriptArguments # Arguments to pass to the script +) + +$InvokeScriptCommand = @" +& '$ScriptPathOnVM' $ScriptArguments +"@ + +az vm run-command invoke ` + --command-id RunPowerShellScript ` + --resource-group $ResourceGroup ` + --name $VMName ` + --scripts $InvokeScriptCommand diff --git a/testing/configure/azure_scripts/zip_my_parents_parent.ps1 b/testing/configure/azure_scripts/zip_my_parents_parent.ps1 new file mode 100644 index 00000000..ef034496 --- /dev/null +++ b/testing/configure/azure_scripts/zip_my_parents_parent.ps1 @@ -0,0 +1,34 @@ +<# +.SYNOPSIS +Zips the parent of the parent directory of the script and outputs the path of the ZIP file. + +.DESCRIPTION +This script compresses the parent directory of the parent of its location into a ZIP file. +It then outputs the full path of the created ZIP file. This is useful for quickly archiving the contents of the parent directory. + +.EXAMPLE +This example demonstrates how to execute the script and capture the path of the created ZIP file. +# Define the path to this zip script +$zipScriptPath = "C:\path\to\zip_my_parents_parent.ps1" + +# Execute the zip script and capture the output (filename of the zip file) +$zipFilePath = & $zipScriptPath + +.NOTES +- Ensure that PowerShell 5.0 or later is installed, as this script uses the Compress-Archive cmdlet. +- The script assumes read and write permissions in the script's and its parent directory. 
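+- The emitted path pairs naturally with copy_file_to_container.ps1; an illustrative follow-on call (using the variable from the example above plus values from create_blob_container.ps1's config.ps1) is:
+    .\copy_file_to_container.ps1 -LocalFilePath $zipFilePath -ContainerName $ContainerName -StorageAccountName $StorageAccountName -StorageAccountKey $StorageAccountKey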
+#> +# Get the full path of the script's parent directory +$scriptParentDir = Split-Path -Parent $PSScriptRoot + +# Get the name of the parent directory +$parentDirName = Split-Path -Leaf $scriptParentDir + +# Define the destination path for the zip file (adjacent to the parent directory) +$destinationZipPath = Join-Path -Path (Split-Path -Parent $scriptParentDir) -ChildPath ("$parentDirName.zip") + +# Create the zip file +Compress-Archive -Path "$scriptParentDir\*" -DestinationPath $destinationZipPath -Force + +# Output the path of the created zip file +$destinationZipPath diff --git a/testing/configure/chown_dc1_private_key.ps1 b/testing/configure/chown_dc1_private_key.ps1 new file mode 100644 index 00000000..77aa76f3 --- /dev/null +++ b/testing/configure/chown_dc1_private_key.ps1 @@ -0,0 +1,21 @@ +# Path to the private key +$PrivateKeyPath = "C:\lme\id_rsa" + +# Define the SYSTEM account +$SystemAccount = New-Object System.Security.Principal.NTAccount("NT AUTHORITY", "SYSTEM") + +# Get the current ACL of the file +$Acl = Get-Acl -Path $PrivateKeyPath + +# Clear any existing Access Rules +$Acl.SetAccessRuleProtection($true, $false) +$Acl.Access | ForEach-Object { $Acl.RemoveAccessRule($_) | Out-Null } + +# Create a new Access Rule granting FullControl to SYSTEM +$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($SystemAccount, "FullControl", "Allow") + +# Add the Access Rule to the ACL +$Acl.AddAccessRule($accessRule) + +# Set the updated ACL back to the file +Set-Acl -Path $PrivateKeyPath -AclObject $Acl diff --git a/testing/configure/create_lme_directory.ps1 b/testing/configure/create_lme_directory.ps1 new file mode 100644 index 00000000..3a4bf5ed --- /dev/null +++ b/testing/configure/create_lme_directory.ps1 @@ -0,0 +1,27 @@ +# Define the directory path +param( + [string]$DirectoryPath = "C:\lme" +) + +# Create the directory if it doesn't already exist +if (-not (Test-Path -Path $DirectoryPath)) { + New-Item -Path $DirectoryPath -ItemType Directory +} + +# Define the security principal for 'All Users' +$allUsers = New-Object System.Security.Principal.SecurityIdentifier("S-1-1-0") + +# Get the current ACL of the directory +$acl = Get-Acl -Path $DirectoryPath + +# Define the rights (read and execute) +$rights = [System.Security.AccessControl.FileSystemRights]::ReadAndExecute + +# Create the rule (allowing read and execute access) +$accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($allUsers, $rights, 'ContainerInherit, ObjectInherit', 'None', 'Allow') + +# Add the rule to the ACL +$acl.AddAccessRule($accessRule) + +# Set the ACL back to the directory +Set-Acl -Path $DirectoryPath -AclObject $acl diff --git a/testing/configure/create_ou.ps1 b/testing/configure/create_ou.ps1 new file mode 100644 index 00000000..5b546f3c --- /dev/null +++ b/testing/configure/create_ou.ps1 @@ -0,0 +1,23 @@ +param( + [string]$Domain = "lme.local", + [string]$ClientOUCustomName = "LMEClients" +) + +Import-Module ActiveDirectory + +# Split the domain into parts and construct the ParentContainerDN +$DomainParts = $Domain -split "\." 
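+# For example, the default "lme.local" splits into "lme","local", which the next line joins into "DC=lme,DC=local".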
+$ParentContainerDN = ($DomainParts | ForEach-Object { "DC=$_" }) -join "," + + +# Define the distinguished name (DN) for the new OU +$NewOUDN = "OU=$ClientOUCustomName,$ParentContainerDN" + +# Check if the OU already exists +if (-not (Get-ADOrganizationalUnit -Filter "DistinguishedName -eq '$NewOUDN'" -ErrorAction SilentlyContinue)) { + # Create the new OU + New-ADOrganizationalUnit -Name $ClientOUCustomName -Path $ParentContainerDN + Write-Output "Organizational Unit '$ClientOUCustomName' created successfully under $ParentContainerDN." +} else { + Write-Output "Organizational Unit '$ClientOUCustomName' already exists under $ParentContainerDN." +} diff --git a/testing/configure/download_files.ps1 b/testing/configure/download_files.ps1 new file mode 100644 index 00000000..1bc6588e --- /dev/null +++ b/testing/configure/download_files.ps1 @@ -0,0 +1,23 @@ +param( + [string]$Directory = $env:USERPROFILE +) + +# Base directory path - use provided username or default to USERPROFILE +$BaseDirectoryPath = if ($Directory -and ($Directory -ne $env:USERPROFILE)) { + "C:\$Directory" +} else { + "$env:USERPROFILE\Downloads\" +} + +# Todo: Allow for downloading a version by adding a parameter for the version number +$ApiUrl = "https://api.github.com/repos/cisagov/LME/releases/latest" +$latestRelease = Invoke-RestMethod -Uri $ApiUrl +$zipFileUrl = $latestRelease.assets | Where-Object { $_.content_type -eq 'application/zip' } | Select-Object -ExpandProperty browser_download_url +$downloadPath = "$BaseDirectoryPath\" + $latestRelease.name + ".zip" +$extractPath = "$BaseDirectoryPath\LME" + +Invoke-WebRequest -Uri $zipFileUrl -OutFile $downloadPath +if (-not (Test-Path -Path $extractPath)) { + New-Item -ItemType Directory -Path $extractPath +} +Expand-Archive -LiteralPath $downloadPath -DestinationPath $extractPath diff --git a/testing/configure/install_chapter_1.ps1 b/testing/configure/install_chapter_1.ps1 new file mode 100644 index 00000000..ee16a0a4 --- /dev/null +++ b/testing/configure/install_chapter_1.ps1 @@ -0,0 +1,65 @@ +param ( + [Parameter( + HelpMessage="Path to the configuration directory. Default is 'C:\lme\configure'." + )] + [string]$ConfigurePath = "C:\lme\configure", + [Parameter( + HelpMessage="Path to the root install directory. Default is 'C:\lme'." + )] + [string]$RootInstallDir = "C:\lme" + +) + +# Exit the script on any error +$ErrorActionPreference = 'Stop' +$ProcessSeparator = "`n----------------------------------------`n" + +# Change directory to the configure directory +Set-Location -Path $ConfigurePath + +# Run the scripts and check for failure +Write-Output "Creating the configurePath directory..." +.\create_lme_directory.ps1 -DirectoryPath $RootInstallDir +Write-Output $ProcessSeparator + +Write-Output "Downloading the files..." +.\download_files.ps1 -Directory lme +Write-Output $ProcessSeparator + +Write-Output "Importing the GPOs..." +.\wec_import_gpo.ps1 -Directory lme +Write-Output $ProcessSeparator + +Start-Sleep 10 +Write-Output "Updating the GPO server name..." +.\wec_gpo_update_server_name.ps1 +Write-Output $ProcessSeparator + +Write-Output "Creating the OU..." +.\create_ou.ps1 +Write-Output $ProcessSeparator + +Write-Output "Linking the GPOs..." +.\wec_link_gpo.ps1 +Write-Output $ProcessSeparator + +Write-Output "Provisioning the WEC service..." +.\wec_service_provisioner.ps1 +Write-Output $ProcessSeparator + +# Run the wevtutil and wecutil commands +Write-Output "Running wevtutil and wecutil commands to start the wec service manually..." 
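+# wevtutil enables the ForwardedEvents channel (/e:true) without prompting (/q:true);
+# wecutil rs retries the 'lme' subscription and gr reports its runtime status.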
+wevtutil set-log ForwardedEvents /q:true /e:true
+Write-Output $ProcessSeparator
+
+Write-Output "Running wecutil restart command..."
+wecutil rs lme
+Write-Output $ProcessSeparator
+
+Write-Output "Running wecutil gr command..."
+wecutil gr lme
+Write-Output $ProcessSeparator
+
+# Run the move_computers_to_ou script
+Write-Output "Moving the computers to the OU..."
+.\move_computers_to_ou.ps1
diff --git a/testing/configure/install_chapter_2.ps1 b/testing/configure/install_chapter_2.ps1
new file mode 100644
index 00000000..d85a9d6b
--- /dev/null
+++ b/testing/configure/install_chapter_2.ps1
@@ -0,0 +1,28 @@
+param (
+    [Parameter(
+        HelpMessage="Path to the configuration directory. Default is 'C:\lme\configure'."
+    )]
+    [string]$ConfigurePath = "C:\lme\configure"
+)
+
+# Exit the script on any error
+$ErrorActionPreference = 'Stop'
+$ProcessSeparator = "`n----------------------------------------`n"
+
+# Change directory to the configure directory
+Set-Location -Path $ConfigurePath
+
+Write-Output "Installing Sysmon..."
+.\sysmon_install_in_sysvol.ps1
+Write-Output $ProcessSeparator
+
+Write-Output "Importing the GPO..."
+.\sysmon_import_gpo.ps1 -Directory lme
+Write-Output $ProcessSeparator
+
+Write-Output "Updating the GPO variables..."
+.\sysmon_gpo_update_vars.ps1
+Write-Output $ProcessSeparator
+
+Write-Output "Linking the GPO..."
+.\sysmon_link_gpo.ps1
diff --git a/testing/configure/lib/functions.sh b/testing/configure/lib/functions.sh
new file mode 100644
index 00000000..11d1e6b5
--- /dev/null
+++ b/testing/configure/lib/functions.sh
@@ -0,0 +1,47 @@
+extract_credentials() {
+    local file_path=${1:-'/opt/lme/Chapter 3 Files/output.log'}
+
+    if [ ! -f "$file_path" ]; then
+        echo "File not found: $file_path"
+        return 1
+    fi
+
+    # Use sed to extract the lines containing the credentials
+    credentials=$(sed -n '/^## [a-zA-Z_]*:/p' "$file_path")
+
+    # Loop through the extracted lines and assign the values to variables
+    while IFS=: read -r key value; do
+        key=$(echo "$key" | sed 's/^## //g' | tr -d '[:space:]')
+        value=$(echo "$value" | tr -d '\r\n')
+        export "$key"="$value"
+    done <<< "$credentials"
+
+    export ELASTIC_PASSWORD=$elastic
+}
+
+write_credentials_to_file() {
+    local file_path=$1
+    # exit if file path is not provided
+    if [ -z "$file_path" ]; then
+        echo "File path is required"
+        return 1
+    fi
+    # Write credentials to the file
+    echo "export elastic=$elastic" > "$file_path"
+    echo "export kibana=$kibana" >> "$file_path"
+    echo "export logstash_system=$logstash_system" >> "$file_path"
+    echo "export logstash_writer=$logstash_writer" >> "$file_path"
+    echo "export dashboard_update=$dashboard_update" >> "$file_path"
+}
+
+
+extract_ls1_ip() {
+    local file_path=$1
+    # exit if file path is not provided
+    if [ -z "$file_path" ]; then
+        echo "File path is required"
+        return 1
+    fi
+    publicIpAddress=$(sed -n '/Creating LS1.../,/}/p' "$file_path" | awk -F'"' '/publicIpAddress/{print $4}')
+    export LS1_IP=$publicIpAddress
+}
\ No newline at end of file
diff --git a/testing/configure/linux_authorize_private_key.sh b/testing/configure/linux_authorize_private_key.sh
new file mode 100755
index 00000000..c699d816
--- /dev/null
+++ b/testing/configure/linux_authorize_private_key.sh
@@ -0,0 +1,4 @@
+#!/usr/bin/env bash
+cat /home/admin.ackbar/.ssh/id_rsa.pub >> /home/admin.ackbar/.ssh/authorized_keys
+sudo chown admin.ackbar:admin.ackbar /home/admin.ackbar/.ssh/*
+perl -p -i -e 's/root\@LS1/admin.ackbar\@DC1/' /home/admin.ackbar/.ssh/authorized_keys
diff --git
a/testing/configure/linux_install_lme.exp b/testing/configure/linux_install_lme.exp new file mode 100755 index 00000000..1ba53a1c --- /dev/null +++ b/testing/configure/linux_install_lme.exp @@ -0,0 +1,81 @@ +#!/usr/bin/expect + +# Change to the LME directory containing files for the Linux server +cd /opt/lme/Chapter\ 3\ Files/ + +# Adjust the timeout if necessary +set timeout 60 +set expect_out(buffer_size) 1000000 + +log_file -a output.log + +spawn ./deploy.sh install +sleep 1 +expect { + -re {.*OK.*} { + send "\r" + } + -re {.*Proceed.*} { + send "y\r" + } +} + + +expect { + -re {.*Please reboot and re-run this script to finish the install.*} { + send_user "Reboot required. Exiting...\n" + exit + } + -re "Enter the IP of this Linux server.*" { + sleep 1 + send "\r" + } +} + +sleep 1 +expect -re {Windows Event Collector} +sleep 1 +send "ls1.lme.local\r" + +sleep 1 + +expect -re {.*ntinue with self signed certificates.*: y} +sleep 1 +send "\r" +sleep 1 + +expect -re {.*ip Docker Install\? \(\[y\]es/\[n\]o\): n} +sleep 1 +send "\r" + + + +set timeout 310 +expect { + -re {Waiting for Elasticsearch to be ready} { + puts " Elasticsearch is being prepared" + exp_continue + } + -re {\.} { + puts " . " + exp_continue + } + -re {We think your main disk is} { + puts " Disk message received" + exp_continue + } + -re {Bootstrapping} { + puts " Bootstrapping in progress" + exp_continue + } + -re {Uploading Kibana dashboards} { + puts " Kibana dashboards are being uploaded" + exp_continue + } +} + +expect eof + +log_file + +exec cat output.log diff --git a/testing/configure/linux_install_lme.sh b/testing/configure/linux_install_lme.sh new file mode 100755 index 00000000..f0d215c6 --- /dev/null +++ b/testing/configure/linux_install_lme.sh @@ -0,0 +1,111 @@ +#!/bin/bash + +# Change to the directory where the script is located +script_dir=$(dirname "$0") +cd $script_dir || exit 1 +# We need to get the full path of the script dir for below +script_dir=$(pwd) + +# Default username +username="admin.ackbar" + +# Process command line arguments +while getopts "u:v:b:" opt; do + case $opt in + u) username=$OPTARG ;; + v) version=$OPTARG ;; + b) branch=$OPTARG ;; + \?) echo "Invalid option -$OPTARG" >&2; exit 1 ;; + esac +done + +# Check if version matches the pattern +if [[ -n "$version" && ! $version =~ ^[0-9]+\.[0-9]+\.[0-9]+$ ]]; then + echo "Invalid version format. 
Version should match \d+.\d+.\d+" + exit 1 +fi + +# Remove any existing LME directories +sudo rm -rf /opt/cisagov-LME-* /opt/lme + +# Get the tarball URL for the specified version +get_tarball_url() { + echo "https://api.github.com/repos/cisagov/LME/tarball/v$1" +} + +if [ -n "$branch" ]; then + # Clone from the specified branch + git clone --branch "$branch" https://github.com/cisagov/LME.git /opt/lme +else + echo "Getting the code from GitHub" + # Check if a version is provided + if [ -n "$version" ]; then + tarball_url=$(get_tarball_url "$version") + else + tarball_url=$(curl -s https://api.github.com/repos/cisagov/LME/releases/latest | jq -r '.tarball_url') + fi + + # Get the version from the tarball URL + v_version=$(basename "$tarball_url") + + echo "Downloading $tarball_url to file: $v_version" + curl -L "$tarball_url" -o "$v_version" + + # extracts it to a folder like cisagov-LME-3412897 + sudo tar -xzpf "$v_version" -C /opt + rm -rf "$v_version" + + extracted_filename=$(sudo ls -ltd /opt/cisagov-LME-* | grep "^d" | head -n 1 | awk '{print $NF}') + + echo "Extracted to $extracted_filename" + + echo "Renaming directory to /opt/lme" + sudo mv "$extracted_filename" /opt/lme +fi + +# Change the way we check disk usage in the old versions +perl -pi -e 's/DISK_SIZE="\$\(echo "\$DF_OUTPUT".+?\)"/DISK_SIZE=130/' /opt/lme/Chapter\ 3\ Files/deploy.sh +perl -pi -e 's/DISK_80=\$\(\(DISK_SIZE_ROUND \* 80 \/ 100\)\)/DISK_80=91/g' /opt/lme/Chapter\ 3\ Files/deploy.sh + + +echo 'export DEBIAN_FRONTEND=noninteractive' >> ~/.bashrc +echo 'export NEEDRESTART_MODE=a' >> ~/.bashrc +. ~/.bashrc + +# Set the noninteractive modes for root +echo 'export DEBIAN_FRONTEND=noninteractive' | sudo tee -a /root/.bashrc +echo 'export NEEDRESTART_MODE=a' | sudo tee -a /root/.bashrc + +#get interface name of default route +DEFAULT_IF="$(route | grep '^default' | grep -o '[^ ]*$')" + +#get ip of the interface +EXT_IP="$(/sbin/ifconfig "$DEFAULT_IF" | awk -F ' *|:' '/inet /{print $3}')" + +function installdocker() { + echo -e "\e[32m[X]\e[0m Installing Docker" + curl -fsSL https://get.docker.com -o get-docker.sh >/dev/null + sudo sh get-docker.sh >/dev/null + echo "Starting docker" + sudo service docker start + sleep 5 +} + +# Pull the images so you don't have to wait for them in expect +installdocker + +echo -e "\e[32m[X]\e[0m Pulling the images. This may take some time." +docker compose -f /opt/lme/Chapter\ 3\ Files/docker-compose-stack.yml pull --quiet + +# Execute script with root privileges +# Todo: We could put a switch here for different versions and just run different expect scripts +sudo -E bash -c ". /root/.bashrc && $script_dir/linux_install_lme.exp" + +sudo chmod ugo+w "/opt/lme/Chapter 3 Files/output.log" + +if [ -f "/opt/lme/files_for_windows.zip" ]; then + sudo cp /opt/lme/files_for_windows.zip /home/"$username"/ + sudo chown "$username":"$username" /home/"$username"/files_for_windows.zip +else + echo "files_for_windows.zip does not exist. 
Probably because a reboot is required in order to proceed with the install" +fi diff --git a/testing/configure/linux_make_private_key.exp b/testing/configure/linux_make_private_key.exp new file mode 100755 index 00000000..16fd6ec9 --- /dev/null +++ b/testing/configure/linux_make_private_key.exp @@ -0,0 +1,16 @@ +#!/usr/bin/expect + +spawn ssh-keygen -t rsa -b 4096 +sleep 1 +expect -re {Enter file in which to save the key} +send "/home/admin.ackbar/.ssh/id_rsa\r" +sleep 1 +expect -re {empty for no passphrase} +send "\r" +sleep 1 +expect -re {Enter same passphrase again} +send "\r" + +set timeout 60 + +expect eof diff --git a/testing/configure/linux_test_install.sh b/testing/configure/linux_test_install.sh new file mode 100755 index 00000000..3dda731d --- /dev/null +++ b/testing/configure/linux_test_install.sh @@ -0,0 +1,119 @@ +#!/usr/bin/env bash +set -e + +# Get the full path to the directory containing the current script +script_dir=$(dirname "$(realpath "${BASH_SOURCE[0]}")") + +source "${script_dir}/lib/functions.sh" +extract_credentials '/opt/lme/Chapter 3 Files/output.log' + +check_variable() { + local var_name="$1" + local var_value="$2" + + if [ -z "$var_value" ]; then + echo "Error: '$var_name' is not set or is empty" + return 1 # Return a non-zero status to indicate failure + fi +} + +# Perform the checks +check_variable "elastic" "$elastic" || exit 1 +check_variable "kibana" "$kibana" || exit 1 +check_variable "logstash_system" "$logstash_system" || exit 1 +check_variable "logstash_writer" "$logstash_writer" || exit 1 +check_variable "dashboard_update" "$dashboard_update" || exit 1 + +echo "All variables are set correctly." + +# Get the list of containers and their health status +container_statuses=$(docker ps --format "{{.Names}}: {{.Status}}" | grep -v "CONTAINER ID") + +# Check each container's status +unhealthy=false +while read -r line; do + container_name=$(echo "$line" | awk -F': ' '{print $1}') + health_status=$(echo "$line" | awk -F': ' '{print $2}') + + if [[ $health_status != *"(healthy)"* ]]; then + echo "Container $container_name is not healthy: $health_status" + unhealthy=true + exit 1 + fi +done <<< "$container_statuses" + +# Final check +if [ "$unhealthy" = false ]; then + echo "All containers are healthy." +fi + +ELASTICSEARCH_HOST="localhost" +ELASTICSEARCH_PORT="9200" + +# Get list of all indexes +indexes=$(curl -sk -u "elastic:$elastic" "https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/_cat/indices?v" | awk '{print $3}') + +# Check if winlogbeat index exists +if echo "$indexes" | grep -q "winlogbeat"; then + echo "Index 'winlogbeat' exists." +else + echo "Index 'winlogbeat' does not exist." >&2 + exit 1 +fi + +# Check if we can query the winlogbeat index +response=$(curl -sk -u "elastic:$elastic" "https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/winlogbeat-*/_search" -H "Content-Type: application/json" -d '{ + "size": 1, + "query": { + "match_all": {} + } +}') + +# Check if the curl command was successful +if [ $? -eq 0 ]; then + echo "Querying winlogbeat executed successfully." +else + echo "Error executing the query of winlogbeat." >&2 + exit 1 +fi + +# Check the kibana saved objects. 
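+# (The commented block below is kept for reference: it queries the .kibana index directly.
+# The active check further down uses the Kibana saved-objects API instead.)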
+
+# response=$(curl -sk -u "elastic:$elastic" "https://${ELASTICSEARCH_HOST}:${ELASTICSEARCH_PORT}/.kibana/_search" -H "Content-Type: application/json" -d '{
+#   "size": 1000,
+#   "query": {
+#     "term": {
+#       "type": "dashboard"
+#     }
+#   }
+# }')
+# echo $response
+
+
+response=$(curl -sk -u "elastic:$elastic" "https://${ELASTICSEARCH_HOST}/api/kibana/management/saved_objects/_find?perPage=500&page=1&type=dashboard&sortField=updated_at&sortOrder=desc")
+
+# List of dashboard names to check
+declare -a names_to_check=(
+    "User Security"
+    "User HR"
+    "Sysmon Summary"
+    "Security Dashboard - Security Log"
+    "Process Explorer"
+    "Computer Software Overview"
+    "Alerting Dashboard"
+    "HealthCheck Dashboard - Overview"
+)
+
+# Extract dashboard names from the JSON response stored in the variable
+dashboard_names=$(echo "$response" | jq -r '.saved_objects[] | select(.type == "dashboard") | .meta.title')
+
+# Check each name
+for name in "${names_to_check[@]}"; do
+    if grep -qF "$name" <<< "$dashboard_names"; then
+        echo "Dashboard found: $name"
+    else
+        echo "Dashboard NOT found: $name" >&2
+        exit 1
+    fi
+done
diff --git a/testing/configure/linux_update_system.sh b/testing/configure/linux_update_system.sh
new file mode 100755
index 00000000..602e185d
--- /dev/null
+++ b/testing/configure/linux_update_system.sh
@@ -0,0 +1,3 @@
+#!/usr/bin/env bash
+# Install Git and the other tools needed for the install (curl, zip, net-tools, jq, nodejs, expect, python3-venv)
+sudo apt update
+sudo DEBIAN_FRONTEND=noninteractive NEEDRESTART_MODE=a apt install git curl zip net-tools jq nodejs expect python3-venv -y
\ No newline at end of file
diff --git a/testing/configure/list_computers_forwarding_events.ps1 b/testing/configure/list_computers_forwarding_events.ps1
new file mode 100644
index 00000000..ecf8ff3a
--- /dev/null
+++ b/testing/configure/list_computers_forwarding_events.ps1
@@ -0,0 +1,27 @@
+# Execute 'wecutil gr lme' command and capture the output
+$wecutilOutput = wecutil gr lme
+
+# Split the output into individual lines
+$lines = $wecutilOutput -split "`r`n" | Where-Object { $_ -match "\S" } # Exclude empty lines
+
+# Initialize a list to store active computer names
+$activeComputers = @()
+
+# Process each line to extract computer names with active status
+$isActive = $false
+foreach ($line in $lines) {
+    if ($line -match "RunTimeStatus: Active") {
+        $isActive = $true
+    } elseif ($line -match "\.local") {
+        if ($isActive) {
+            if ($line -match "(\S+\.local)") {
+                $activeComputers += $matches[1]
+            }
+        }
+        $isActive = $false
+    }
+}
+
+# Display the active computer names
+Write-Output "Active Computers Forwarding Events:"
+$activeComputers | ForEach-Object { Write-Output $_ }
diff --git a/testing/configure/move_computers_to_ou.ps1 b/testing/configure/move_computers_to_ou.ps1
new file mode 100644
index 00000000..4ef36c49
--- /dev/null
+++ b/testing/configure/move_computers_to_ou.ps1
@@ -0,0 +1,38 @@
+param(
+    [string]$Domain = "lme.local",
+    [string]$ClientOUCustomName = "LMEClients",
+    [string]$CurrentCN = "Computers"
+)
+
+# Import the Active Directory module
+Import-Module ActiveDirectory
+
+# Split the domain into its parts
+$domainParts = $Domain -split '\.'
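+# e.g. "lme.local" -> "lme","local"; the join below yields "DC=lme,DC=local"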
+ +# Construct the domain DN, starting with 'DC=' +$domainDN = 'DC=' + ($domainParts -join ',DC=') + +# Define the DN of the existing Computers container +$computersContainerDN = "CN=$CurrentCN,$domainDN" + +# Define the DN of the target OU +$targetOUDN = "OU=$ClientOUCustomName,$domainDN" + +# Output the DNs for verification +Write-Output "Current Computers Container DN: $computersContainerDN" +Write-Output "Target OU DN: $targetOUDN" + +# Get the computer accounts in the Computers container +$computers = Get-ADComputer -Filter * -SearchBase $computersContainerDN + +# Move each computer to the target OU +foreach ($computer in $computers) { + try { + # Move the computer to the target OU + Move-ADObject -Identity $computer.DistinguishedName -TargetPath $targetOUDN + Write-Output "Moved $($computer.Name) to $targetOUDN" + } catch { + Write-Output "Failed to move $($computer.Name): $_" + } +} \ No newline at end of file diff --git a/testing/configure/sysmon_gpo_update_vars.ps1 b/testing/configure/sysmon_gpo_update_vars.ps1 new file mode 100644 index 00000000..9707039a --- /dev/null +++ b/testing/configure/sysmon_gpo_update_vars.ps1 @@ -0,0 +1,43 @@ +param( + [string]$GpoName = "LME-Sysmon-Task", + [string]$DomainName = "lme.local" +) + +# Get the FQDN of the current server +$fqdn = [System.Net.Dns]::GetHostByName($env:COMPUTERNAME).HostName + +# Get the GPO object +$gpo = Get-GPO -Name $GpoName + +# Check if GPO is found +if ($null -eq $gpo) { + Write-Output "GPO not found" + exit +} + +# Get the GUID of the GPO +$gpoGuid = $gpo.Id + +# Define the path to the XML file +$xmlFilePath = "C:\Windows\SYSVOL\sysvol\$DomainName\Policies\{$gpoGuid}\Machine\Preferences\ScheduledTasks\ScheduledTasks.xml" + +# Get current time and add 5 minutes +$newStartTime = (Get-Date).AddMinutes(5).ToString("yyyy-MM-ddTHH:mm:ss") + +# Load the XML file +$xml = [xml](Get-Content -Path $xmlFilePath) + +# Find the task with name "LME-Sysmon-Task" +$task = $xml.ScheduledTasks.TaskV2 | Where-Object { $_.Properties.name -eq "LME-Sysmon-Task" } + +# Update the start time in the XML +$task.Properties.Task.Triggers.CalendarTrigger.StartBoundary = $newStartTime + +# Update the command path +$task.Properties.Task.Actions.Exec.Command = "\\$fqdn\sysvol\$DomainName\LME\Sysmon\update.bat" + +# Save the modified XML back to the file +$xml.Save($xmlFilePath) + +# Output the new start time for verification +Write-Output "New start time set to: $newStartTime" \ No newline at end of file diff --git a/testing/configure/sysmon_import_gpo.ps1 b/testing/configure/sysmon_import_gpo.ps1 new file mode 100644 index 00000000..2f3eb109 --- /dev/null +++ b/testing/configure/sysmon_import_gpo.ps1 @@ -0,0 +1,34 @@ +param( + [string]$Directory = $env:USERPROFILE +) + +# Determine the base directory path based on the provided username +$baseDirectoryPath = if ($Directory -and ($Directory -ne $env:USERPROFILE)) { + "C:\$Directory" +} else { + "$env:USERPROFILE\Downloads" +} + +$GPOBackupPath = "$baseDirectoryPath\LME\Chapter 2 Files\GPO Deployment\Group Policy Objects" + +$gpoNames = @("LME-Sysmon-Task") + +foreach ($gpoName in $gpoNames) { + $gpo = Get-GPO -Name $gpoName -ErrorAction SilentlyContinue + if (-not $gpo) { + New-GPO -Name $gpoName | Out-Null + Write-Output "Created GPO: $gpoName" + } else { + Write-Output "GPO $gpoName already exists." 
+    }
+
+    try {
+        Import-GPO -BackupGpoName $gpoName -TargetName $gpoName -Path $GPOBackupPath -CreateIfNeeded -ErrorAction Stop
+        Write-Output "Imported settings into GPO: $gpoName"
+    } catch {
+        Throw "Failed to import GPO: $gpoName. The GPODisplayName in bkupinfo.xml may not match or other import error occurred."
+    }
+}
+
+Write-Output "LME Sysmon GPOs have been created and imported successfully."
diff --git a/testing/configure/sysmon_install_in_sysvol.ps1 b/testing/configure/sysmon_install_in_sysvol.ps1
new file mode 100644
index 00000000..41b59777
--- /dev/null
+++ b/testing/configure/sysmon_install_in_sysvol.ps1
@@ -0,0 +1,69 @@
+param(
+    [string]$DomainName = "lme.local" # Default domain name
+)
+
+# Define the SYSVOL path
+$destinationPath = "C:\Windows\SYSVOL\SYSVOL\$DomainName\LME\Sysmon"
+$tempPath = Join-Path $env:TEMP "SysmonTemp"
+
+# Create the LME and Sysmon directories
+New-Item -ItemType Directory -Path $destinationPath -Force
+New-Item -ItemType Directory -Path $tempPath -Force
+
+# Copy update.bat from the LME download directory
+$updateBatSource = "C:\lme\LME\Chapter 2 Files\GPO Deployment\update.bat"
+Copy-Item -Path $updateBatSource -Destination $destinationPath
+
+# Download URL for Sysmon
+$url = "https://download.sysinternals.com/files/Sysmon.zip"
+
+# Download file path
+$zipFilePath = Join-Path $tempPath "Sysmon.zip"
+
+# Download the file
+Invoke-WebRequest -Uri $url -OutFile $zipFilePath
+
+# Unzip the file to temp directory
+Expand-Archive -Path $zipFilePath -DestinationPath $tempPath
+
+# Copy only Sysmon64.exe to destination
+Copy-Item -Path "$tempPath\Sysmon64.exe" -Destination $destinationPath
+
+# Clean up: remove temp directory and zip file
+Remove-Item -Path $tempPath -Recurse -Force
+
+# Download URL for the Sysmon configuration file
+$xmlUrl = "https://raw.githubusercontent.com/SwiftOnSecurity/sysmon-config/master/sysmonconfig-export.xml"
+
+# Destination file path for the Sysmon configuration file
+$xmlFilePath = Join-Path $destinationPath "sysmon.xml"
+
+# Download and rename the file
+Invoke-WebRequest -Uri $xmlUrl -OutFile $xmlFilePath
+
+# Define the destination path for Sigcheck
+$sigcheckDestinationPath = "C:\Windows\SYSVOL\SYSVOL\$DomainName\LME"
+
+# Download URL for Sigcheck
+$sigcheckUrl = "https://download.sysinternals.com/files/Sigcheck.zip"
+
+# Temporary path for Sigcheck zip file
+$sigcheckTempPath = Join-Path $env:TEMP "SigcheckTemp"
+
+# Ensure the temporary directory exists
+New-Item -ItemType Directory -Path $sigcheckTempPath -Force
+
+# Download file path for Sigcheck
+$sigcheckZipFilePath = Join-Path $sigcheckTempPath "Sigcheck.zip"
+
+# Download the Sigcheck zip file
+Invoke-WebRequest -Uri $sigcheckUrl -OutFile $sigcheckZipFilePath
+
+# Unzip the Sigcheck file to temporary directory
+Expand-Archive -Path $sigcheckZipFilePath -DestinationPath $sigcheckTempPath
+
+# Copy only Sigcheck64.exe to the destination
+Copy-Item -Path "$sigcheckTempPath\sigcheck64.exe" -Destination $sigcheckDestinationPath
+
+# Clean up: remove temporary directory and zip file
+Remove-Item -Path $sigcheckTempPath -Recurse -Force
diff --git a/testing/configure/sysmon_link_gpo.ps1 b/testing/configure/sysmon_link_gpo.ps1
new file mode 100644
index 00000000..a29aa9eb
--- /dev/null
+++ b/testing/configure/sysmon_link_gpo.ps1
@@ -0,0 +1,18 @@
+param(
+    [string]$Domain = "lme.local",
+    [string]$ClientOUCustomName = "LMEClients"
+)
+
+Import-Module ActiveDirectory
+
+$domainDN = $Domain -replace '\.', ',DC=' -replace '^', 'DC='
+$OUDistinguishedName = "OU=$ClientOUCustomName,$domainDN"
+
+$GPOName = "LME-Sysmon-Task"
+
+try {
+    New-GPLink -Name $GPOName -Target $OUDistinguishedName
+    Write-Output "GPO '$GPOName' linked to OU '$ClientOUCustomName'."
+} catch {
+    Write-Output "Error linking GPO '$GPOName' to OU '$ClientOUCustomName': $_"
+}
diff --git a/testing/configure/trust_ls1_ssh_key.ps1 b/testing/configure/trust_ls1_ssh_key.ps1
new file mode 100644
index 00000000..0f3d0a41
--- /dev/null
+++ b/testing/configure/trust_ls1_ssh_key.ps1
@@ -0,0 +1,66 @@
+param (
+    [string]$SshHost = "ls1"
+)
+
+$SshDirectory = "C:\Windows\System32\config\systemprofile\.ssh"
+$KnownHostsFile = Join-Path -Path $SshDirectory -ChildPath "known_hosts"
+
+# Ensure the .ssh directory exists
+if (-not (Test-Path -Path $SshDirectory)) {
+    New-Item -ItemType Directory -Path $SshDirectory
+}
+
+# Function to set ACL for the directory, granting FullControl to SYSTEM and applying inheritance
+function Set-SystemOnlyAclForDirectory {
+    param (
+        [string]$path
+    )
+
+    $systemAccount = New-Object System.Security.Principal.NTAccount("NT AUTHORITY", "SYSTEM")
+    $acl = Get-Acl -Path $path
+    $acl.SetAccessRuleProtection($true, $false) # Enable ACL protection, disable inheritance
+    $acl.Access | ForEach-Object { $acl.RemoveAccessRule($_) | Out-Null } # Clear existing rules
+
+    # Create and add the Access Rule for SYSTEM with inheritance
+    $accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($systemAccount, "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow")
+    $acl.AddAccessRule($accessRule)
+
+    # Apply the updated ACL to the directory
+    Set-Acl -Path $path -AclObject $acl
+}
+
+# Function to set ACL for a file, granting FullControl only to SYSTEM
+function Set-SystemOnlyAclForFile {
+    param (
+        [string]$path
+    )
+
+    $systemAccount = New-Object System.Security.Principal.NTAccount("NT AUTHORITY", "SYSTEM")
+    $acl = Get-Acl -Path $path
+    $acl.SetAccessRuleProtection($true, $false) # Enable ACL protection, disable inheritance
+    $acl.Access | ForEach-Object { $acl.RemoveAccessRule($_) | Out-Null } # Clear existing rules
+
+    # Create and add the Access Rule for SYSTEM
+    $accessRule = New-Object System.Security.AccessControl.FileSystemAccessRule($systemAccount, "FullControl", "Allow")
+    $acl.AddAccessRule($accessRule)
+
+    # Apply the updated ACL to the file
+    Set-Acl -Path $path -AclObject $acl
+}
+
+# Set ACL for the .ssh directory with inheritance
+Set-SystemOnlyAclForDirectory -path $SshDirectory
+
+# Ensure the known_hosts file exists
+if (-not (Test-Path -Path $KnownHostsFile)) {
+    New-Item -ItemType File -Path $KnownHostsFile
+}
+
+# Set ACL for the known_hosts file without inheritance
+Set-SystemOnlyAclForFile -path $KnownHostsFile
+
+# Run ssh-keyscan and append output to known_hosts
+ssh-keyscan $SshHost | Out-File -FilePath $KnownHostsFile -Append -Encoding UTF8
+
+# Output the contents of the known_hosts file
+Get-Content -Path $KnownHostsFile
diff --git a/testing/configure/wec_firewall.ps1 b/testing/configure/wec_firewall.ps1
new file mode 100644
index 00000000..3e3bb129
--- /dev/null
+++ b/testing/configure/wec_firewall.ps1
@@ -0,0 +1,18 @@
+# Creates an inbound allow firewall rule for TCP 5985 (WinRM) from the client subnet. Run on the WEC server.
+param (
+    [string]$InboundRuleName = "WinRM TCP In 5985",
+    [string]$ClientSubnet = "10.1.0.0/24",
+    [string]$LocalPort = "5985"
+)
+
+if (-not (Get-NetFirewallRule -DisplayName $InboundRuleName -ErrorAction SilentlyContinue)) {
+    New-NetFirewallRule -DisplayName $InboundRuleName `
+        -Direction Inbound -Protocol TCP `
+        -LocalPort $LocalPort -Action Allow `
+        -RemoteAddress $ClientSubnet `
+        -Description "Allow inbound TCP ${LocalPort} for WinRM from clients subnet"
+} else {
+    Write-Output "Inbound rule '$InboundRuleName' already exists."
+}
+
+Write-Output "Inbound WinRM rule has been configured."
diff --git a/testing/configure/wec_gpo_update_server_name.ps1 b/testing/configure/wec_gpo_update_server_name.ps1
new file mode 100644
index 00000000..6557042b
--- /dev/null
+++ b/testing/configure/wec_gpo_update_server_name.ps1
@@ -0,0 +1,42 @@
+<#
+.SYNOPSIS
+This script sets and retrieves a Group Policy (GP) registry value for Windows Event Log Event Forwarding.
+
+.DESCRIPTION
+The script is used to configure the Subscription Manager URL for Windows Event Log Event Forwarding in a Group Policy setting. It sets the registry value for the Subscription Manager URL using the specified domain, port, and protocol, and then retrieves the value to confirm the setting. This is useful in environments where centralized event log management is required.
+
+.PARAMETER Domain
+The domain for the Subscription Manager URL. Default is 'dc1.lme.local'.
+
+.PARAMETER Port
+The port number for the Subscription Manager URL. Default is 5985.
+
+.PARAMETER Protocol
+The protocol for the Subscription Manager URL. Default is 'http'.
+
+.EXAMPLE
+.\wec_gpo_update_server_name.ps1
+Executes the script with default parameters.
+
+.EXAMPLE
+.\wec_gpo_update_server_name.ps1 -Domain "customdomain.local" -Port 1234 -Protocol "https"
+Executes the script with custom domain, port, and protocol.
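+
+.NOTES
+With the default parameters, the value written below is:
+Server=http://dc1.lme.local:5985/wsman/SubscriptionManager/WEC,Refresh=60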
+ +#> + +param( + [string]$Domain = "dc1.lme.local", + [int]$Port = 5985, + [string]$Protocol = "http" +) + +# Construct the Subscription Manager URL using the provided parameters +$subscriptionManagerUrl = "Server=${Protocol}://${Domain}:${Port}/wsman/SubscriptionManager/WEC,Refresh=60" +Set-GPRegistryValue -Name "LME-WEC-Client" -Key "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\EventLog\EventForwarding\SubscriptionManager" -Value $subscriptionManagerUrl -Type String + +# To get the GP registry value to confirm it's set +$registryValue = Get-GPRegistryValue -Name "LME-WEC-Client" -Key "HKEY_LOCAL_MACHINE\SOFTWARE\Policies\Microsoft\Windows\EventLog\EventForwarding\SubscriptionManager" + +# Output the retrieved registry value +Write-Output "Set the subscription manager url value to: " +$registryValue diff --git a/testing/configure/wec_import_gpo.ps1 b/testing/configure/wec_import_gpo.ps1 new file mode 100644 index 00000000..8906ce2d --- /dev/null +++ b/testing/configure/wec_import_gpo.ps1 @@ -0,0 +1,34 @@ +param( + [string]$Directory = $env:USERPROFILE +) + +# Determine the base directory path based on the provided username +$BaseDirectoryPath = if ($Directory -and ($Directory -ne $env:USERPROFILE)) { + "C:\$Directory" +} else { + "$env:USERPROFILE\Downloads" +} + +$GPOBackupPath = "$BaseDirectoryPath\LME\Chapter 1 Files\Group Policy Objects" + +$GpoNames = @("LME-WEC-Client", "LME-WEC-Server") + +foreach ($gpoName in $GpoNames) { + $gpo = Get-GPO -Name $gpoName -ErrorAction SilentlyContinue + if (-not $gpo) { + New-GPO -Name $gpoName | Out-Null + Write-Output "Created GPO: $gpoName" + } else { + Write-Output "GPO $gpoName already exists." + } + + try { + Import-GPO -BackupGpoName $gpoName -TargetName $gpoName -Path $GPOBackupPath -CreateIfNeeded -ErrorAction Stop + Write-Output "Imported settings into GPO: $gpoName" + } catch { + Throw "Failed to import GPO: $gpoName. The GPODisplayName in bkupinfo.xml may not match or other import error occurred." + } +} + +Write-Output "LME GPOs have been created and imported successfully." + diff --git a/testing/configure/wec_link_gpo.ps1 b/testing/configure/wec_link_gpo.ps1 new file mode 100644 index 00000000..3c8c4921 --- /dev/null +++ b/testing/configure/wec_link_gpo.ps1 @@ -0,0 +1,27 @@ +param( + [string]$Domain = "lme.local", + [string]$ClientOUCustomName = "LMEClients" +) + +Import-Module ActiveDirectory + +$DomainDN = $Domain -replace '\.', ',DC=' -replace '^', 'DC=' +$ClientOUDistinguishedName = "OU=$ClientOUCustomName,$DomainDN" + +$GPONameClient = "LME-WEC-Client" +$GPONameServer = "LME-WEC-Server" +$ServerOUDistinguishedName = "OU=Domain Controllers,$DomainDN" + +try { + New-GPLink -Name $GPONameClient -Target $ClientOUDistinguishedName + Write-Output "GPO '$GPONameClient' linked to OU '$ClientOUCustomName'." +} catch { + Write-Output "Error linking GPO '$GPONameClient' to OU '$ClientOUCustomName': $_" +} + +try { + New-GPLink -Name $GPONameServer -Target $ServerOUDistinguishedName + Write-Output "GPO '$GPONameServer' linked to OU 'Domain Controllers'." 
+} catch { + Write-Output "Error linking GPO '$GPONameServer' to OU 'Domain Controllers': $_" +} diff --git a/testing/configure/wec_service_provisioner.ps1 b/testing/configure/wec_service_provisioner.ps1 new file mode 100644 index 00000000..4c0d1302 --- /dev/null +++ b/testing/configure/wec_service_provisioner.ps1 @@ -0,0 +1,24 @@ +# PowerShell script to configure Windows Event Collector + +param( + [string]$XmlFilePath = "C:\lme\LME\Chapter 1 Files\lme_wec_config.xml" +) + +# Check if Windows Event Collector Service is running and start it if not +$wecService = Get-Service -Name "Wecsvc" +if ($wecService.Status -ne 'Running') { + Start-Service -Name "Wecsvc" + Write-Output "Windows Event Collector Service started." +} else { + Write-Output "Windows Event Collector Service is already running." +} + +# Check if the XML configuration file exists +if (Test-Path -Path $XmlFilePath) { + # Run the wecutil command to configure the collector + wecutil cs $XmlFilePath + Write-Output "wecutil command executed successfully with config file: $XmlFilePath" +} else { + Write-Output "Configuration file not found at $XmlFilePath" +} + diff --git a/testing/configure/wec_start_service.ps1 b/testing/configure/wec_start_service.ps1 new file mode 100644 index 00000000..7677185c --- /dev/null +++ b/testing/configure/wec_start_service.ps1 @@ -0,0 +1,19 @@ +# Start WEC using custom wec xml file + +try { + Start-Service -Name "Wecsvc" + Write-Output "WEC service started successfully." +} catch { + Write-Output "Failed to start WEC service: $_" +} + +$ConfigFilePath = "$env:USERPROFILE\Downloads\LME\Chapter 1 Files\lme_wec_config.xml" + +try { + Start-Process -FilePath "wecutil.exe" -ArgumentList "cs `"$ConfigFilePath`"" -Verb RunAs + Write-Output "wecutil command executed successfully." 
+} catch { + Write-Output "Failed to execute wecutil command: $_" +} + + diff --git a/testing/configure/winlogbeat_install.ps1 b/testing/configure/winlogbeat_install.ps1 new file mode 100644 index 00000000..7e78566c --- /dev/null +++ b/testing/configure/winlogbeat_install.ps1 @@ -0,0 +1,84 @@ +param ( + [Parameter()] + [string]$BaseDirectory = "C:\lme", + + [Parameter()] + [string]$WinlogbeatVersion = "winlogbeat-8.5.0-windows-x86_64" +) + +# Source and destination directories +$SourceDir = "$BaseDirectory\files_for_windows\tmp" +$DestinationDir = "C:\Program Files" + +# Copying files from source to destination +Copy-Item -Path "$SourceDir\*" -Destination $DestinationDir -Recurse -Force + +# Winlogbeat url +$Url = "https://artifacts.elastic.co/downloads/beats/winlogbeat/$WinlogbeatVersion.zip" + +# Destination path where the file will be saved +$WinlogbeatDestination = "$BaseDirectory\$WinlogbeatVersion.zip" + +# Unzip destination +$UnzipDestination = "C:\Program Files\lme\$WinlogbeatVersion" + +# Define the path of the winlogbeat.yml file in C:\Program Files\lme +$WinlogbeatYmlSource = "C:\Program Files\lme\winlogbeat.yml" + +# Define the destination path of the winlogbeat.yml file +$WinlogbeatYmlDestination = Join-Path -Path $UnzipDestination -ChildPath "winlogbeat.yml" + +# Define the full path of the install script +$InstallScriptPath = Join-Path -Path $UnzipDestination -ChildPath "install-service-winlogbeat.ps1" + +# Create the base directory if it does not exist +if (-not (Test-Path $BaseDirectory)) { + New-Item -ItemType Directory -Path $BaseDirectory +} + +# Download the file +Invoke-WebRequest -Uri $Url -OutFile $WinlogbeatDestination + +# Unzip the file +Expand-Archive -LiteralPath $WinlogbeatDestination -DestinationPath $UnzipDestination + +# Define the nested directory path +$nestedDir = Join-Path -Path $UnzipDestination -ChildPath $WinlogbeatVersion + +# Move the contents of the nested directory up one level and remove the nested directory +if (Test-Path $nestedDir) { + Get-ChildItem -Path $nestedDir -Recurse | Move-Item -Destination $UnzipDestination + Remove-Item -Path $nestedDir -Force -Recurse +} + +# Move the winlogbeat.yml file to the destination directory, overwriting if it exists +Move-Item -Path $WinlogbeatYmlSource -Destination $WinlogbeatYmlDestination -Force + +# Set execution policy to Unrestricted for this process +Set-ExecutionPolicy Unrestricted -Scope Process + +# Check if the install script exists +if (Test-Path $InstallScriptPath) { + # Change directory to the unzip destination + Push-Location -Path $UnzipDestination + + # Run the install script + .\install-service-winlogbeat.ps1 + + # Return to the previous directory + Pop-Location +} +else { + Write-Output "The installation script was not found at $InstallScriptPath" +} + +Start-Sleep -Seconds 5 + +# Start the winlogbeat service +try { + Start-Service -Name "winlogbeat" + Write-Output "Winlogbeat service started successfully." 
+}
+catch {
+    Write-Output "Failed to start Winlogbeat service: $_"
+}
diff --git a/testing/development/Dockerfile b/testing/development/Dockerfile
new file mode 100644
index 00000000..5a3dd5c3
--- /dev/null
+++ b/testing/development/Dockerfile
@@ -0,0 +1,62 @@
+# Use Ubuntu 22.04 as base image
+FROM ubuntu:22.04
+ARG USER_ID=1001
+ARG GROUP_ID=1001
+
+# Set environment variable to avoid interactive dialogues during build
+ENV DEBIAN_FRONTEND=noninteractive
+
+# Install necessary APT packages including Python and pip
+RUN apt-get update && apt-get install -y \
+    lsb-release \
+    python3 \
+    python3-venv \
+    python3-pip \
+    zip \
+    git \
+    curl \
+    wget \
+    sudo \
+    cron \
+    freerdp2-x11 \
+    pkg-config \
+    libcairo2-dev \
+    libdbus-1-dev \
+    distro-info \
+    libgirepository1.0-dev \
+    && wget -q "https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/packages-microsoft-prod.deb" \
+    && dpkg -i packages-microsoft-prod.deb \
+    && apt-get update \
+    && apt-get install -y powershell \
+    && rm -rf /var/lib/apt/lists/* \
+    && curl -sL https://aka.ms/InstallAzureCLIDeb | bash \
+    && wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \
+    && apt install -y ./google-chrome-stable_current_amd64.deb \
+    && rm -rf google-chrome-stable_current_amd64.deb \
+    && sudo apt-get install -f \
+    && apt-get clean
+
+
+
+# Create a user and group 'admin.ackbar' with GID 1001
+RUN groupadd -g $GROUP_ID admin.ackbar \
+    && useradd -m -u $USER_ID -g admin.ackbar --badnames admin.ackbar \
+    && usermod -aG sudo admin.ackbar
+
+# Allow 'admin.ackbar' user to run sudo commands without a password
+RUN echo "admin.ackbar ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers
+
+# Define the base directory as an environment variable
+ENV BASE_DIR=/home/admin.ackbar/LME
+
+# Set work directory
+WORKDIR $BASE_DIR
+
+# Change to non-root privilege
+# USER admin.ackbar
+
+# Set timezone (optional)
+ENV TZ=America/New_York
+
+# Keep the container running (This can be replaced by your application's main process)
+CMD ["tail", "-f", "/dev/null"]
\ No newline at end of file
diff --git a/testing/development/README.md b/testing/development/README.md
new file mode 100644
index 00000000..567c1c62
--- /dev/null
+++ b/testing/development/README.md
@@ -0,0 +1,162 @@
+# Development and pipeline files
+### Table of contents
+- [Merging version](#merging-version)
+- List of files
+    - [build_cluster.ps1](#build_clusterps1)
+    - [Dockerfile](#dockerfile)
+    - [destroy_cluster.ps1](#destroy_clusterps1)
+    - [build_docker_lme_install.sh](#build_docker_lme_installsh)
+    - [docker-compose.yml](#docker-composeyml)
+    - [install_lme.ps1](#install_lmeps1)
+    - [upgrade_lme.sh](#upgrade_lmesh)
+- [Workflows](#workflows)
+    - [Workflow Environment Vars](#workflow-environment-vars)
+    - [Capturing the responses of workflow steps](#capturing-the-responses-of-workflow-steps)
+- [Containers in VSCode](#containers-in-vscode)
+    - [.vscode directory](#vscode-directory)
+
+## Merging version
+In order to have the pipeline run the upgrade on the proper version,
+you will need to edit the `testing/merging_version.sh` file and set
+the version you are going to merge into; in other words, the version
+that your code will be released with. It is used in the script `upgrade_lme.sh`
+in the `upgrade.yml` workflow file.
+
+## List of files
+### build_cluster.ps1
+This is a PowerShell script that logs in to Azure with the az CLI (given that you have the right environment variables) and runs the SetupTestbed.ps1 script. It will require that you have
+account credentials from a managed identity to be able to run commands remotely.
+### Dockerfile
+This builds a container that is compatible with the version of Ubuntu we are using and includes the necessary apt packages and tools to run builds and tests.
+### destroy_cluster.ps1
+This file is used by the pipeline to take down the servers and assets created in Azure.
+### build_docker_lme_install.sh
+This script is used by the pipeline to install LME inside a container.
+### docker-compose.yml
+Creates two containers: one for development and running tests, and another to install LME onto.
+This docker compose file is used in both the local development environment and in the pipeline.
+You will want to create a .env file in the development directory that states the UID and GID of the user you want to run as in the container.
+This is vital to make sure you can read and write to all the files. If your host machine is running Linux, you can just cd to your home directory
+and run `ls -ln`, and it will show you the uid and gid that you are running as. This hasn't been tested with Windows as the container host, so you will
+need either a virtual machine running WSL, VirtualBox running Ubuntu, or a similar option. Since some of the later commands use Docker in Docker,
+you should start with an Ubuntu host with Docker installed.
+### install_lme.ps1
+This script is used by the pipeline to install LME on a remote cluster.
+### upgrade_lme.sh
+This script is used by the pipeline to check out a branch and run an upgrade inside of a running LME instance.
+
+
+## Workflows
+The pipeline for building LME consists of three workflows:
+one builds a fresh install (cluster.yml), one builds Linux only (linux_only.yml), and one builds an upgrade (upgrade.yml).
+The Linux-only version is built on the workflow runner machine in Docker.
+The other workflows are built on a cluster in Azure.
+
+All of the builds create a couple of Docker containers on the runner machine and then run commands
+in the containers. This allows you to run any of the commands from the pipeline in your local
+dev environment by bringing up the Docker containers locally.
+In the pipeline it is necessary to run the commands with a -p so that the containers don't step on each other.
+
+For example:
+``` bash
+docker compose -p ${{ env.UNIQUE_ID }} -f testing/development/docker-compose.yml build lme --no-cache
+```
+To run them locally, just remove the -p and id:
+
+``` bash
+docker compose -f testing/development/docker-compose.yml build lme --no-cache
+```
+This allows you to run your commands and debug them locally so you don't have to wait for a complete build of the pipeline.
+
+#### Workflow Environment Vars
+The workflows use many environment variables, and they get written to a `$GITHUB_ENV` file to be accessible from
+the various workflow steps. Some environment variables will be written to a password file or a `.env` file so that
+the various scripts or tests can access them.
+* Be very careful about what you write to these files to make sure that we are not exposing actual secrets, as this
+is a public repo
+
+#### Capturing the responses of workflow steps
+It is quite challenging to capture the responses of a command that was run using docker compose and then a script that
+may run another script on the cluster. The important thing is to test that if your command fails, it will propagate the
+errors up to the pipeline and stop the step.
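+For example, `upgrade_lme.sh` prints the sentinel string `UPGRADE_SUCCESSFUL` as its last line. Below is a minimal
+sketch of checking for a sentinel like that from the runner side (the variable names and log handling here are
+illustrative, not taken from the actual workflows):
+``` bash
+# Run the step inside the container and keep a copy of everything it prints
+output=$(docker compose -f testing/development/docker-compose.yml exec -T lme \
+  bash -c "./testing/development/upgrade_lme.sh" 2>&1 | tee /dev/stderr)
+
+# Fail the workflow step unless the sentinel string showed up in the output
+if ! grep -q "UPGRADE_SUCCESSFUL" <<< "$output"; then
+  echo "Upgrade did not report success" >&2
+  exit 1
+fi
+```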
+So when building a step, make sure to check it for failure or success.
+In the different steps in the different files, there are various permutations of ways to do this.
+The most reliable approach seems to be to output a unique string at the end of your script and check for that upon
+completion of the docker compose command, as sketched above.
+
+
+## Containers in VSCode
+In VSCode you can actually run inside of the containers. There is some documentation about how to do this in the
+`testing/tests/README.md` file.
+We are providing a setup that you can put under your `.vscode` directory that will help expedite setting up the
+containers from the root directory of the repo. The documentation in the `testing/tests/README.md` file is specifically
+for running VSCode environments that mount those test directories. This setup will mount the root directory of the
+repo, which is more useful during normal development.
+
+### .vscode directory
+You can create these files in the .vscode directory in the root of your repo and put the contents in them. `.vscode` is in the gitignore file so you should be OK. Best not to check these in.
+* launch.json
+```
+{
+    "version": "0.2.0",
+    "configurations": [
+        {
+            "name": "Python Debugger: Run API Tests",
+            "type": "debugpy",
+            "request": "launch",
+            "module": "pytest",
+            "args": [
+                "${workspaceFolder}/testing/tests/api_tests"
+            ],
+            "console": "integratedTerminal",
+            "justMyCode": false,
+            "cwd": "${workspaceFolder}/testing/tests",
+            "envFile": "${workspaceFolder}/testing/tests/.env"
+        },
+        {
+            "name": "Python Debugger: Run Selenium linux only Tests",
+            "type": "debugpy",
+            "request": "launch",
+            "module": "pytest",
+            "args": [
+                "${workspaceFolder}/testing/tests/selenium_tests/linux_only"
+            ],
+            "console": "integratedTerminal",
+            "justMyCode": false,
+            "cwd": "${workspaceFolder}/testing/tests",
+            "envFile": "${workspaceFolder}/testing/tests/.env"
+        },
+        {
+            "name": "Python Debugger: Run Selenium Tests",
+            "type": "debugpy",
+            "request": "launch",
+            "program": "${workspaceFolder}/testing/tests/selenium_tests.py",
+            "args": [
+                "--domain", "lme"
+            ],
+            "console": "integratedTerminal",
+            "justMyCode": false,
+            "cwd": "${workspaceFolder}/testing/tests",
+            "envFile": "${workspaceFolder}/testing/tests/.env"
+        }
+    ]
+}
+```
+
+* settings.json
+
+```
+{
+    "python.testing.cwd": "${workspaceFolder}/testing/tests",
+    "python.testing.unittestEnabled": false,
+    "python.testing.nosetestsEnabled": false,
+    "python.testing.pytestEnabled": true,
+    "yaml.schemas": {
+        "https://json.schemastore.org/github-workflow.json": ".github/workflows/*.yml"
+    },
+    "workbench.colorCustomizations": {
+        "tab.activeBackground": "#49215a"
+    },
+    "python.defaultInterpreterPath": "${workspaceFolder}/testing/tests/venv/bin/python",
+    "terminal.integrated.defaultProfile.linux": "bash"
+}
+
+```
\ No newline at end of file
diff --git a/testing/development/build_cluster.ps1 b/testing/development/build_cluster.ps1
new file mode 100644
index 00000000..1236d535
--- /dev/null
+++ b/testing/development/build_cluster.ps1
@@ -0,0 +1,18 @@
+param (
+    [Parameter(Mandatory=$true)]
+    [string]$IPAddress
+)
+
+$ErrorActionPreference = 'Stop'
+
+# Log in using Azure CLI
+az login --service-principal -u $env:AZURE_CLIENT_ID -p $env:AZURE_SECRET --tenant $env:AZURE_TENANT
+
+# Construct the path to the target directory relative to the script's location
+$targetDirectory = Join-Path -Path $PSScriptRoot -ChildPath "..\\"
+
+# Change to the target directory
+Set-Location -Path $targetDirectory
+
+# Execute the SetupTestbed.ps1 script
with parameters +.\SetupTestbed.ps1 -AllowedSources "$IPAddress/32" -l centralus -ResourceGroup $env:RESOURCE_GROUP -y | Tee-Object -FilePath "./$env:RESOURCE_GROUP.cluster.output.log" \ No newline at end of file diff --git a/testing/development/build_docker_lme_install.sh b/testing/development/build_docker_lme_install.sh new file mode 100755 index 00000000..cafd1fc2 --- /dev/null +++ b/testing/development/build_docker_lme_install.sh @@ -0,0 +1,46 @@ +#!/usr/bin/env bash + +# Parse command line arguments +while getopts ":b:v:" opt; do + case $opt in + b) + if [ -n "$version" ]; then + echo "Cannot use both -b and -v options simultaneously" >&2 + exit 1 + fi + branch="$OPTARG" + ;; + v) + if [ -n "$branch" ]; then + echo "Cannot use both -b and -v options simultaneously" >&2 + exit 1 + fi + version="$OPTARG" + ;; + \?) echo "Invalid option -$OPTARG" >&2; exit 1;; + esac +done + +cd testing/configure || exit + +sudo ./linux_update_system.sh + +# Pass the branch or version argument to linux_install_lme.sh +if [ -n "$branch" ]; then + sudo ./linux_install_lme.sh -b "$branch" +elif [ -n "$version" ]; then + sudo ./linux_install_lme.sh -v "$version" +else + sudo ./linux_install_lme.sh +fi + +. lib/functions.sh +extract_credentials +echo $elastic + +cd ../tests/ || exit + +python3 -m venv /home/admin.ackbar/venv_test +. /home/admin.ackbar/venv_test/bin/activate +pip install -r requirements.txt +sudo chown admin.ackbar:admin.ackbar /home/admin.ackbar/venv_test -R \ No newline at end of file diff --git a/testing/development/destroy_cluster.ps1 b/testing/development/destroy_cluster.ps1 new file mode 100644 index 00000000..ee5896b9 --- /dev/null +++ b/testing/development/destroy_cluster.ps1 @@ -0,0 +1,18 @@ +$ErrorActionPreference = 'Stop' + +# Check if the RESOURCE_GROUP environment variable has a value +if ([string]::IsNullOrWhiteSpace($env:RESOURCE_GROUP)) { + Write-Error "RESOURCE_GROUP environment variable is not set." + exit 1 +} + +# Check if the resource group exists +$resourceGroupExists = az group exists --name "$env:RESOURCE_GROUP" + +if ($resourceGroupExists -eq 'true') { + # Delete the resource group if it exists + az group delete --name "$env:RESOURCE_GROUP" --yes --no-wait + Write-Host "Deletion of resource group $($env:RESOURCE_GROUP) initiated." +} else { + Write-Host "Resource group $($env:RESOURCE_GROUP) does not exist. No action taken." +} diff --git a/testing/development/docker-compose.yml b/testing/development/docker-compose.yml new file mode 100644 index 00000000..a4a462d1 --- /dev/null +++ b/testing/development/docker-compose.yml @@ -0,0 +1,56 @@ +# Docker Compose file for setting up development environment for LME project. +# +# This file defines two services: +# 1. ubuntu: +# - Builds an Ubuntu container with the specified USER_ID and GROUP_ID arguments. +# - Mounts the parent directory to /lme in the container, allowing access to the LME project. +# - Sets the container name to "lme_development". +# - Sets the user to the specified HOST_UID and HOST_GID. +# - Runs the command "sleep infinity" to keep the container running indefinitely. +# +# 2. lme: +# - Builds a container using the Dockerfile located in ../../ directory. +# - Uses the specified USER_ID and GROUP_ID arguments. +# - Sets the container name to "lme". +# - Sets the user to the specified HOST_UID and HOST_GID. +# - Mounts the parent directory to /home/admin.ackbar/LME in the container, allowing access to the LME project. +# - Runs the command "sleep infinity" to keep the container running indefinitely. 
+# - Exposes the following ports: 443, 9200, 9300, 5000, 9600, 5601.
+#
+version: '3.8'
+
+services:
+  ubuntu:
+    build:
+      context: .
+      args:
+        USER_ID: "${HOST_UID:-1001}"
+        GROUP_ID: "${HOST_GID:-1001}"
+    container_name: lme_development
+    user: "${HOST_UID:-1001}:${HOST_GID:-1001}"
+    volumes:
+      - ../../../LME/:/lme
+    command: sleep infinity
+
+  lme:
+    build:
+      context: ../../
+      dockerfile: testing/development/Dockerfile
+      args:
+        USER_ID: "${HOST_UID:-1001}"
+        GROUP_ID: "${HOST_GID:-1001}"
+    # semgrep: allowlist
+    # semgrep: yaml.docker-compose.security.privileged-service.privileged-service
+    privileged: true
+    container_name: lme
+    user: "${HOST_UID:-1001}:${HOST_GID:-1001}"
+    volumes:
+      - ../../:/home/admin.ackbar/LME
+    command: sleep infinity
+    ports:
+      - "443:443"
+      - "9200:9200"
+      - "9300:9300"
+      - "5000:5000"
+      - "9600:9600"
+      - "5601:5601"
\ No newline at end of file
diff --git a/testing/development/install_lme.ps1 b/testing/development/install_lme.ps1
new file mode 100644
index 00000000..fda6e280
--- /dev/null
+++ b/testing/development/install_lme.ps1
@@ -0,0 +1,40 @@
+param(
+    [switch]$m,
+    [string]$v,
+    [string]$b
+)
+
+$ErrorActionPreference = 'Stop'
+
+# Check if -v and -b are mutually exclusive
+if ($v -and $b) {
+    Write-Error "Error: -v and -b are mutually exclusive. Please provide only one of them."
+    exit 1
+}
+
+# Log in using Azure CLI
+az login --service-principal -u $env:AZURE_CLIENT_ID -p $env:AZURE_SECRET --tenant $env:AZURE_TENANT
+
+# Construct the path to the target directory relative to the script's location
+$targetDirectory = Join-Path -Path $PSScriptRoot -ChildPath "..\\"
+
+# Change to the target directory
+Set-Location -Path $targetDirectory
+
+# Prepare the parameters for InstallTestbed.ps1
+$installTestbedParams = ""
+if ($v) {
+    $installTestbedParams += " -v $v "
+}
+if ($b) {
+    $installTestbedParams += " -b $b "
+}
+if ($m) {
+    $installTestbedParams += " -m "
+}
+
+# Prepare the command string
+$command = ".\InstallTestbed.ps1 -ResourceGroup $env:RESOURCE_GROUP $installTestbedParams | Tee-Object -FilePath ./$env:RESOURCE_GROUP.output.log"
+
+# Execute the command
+Invoke-Expression $command
diff --git a/testing/development/upgrade_lme.sh b/testing/development/upgrade_lme.sh
new file mode 100755
index 00000000..80bbcd22
--- /dev/null
+++ b/testing/development/upgrade_lme.sh
@@ -0,0 +1,30 @@
+#!/usr/bin/env bash
+
+set -e
+
+# Find out where I am
+script_path=$(readlink -f "$0")
+script_dir=$(dirname "$script_path")
+# Move up to the testing directory
+echo "Changing directory to $script_dir/../"
+cd "$script_dir/../" || exit 1
+
+git config --global --add safe.directory /home/admin.ackbar/LME
+git config --global --add safe.directory /opt/lme
+
+# Get the branch I am working on
+echo "Checking current branch"
+export current_branch=$(git rev-parse --abbrev-ref HEAD)
+
+
+# Get the version that we are going to upgrade to
+. 
./merging_version.sh + +# Checkout the version we are on +sudo echo "Current branch: $current_branch" +sudo echo "Forcing version: $FORCE_LATEST_VERSION" +sudo sh -c "cd '/opt/lme/' && git checkout 'Chapter\ 3\ Files/deploy.sh' && git checkout -t origin/$current_branch && git pull" +echo "Running the upgrade" +sudo sh -c "export TERM=dumb; export FORCE_LATEST_VERSION=$FORCE_LATEST_VERSION; cd '/opt/lme/Chapter 3 Files' && ./deploy.sh upgrade" + +echo "UPGRADE_SUCCESSFUL" \ No newline at end of file diff --git a/testing/merging_version.sh b/testing/merging_version.sh new file mode 100644 index 00000000..c02ca4a4 --- /dev/null +++ b/testing/merging_version.sh @@ -0,0 +1,2 @@ +# TODO: Change this to the latest version you are going to merge into +export FORCE_LATEST_VERSION=1.3.3 \ No newline at end of file diff --git a/testing/project_management/Dockerfile b/testing/project_management/Dockerfile new file mode 100644 index 00000000..a595d49c --- /dev/null +++ b/testing/project_management/Dockerfile @@ -0,0 +1,20 @@ +FROM python:3.9-slim-buster + +#WORKDIR /lme + +# Install the necessary dependencies +RUN apt-get update && apt-get install -y \ + git \ + bash + +# This ends up just being at the root of the file system + +# Clone the github-projects-burndown-chart repository +RUN git clone https://github.com/cisagov/github-projects-burndown-chart && \ + cd github-projects-burndown-chart && \ + pip install --no-cache-dir -r requirements.txt && \ + cp src/github_projects_burndown_chart/config/secrets.json.dist src/github_projects_burndown_chart/config/secrets.json && \ + cp src/github_projects_burndown_chart/config/config.json.dist src/github_projects_burndown_chart/config/config.json + + +CMD ["sleep", "infinity"] \ No newline at end of file diff --git a/testing/project_management/docker-compose.yml b/testing/project_management/docker-compose.yml new file mode 100644 index 00000000..89c40439 --- /dev/null +++ b/testing/project_management/docker-compose.yml @@ -0,0 +1,10 @@ +version: '3' +services: + burndown: + build: + context: . + dockerfile: Dockerfile + environment: + - BURNDOWN_TOKEN=${BURNDOWN_TOKEN} + volumes: + - ../../../LME/:/lme \ No newline at end of file diff --git a/testing/project_management/setup_config.sh b/testing/project_management/setup_config.sh new file mode 100755 index 00000000..01da9d65 --- /dev/null +++ b/testing/project_management/setup_config.sh @@ -0,0 +1,71 @@ +#!/usr/bin/env bash + +# Parse named arguments +while getopts ":s:e:f:v:" opt; do + case $opt in + s) start_date="$OPTARG";; + e) end_date="$OPTARG";; + f) file_path="$OPTARG";; + v) view="$OPTARG";; + \?) echo "Invalid option -$OPTARG" >&2; exit 1;; + esac +done + +# Validate start_date and end_date +if [ -z "$start_date" ]; then + echo "Start date is required. Use -s option to specify the start date." + exit 1 +fi + +if [ -z "$end_date" ]; then + echo "End date is required. Use -e option to specify the end date." + exit 1 +fi + +# Validate date format +date_regex="^[0-9]{4}-[0-9]{2}-[0-9]{2}$" + +if ! [[ $start_date =~ $date_regex ]]; then + echo "Invalid start date format. Please use the format YYYY-mm-dd." + exit 1 +fi + +if ! [[ $end_date =~ $date_regex ]]; then + echo "Invalid end date format. Please use the format YYYY-mm-dd." 
+    exit 1
+fi
+
+# Set default file path if not provided
+if [ -z "$file_path" ]; then
+    file_path="/github-projects-burndown-chart/src/github_projects_burndown_chart/config/config.json"
+fi
+
+# Set default view if not provided
+if [ -z "$view" ]; then
+    view=1
+fi
+
+# Create the directory if it doesn't exist
+mkdir -p "$(dirname "$file_path")"
+
+# Generate the JSON content with the provided start_date, end_date, and view
+echo '{
+    "organization": {
+        "LME": {
+            "query_variables": {
+                "organization_name": "cisagov",
+                "project_number": 68,
+                "column_count": 7,
+                "max_cards_per_column_count": 100,
+                "labels_per_issue_count": 5,
+                "view_number": '"$view"'
+            },
+            "settings": {
+                "sprint_start_date": "'"$start_date"'",
+                "sprint_end_date": "'"$end_date"'",
+                "points_label": "Points: ",
+                "version": 2
+            }
+        }
+    }
+}' > "$file_path"
diff --git a/testing/tests/.env_example b/testing/tests/.env_example
new file mode 100644
index 00000000..65efa408
--- /dev/null
+++ b/testing/tests/.env_example
@@ -0,0 +1,19 @@
+# Comes from an install using InstallTestbed.ps1; many tests use it
+export elastic='yourelasticpassword'
+
+# For api tests that connect directly to elasticsearch
+export ES_HOST="lme" # When running in docker and connecting from dev container
+# export ES_HOST=xx.xx.xx.xxx # When you have a cluster installed in Azure
+
+# Selenium tests folder. Connects to kibana
+export KIBANA_HOST=lme # When running in docker and connecting from dev container
+# export KIBANA_HOST=localhost # When running the tests inside of the lme container
+# export KIBANA_HOST=xx.xx.xx.xxx # When you have a cluster installed in Azure
+export KIBANA_PORT=443
+export KIBANA_USER=elastic
+export SELENIUM_TIMEOUT=60
+# debug, detached, headless
+export SELENIUM_MODE=headless
+
+# selenium_tests.py
+export ELASTIC_PASSWORD='yourelasticpassword'
\ No newline at end of file
diff --git a/testing/tests/.vscode/launch.json b/testing/tests/.vscode/launch.json
new file mode 100644
index 00000000..9303ea46
--- /dev/null
+++ b/testing/tests/.vscode/launch.json
@@ -0,0 +1,16 @@
+{
+    "version": "0.2.0",
+    "configurations": [
+        {
+            "name": "Python Debugger: Run Tests",
+            "type": "debugpy",
+            "request": "launch",
+            "module": "pytest",
+            "args": [
+                "${workspaceFolder}/api_tests" // Path to your tests
+            ],
+            "console": "integratedTerminal",
+            "justMyCode": false // Set this to false to allow debugging into external libraries
+        }
+    ]
+}
diff --git a/testing/tests/.vscode/settings.json b/testing/tests/.vscode/settings.json
new file mode 100644
index 00000000..cb5d60c4
--- /dev/null
+++ b/testing/tests/.vscode/settings.json
@@ -0,0 +1,7 @@
+{
+    "python.testing.pytestArgs": [
+        "api_tests"
+    ],
+    "python.testing.unittestEnabled": false,
+    "python.testing.pytestEnabled": true
+}
\ No newline at end of file
diff --git a/testing/tests/Dockerfile b/testing/tests/Dockerfile
new file mode 100644
index 00000000..6f261c0c
--- /dev/null
+++ b/testing/tests/Dockerfile
@@ -0,0 +1,22 @@
+# Use Ubuntu 22.04 as base image
+FROM ubuntu:22.04
+
+# Set environment variable to avoid interactive dialogues during build
+ENV DEBIAN_FRONTEND=noninteractive
+
+# Install necessary APT packages including Python and pip
+RUN apt-get update && apt-get install -y \
+    python3 \
+    python3-venv \
+    python3-pip \
+    zip \
+    && rm -rf /var/lib/apt/lists/*
+
+# Set work directory
+WORKDIR /app
+
+# Set timezone (optional)
+ENV TZ=America/New_York
+
+# Keep the container running (This can be replaced by your application's main process)
+CMD ["tail", "-f", "/dev/null"]
diff --git a/testing/tests/README.md b/testing/tests/README.md
new file mode 100644
index 00000000..7a60cc95
--- /dev/null
+++ b/testing/tests/README.md
@@ -0,0 +1,265 @@
+# Docker and VSCode Setup
+### Table of Contents
+
+1. [Introduction](#introduction)
+2. [Dev Containers](#dev-containers)
+3. [Building Docker Containers](#building-the-docker-containers-to-use-your-local-username)
+    - [Options](#options)
+      - Python Development Option
+      - Python Tests Option
+    - [Running Tests in the Development Container](#running-tests-in-the-development-container-option)
+4. [VSCode Extensions](#vscode-extensions)
+5. [Environment Variables Setup](#environment-variables-setup)
+6. [Python Virtual Environment Setup](#python-virtual-environment-setup)
+7. [Running the Tests from the Command Line](#running-the-tests-from-the-command-line)
+8. [Generating Test HTML Reports](#generating-test-html-reports)
+
+
+## Introduction
+This environment is set up to run on a computer with Docker installed, using Visual Studio Code (VSCode).
+
+## Dev Containers
+On your host machine, you will want to install the Dev Containers extension in VSCode. With Docker installed on your host machine, you should be able to reopen this repository in a container and select different environment options. To open the repository in a container, press the blue connect button at the far bottom left of the VSCode window. This will prompt you with options to open in the different environments.
+
+## Building the docker containers to use your local username
+The docker-compose file in the development container is set to use the `.env` file in the `/testing/development` folder.
+
+If you don't have a .env file, it will use the userid 1001 by default.
+Check what your userid is on your host machine by running
+```bash
+ls -lna ~
+```
+This will tell you the user id and group id on the host machine. Look at which ids the files are owned by.
+```bash
+drwxr-x--- 1 1000 1000 4096 Mar 1 13:04 .
+drwxr-xr-x 1 0 0 4096 Mar 1 12:44 ..
+-rw------- 1 1000 1000 21 Mar 1 13:04 .bash_history
+-rw-r--r-- 1 1000 1000 220 Jan 6 2022 .bash_logout
+-rw-r--r-- 1 1000 1000 3771 Jan 6 2022 .bashrc
+drwxr-xr-x 3 1000 1000 4096 Mar 1 13:04 .dotnet
+-rw-r--r-- 1 1000 1000 292 Mar 1 13:04 .gitconfig
+drwx------ 2 1000 1000 4096 Mar 1 13:04 .gnupg
+-rw-r--r-- 1 1000 1000 807 Jan 6 2022 .profile
+drwxr-xr-x 2 1000 1000 4096 Mar 1 13:04 .ssh
+drwxr-xr-x 6 1000 1000 4096 Mar 1 13:04 .vscode-server
+drwxr-xr-x 2 0 0 4096 Mar 1 12:44 LME
+```
+In this case you can see that files like `.bash_history` are owned by `1000 1000`.
+The first number is your user id and the second is your group id.
+So in the `testing/development` folder make a new file named `.env` and put this in it:
+```bash
+HOST_UID=1000
+HOST_GID=1000
+```
+Now you will need to build the containers for the first time. Subsequent builds, and `docker compose up`, will
+use the prebuilt containers and keep the correct user id in the container.
+```bash
+cd testing/development
+docker compose build --no-cache
+```
+You can follow the rest of the directions on this page; just make sure that when you get into the container, you open a new bash shell and run `ls -la`. The files should be owned by `admin.ackbar`.
+
+
+### Options
+- **Python Development Option**: This option is for development of the entire codebase and
+is not set up for debugging and running tests easily.
If you want to run tests and debug +in this environment, you can manually set it up by making a `launch.json` and a +`settings.json` in the root of the repo under `.vscode`. +You can copy the versions in the `testing/tests/.vscode` folder, as a starting point. +- **Python Tests Option**: This option is for opening only the test environment. You will want to open this one for running your tests as it already has quite a bit of setup for getting the tests to run easily. + +Using Docker helps to avoid polluting your host environment with multiple versions of Python. + +### Running tests in the Development Container Option +When you select the Python Tests option to run your container in, there are already +config files for running tests in VSCode so you won't have to set this part up. + +If you want to run tests within the +Python Development environment option, you will have to make a `.vscode/launch.json` in the root +of your environment. This folder isn't checked into the repo so it has to be manually +created. +The easy way to create this file is to click on the play button (triangle) with the little bug on it in your +VSCode activity bar. There will be a link there to "create a launch.json file". Click on that link and select +"Python Debugger"->"Python File". This will create a file and open it. Replace its contents with the below +code to run the `api_tests` in `testing/tests/api_tests`. +After that, the Run and Debug interface will change and have a green arrow in it for running and testing code. + +``` +{ + "version": "0.2.0", + "configurations": [ + { + "name": "Python Debugger: Run API Tests", + "type": "debugpy", + "request": "launch", + "module": "pytest", + "args": [ + "${workspaceFolder}/testing/tests/api_tests" + ], + "console": "integratedTerminal", + "justMyCode": false, + "cwd": "${workspaceFolder}/testing/tests", + "envFile": "${workspaceFolder}/testing/tests/.env" + }, + { + "name": "Python Debugger: Run Selenium linux only Tests", + "type": "debugpy", + "request": "launch", + "module": "pytest", + "args": [ + "${workspaceFolder}/testing/tests/selenium_tests/linux_only" + ], + "console": "integratedTerminal", + "justMyCode": false, + "cwd": "${workspaceFolder}/testing/tests", + "envFile": "${workspaceFolder}/testing/tests/.env" + }, + { + "name": "Python Debugger: Run Selenium Tests", + "type": "debugpy", + "request": "launch", + "program": "${workspaceFolder}/testing/tests/selenium_tests.py", + "args": [ + "--domain", "172.19.0.3" + ], + "console": "integratedTerminal", + "justMyCode": false, + "cwd": "${workspaceFolder}/testing/tests", + "envFile": "${workspaceFolder}/testing/tests/.env", + } + ] + } +``` +If you want to get the test explorer (beaker icon) to be able to find your tests, you can add +this to your `.vscode/settings.json`, so it knows to look in the `/testing/tests` folder. +``` +"python.testing.pytestArgs": [ + "testing/tests" +], +"python.testing.unittestEnabled": false, +"python.testing.nosetestsEnabled": false, +"python.testing.pytestEnabled": true +``` + +## VSCode Extensions +The necessary VSCode extensions have been installed, in the Python Tests container, for +running and debugging tests within VSCode. The first time you open the project in a +container, it may take a little time for VSCode to install the necessary extensions. + +## Environment Variables Setup +- There is an example `.env_example` file for setting environment variables for the tests. +- To use it, copy this file and rename it to `.env`. 
+- The testing environment will then pick up those variables and set them as environment
+variables before running tests.
+
+## Python Virtual Environment Setup
+In order for VSCode to use the Python modules for the tests, you will want to install a
+Python virtual environment for it to use. You can make a Python virtual environment
+folder that is available to both of the development containers by making it in the
+`testing/tests` folder. Then you only need one copy of the environment for both
+container options.
+You can do this by opening a new terminal in VSCode, within the `testing/tests`
+directory, and running:
+
+
+`python3 -m venv venv`
+
+This will make a virtual environment for Python to install its modules into.
+Once you have made the virtual environment, you then run:
+
+`. venv/bin/activate`
+
+which will activate the virtual environment for you.
+It will show this in the terminal prompt by prefacing your prompt with `(venv) restofprompt#`.
+
+Once you have activated the virtual environment, run the installer for the pip modules:
+
+`pip install -r requirements.txt`
+
+You can now select this environment in VSCode. To do this, open a Python file from
+within the project explorer. Once the file is open in the editor, VSCode will show
+you which Python version you are running in the bottom right of the screen. If you
+click that version, you can select the venv version that you installed above.
+The path should be `./testing/tests/venv/bin/python`
+
+
+## Running the tests from the command line
+Set up the virtual environment, activate it, and install the modules. Then you can run the tests with pytest:
+
+```
+cd testing/tests
+python3 -m venv venv
+. venv/bin/activate
+pip install -r requirements.txt
+pytest
+```
+
+## Generating Test HTML Reports
+After the tests have been executed, run the following command to generate an HTML report to view the test results.
+
+```
+pytest --html=report.html
+```
+
+Note: pytest-html has been added to requirements.txt. If for any reason pytest-html is not installed in your virtual environment, you may first need to install it with the following command.
+
+```
+pip install pytest-html
+```
+
+After the HTML report is generated, run the following command outside the virtual environment to set appropriate ownership on the HTML file so that you can open it with the browser of your choice. Google Chrome seems to provide a better display than Firefox.
+
+```
+chown 1000.1000 report.html
+```
+
+When a test fails, the test result details in the report provide the same information on the error message as you would expect to see on the console.
+
+
+## Development and Docker
+
+Using Visual Studio Code you can open this project in a container so you can develop in an environment that is just like the one the pipeline runs in.
+In order to do so, you will need to create a directory at the root of the repo and put a devcontainer config file inside of it.
+```bash
+mkdir -p .devcontainer/python_development
+touch .devcontainer/python_development/devcontainer.json
+```
+
+Once you have set up this configuration you can add this to `devcontainer.json`:
+```json
+{
+    "name": "Python Development",
+    "dockerComposeFile": [
+        "../../testing/development/docker-compose.yml"
+    ],
+    "service": "ubuntu",
+    "shutdownAction": "none",
+    "workspaceFolder": "/lme",
+    "customizations": {
+        "vscode": {
+            "extensions": [
+                "ms-python.python",
+                "littlefoxteam.vscode-python-test-adapter",
+                "ms-python.black-formatter"
+            ]
+        }
+    },
+    "remoteUser": "admin.ackbar"
+}
+```
+
+Now you can press the blue button at the far bottom left of the VSCode editor and select "Reopen in container", choosing the "Python Development" option.
+
+In this container, you can reach an LME install (on the host's Docker) by connecting to `lme`; `lme` resolves to the other container, which
+you can run an LME install on.
+You can see how to do an LME install on that container by looking at the `linux_only.yml` pipeline in the `.github/workflows` directory.
+
+At the time of this writing, you can run this on your host's system (not in the dev container):
+```bash
+cd LME/testing/development/
+docker compose exec -T lme bash -c "./testing/development/build_docker_lme_install.sh -b your-branch-name-with-no-quotes"
+# Make sure your branch is pushed up to github before running this.
+```
+
+Once you do that, you can now reach that install from within your dev containers by using the hostname `lme`.
+
diff --git a/backups/.gitkeep b/testing/tests/api_tests/__init__.py
similarity index 100%
rename from backups/.gitkeep
rename to testing/tests/api_tests/__init__.py
diff --git a/testing/tests/api_tests/data_insertion_tests/__init__.py b/testing/tests/api_tests/data_insertion_tests/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/testing/tests/api_tests/data_insertion_tests/conftest.py b/testing/tests/api_tests/data_insertion_tests/conftest.py
new file mode 100644
index 00000000..65998f93
--- /dev/null
+++ b/testing/tests/api_tests/data_insertion_tests/conftest.py
@@ -0,0 +1,37 @@
+# conftest.py
+
+import os
+import warnings
+import pytest
+import urllib3
+
+# Disable SSL warnings
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+
+
+@pytest.fixture(autouse=True)
+def suppress_insecure_request_warning():
+    warnings.simplefilter("ignore", urllib3.exceptions.InsecureRequestWarning)
+
+
+@pytest.fixture
+def es_host():
+    return os.getenv("ES_HOST", os.getenv("ELASTIC_HOST", "localhost"))
+
+
+@pytest.fixture
+def es_port():
+    return os.getenv("ES_PORT", os.getenv("ELASTIC_PORT", "9200"))
+
+
+@pytest.fixture
+def username():
+    return os.getenv("ES_USERNAME", os.getenv("ELASTIC_USERNAME", "elastic"))
+
+
+@pytest.fixture
+def password():
+    return os.getenv(
+        "elastic",
+        os.getenv("ES_PASSWORD", os.getenv("ELASTIC_PASSWORD", "default_password")),
+    )
diff --git a/testing/tests/api_tests/data_insertion_tests/fixtures/hosts.json b/testing/tests/api_tests/data_insertion_tests/fixtures/hosts.json
new file mode 100644
index 00000000..e3a58c0d
--- /dev/null
+++ b/testing/tests/api_tests/data_insertion_tests/fixtures/hosts.json
@@ -0,0 +1,29 @@
+{
+    "winlog": {
+        "computer_name": "testing.lme.local",
+        "event_id": "4625",
+        "task": "Logon",
+        "keywords": [
+            "Audit Failure"
+        ],
+        "provider_name": "Microsoft-Windows-Security-Auditing",
+        "event_data": {
+            "LogonType": "3",
+            "IpAddress": "194.165.16.73",
+            "TargetUserName": "Administrator",
+            "TargetDomainName":
"testserver.LME.LOCAL", + "LogonProcessName": "NtLmSsp ", + "AuthenticationPackageName": "NTLM" + } + }, + "@timestamp": "2024-05-08T08:40:18.252Z", + "host": { + "name": "testing.lme.local" + }, + "event": { + "code": "4625", + "provider": "Microsoft-Windows-Security-Auditing", + "action": "Logon", + "outcome": "failure" + } + } \ No newline at end of file diff --git a/testing/tests/api_tests/data_insertion_tests/fixtures/logonevents.json b/testing/tests/api_tests/data_insertion_tests/fixtures/logonevents.json new file mode 100644 index 00000000..400a0439 --- /dev/null +++ b/testing/tests/api_tests/data_insertion_tests/fixtures/logonevents.json @@ -0,0 +1,38 @@ +{ + "winlog": { + "computer_name": "C2.lme.local", + "keywords": [ + "Audit Failure" + ], + "user": { + "name": "APItestuserid", + "domain": "" + }, + "event_data": { + "LogonType": "2", + "SubjectUserName": "-", + "FailureReason": "%%2313", + "SubjectDomainName": "-", + "IpAddress": "194.169.175.22", + "TargetUserName": "solidart", + "LogonProcessName": "NtLmSsp ", + "SubjectUserSid": "S-1-0-0", + "TargetUserSid": "S-1-0-0", + "AuthenticationPackageName": "NTLM" + }, + "@timestamp": "2024-06-12T09:50:18.252Z", + "host": { + "name": "C2.lme.local" + } + }, + "event": { + "code": "4624", + "provider": "Microsoft-Windows-Security-Auditing", + "action": "Logon", + "outcome": "failure" + }, + "user": { + "name": "APItestuserid", + "domain": "test" + } + } \ No newline at end of file diff --git a/testing/tests/api_tests/data_insertion_tests/queries/filter_hosts.json b/testing/tests/api_tests/data_insertion_tests/queries/filter_hosts.json new file mode 100644 index 00000000..ad00cb9c --- /dev/null +++ b/testing/tests/api_tests/data_insertion_tests/queries/filter_hosts.json @@ -0,0 +1,287 @@ +{ + "aggs": { + "2": { + "terms": { + "field": "host.name", + "order": { + "_count": "desc" + }, + "size": 25 + } + } + }, + "size": 0, + "fields": [ + { + "field": "@timestamp", + "format": "date_time" + }, + { + "field": "code_signature.timestamp", + "format": "date_time" + }, + { + "field": "dll.code_signature.timestamp", + "format": "date_time" + }, + { + "field": "elf.creation_date", + "format": "date_time" + }, + { + "field": "event.created", + "format": "date_time" + }, + { + "field": "event.end", + "format": "date_time" + }, + { + "field": "event.ingested", + "format": "date_time" + }, + { + "field": "event.start", + "format": "date_time" + }, + { + "field": "file.accessed", + "format": "date_time" + }, + { + "field": "file.code_signature.timestamp", + "format": "date_time" + }, + { + "field": "file.created", + "format": "date_time" + }, + { + "field": "file.ctime", + "format": "date_time" + }, + { + "field": "file.elf.creation_date", + "format": "date_time" + }, + { + "field": "file.mtime", + "format": "date_time" + }, + { + "field": "file.x509.not_after", + "format": "date_time" + }, + { + "field": "file.x509.not_before", + "format": "date_time" + }, + { + "field": "package.installed", + "format": "date_time" + }, + { + "field": "process.code_signature.timestamp", + "format": "date_time" + }, + { + "field": "process.elf.creation_date", + "format": "date_time" + }, + { + "field": "process.end", + "format": "date_time" + }, + { + "field": "process.parent.code_signature.timestamp", + "format": "date_time" + }, + { + "field": "process.parent.elf.creation_date", + "format": "date_time" + }, + { + "field": "process.parent.end", + "format": "date_time" + }, + { + "field": "process.parent.start", + "format": "date_time" + }, + { + "field": 
"process.start", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.file.accessed", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.file.code_signature.timestamp", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.file.created", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.file.ctime", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.file.elf.creation_date", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.file.mtime", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.first_seen", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.last_seen", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.modified_at", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.x509.not_after", + "format": "date_time" + }, + { + "field": "threat.enrichments.indicator.x509.not_before", + "format": "date_time" + }, + { + "field": "threat.indicator.file.accessed", + "format": "date_time" + }, + { + "field": "threat.indicator.file.code_signature.timestamp", + "format": "date_time" + }, + { + "field": "threat.indicator.file.created", + "format": "date_time" + }, + { + "field": "threat.indicator.file.ctime", + "format": "date_time" + }, + { + "field": "threat.indicator.file.elf.creation_date", + "format": "date_time" + }, + { + "field": "threat.indicator.file.mtime", + "format": "date_time" + }, + { + "field": "threat.indicator.first_seen", + "format": "date_time" + }, + { + "field": "threat.indicator.last_seen", + "format": "date_time" + }, + { + "field": "threat.indicator.modified_at", + "format": "date_time" + }, + { + "field": "threat.indicator.x509.not_after", + "format": "date_time" + }, + { + "field": "threat.indicator.x509.not_before", + "format": "date_time" + }, + { + "field": "tls.client.not_after", + "format": "date_time" + }, + { + "field": "tls.client.not_before", + "format": "date_time" + }, + { + "field": "tls.client.x509.not_after", + "format": "date_time" + }, + { + "field": "tls.client.x509.not_before", + "format": "date_time" + }, + { + "field": "tls.server.not_after", + "format": "date_time" + }, + { + "field": "tls.server.not_before", + "format": "date_time" + }, + { + "field": "tls.server.x509.not_after", + "format": "date_time" + }, + { + "field": "tls.server.x509.not_before", + "format": "date_time" + }, + { + "field": "winlog.time_created", + "format": "date_time" + }, + { + "field": "x509.not_after", + "format": "date_time" + }, + { + "field": "x509.not_before", + "format": "date_time" + } + ], + "script_fields": {}, + "stored_fields": [ + "*" + ], + "runtime_mappings": { + "day_of_week": { + "type": "long", + "script": { + "source": "emit(doc['@timestamp'].value.dayOfWeekEnum.getValue())" + } + }, + "hour_of_day": { + "type": "long", + "script": { + "source": "emit (doc['@timestamp'].value.getHour())" + } + } + }, + "_source": { + "excludes": [] + }, + "query": { + "bool": { + "must": [], + "filter": [ + { + "range": { + "@timestamp": { + "format": "strict_date_optional_time", + "gte": "2024-05-29T13:29:01.758Z", + "lte": "2024-05-29T13:44:01.758Z" + } + } + } + ], + "should": [], + "must_not": [] + } + } + } \ No newline at end of file diff --git a/testing/tests/api_tests/data_insertion_tests/queries/filter_logonevents.json b/testing/tests/api_tests/data_insertion_tests/queries/filter_logonevents.json new file mode 100644 index 00000000..3e452ddc 
--- /dev/null +++ b/testing/tests/api_tests/data_insertion_tests/queries/filter_logonevents.json @@ -0,0 +1,127 @@ +{ + "aggs": { + "2": { + "terms": { + "field": "user.name", + "order": { + "_count": "desc" + }, + "size": 12000 + } + } + }, + "size": 100, + "script_fields": {}, + "stored_fields": [ + "*" + ], + "_source": { + "excludes": [] + }, + "query": { + "bool": { + "must": [], + "filter": [ + { + "bool": { + "filter": [ + { + "bool": { + "should": [ + { + "term": { + "event.code": { + "value": "4624" + } + } + } + ], + "minimum_should_match": 1 + } + }, + { + "bool": { + "must_not": { + "bool": { + "should": [ + { + "wildcard": { + "user.name": { + "value": "*$" + } + } + } + ], + "minimum_should_match": 1 + } + } + } + } + ] + } + }, + { + "bool": { + "should": [ + { + "match_phrase": { + "winlog.event_data.LogonType": "2" + } + }, + { + "match_phrase": { + "winlog.event_data.LogonType": "10" + } + }, + { + "match_phrase": { + "winlog.event_data.LogonType": "11" + } + }, + { + "match_phrase": { + "winlog.event_data.LogonType": "7" + } + } + ], + "minimum_should_match": 1 + } + }, + { + "range": { + "@timestamp": { + "format": "strict_date_optional_time", + "gte": "2024-06-05T18:00:00.000Z", + "lte": "2024-06-12T18:33:09.566Z" + } + } + } + ], + "should": [], + "must_not": [ + { + "bool": { + "should": [ + { + "match_phrase": { + "user.domain": "NT AUTHORITY" + } + }, + { + "match_phrase": { + "user.domain": "Window Manager" + } + }, + { + "match_phrase": { + "user.domain": "Font Driver Host" + } + } + ], + "minimum_should_match": 1 + } + } + ] + } + } + } \ No newline at end of file diff --git a/testing/tests/api_tests/data_insertion_tests/test_server.py b/testing/tests/api_tests/data_insertion_tests/test_server.py new file mode 100644 index 00000000..7228b664 --- /dev/null +++ b/testing/tests/api_tests/data_insertion_tests/test_server.py @@ -0,0 +1,55 @@ +from datetime import datetime, timedelta +import json +import time +import warnings + +import pytest +from jsonschema import validate +from jsonschema.exceptions import ValidationError +import requests +from requests.auth import HTTPBasicAuth +import urllib3 +import os + +from api_tests.helpers import make_request, load_json_schema, get_latest_winlogbeat_index, post_request, insert_winlog_data + +# Disable SSL warnings +urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) + +current_script_path = os.path.abspath(__file__) +current_script_dir = os.path.dirname(current_script_path) + + +def convertJsonFileToString(file_path): + with open(file_path, "r") as file: + return file.read() + + +@pytest.fixture(autouse=True) +def suppress_insecure_request_warning(): + warnings.simplefilter("ignore", urllib3.exceptions.InsecureRequestWarning) + + + +def test_filter_hosts_insert(es_host, es_port, username, password): + + second_response_loaded=insert_winlog_data(es_host, es_port, username, password, 'filter_hosts.json', 'hosts.json', 0) + + # Check to make sure the data was inserted + + for i in range(5): + #print(second_response_loaded['aggregations']['2']['buckets'][i]['key']) + if second_response_loaded['aggregations']['2']['buckets'][i]['key'] == 'testing.lme.local': + break + + assert(second_response_loaded['aggregations']['2']['buckets'][i]['key'] == 'testing.lme.local') + +def test_user_logon_events_insert(es_host, es_port, username, password): + + second_response_loaded=insert_winlog_data(es_host, es_port, username, password, 'filter_logonevents.json', 'logonevents.json', 2) + + # Check to make sure the data was 
inserted + assert(second_response_loaded['aggregations']['2']['buckets'][0]['key'] == 'APItestuserid') + + + diff --git a/testing/tests/api_tests/helpers.py b/testing/tests/api_tests/helpers.py new file mode 100644 index 00000000..5a1a33af --- /dev/null +++ b/testing/tests/api_tests/helpers.py @@ -0,0 +1,103 @@ +import json + +import requests +from requests.auth import HTTPBasicAuth +from datetime import datetime, timedelta +import os +import time +import urllib3 + + +def make_request(url, username, password, body=None): + auth = HTTPBasicAuth(username, password) + headers = {"Content-Type": "application/json"} + + if body: + response = requests.post( + url, auth=auth, verify=False, data=json.dumps(body), headers=headers + ) + else: + response = requests.get(url, auth=auth, verify=False) + + return response + + +def post_request(url, username, password, body): + auth = HTTPBasicAuth(username, password) + headers = {"Content-Type": "application/json"} + + response = requests.post( + url, + auth=auth, + verify=False, + data=json.dumps(body), + headers=headers + ) + + return response + + +def load_json_schema(file_path): + with open(file_path, "r") as file: + return json.load(file) + +def get_latest_winlogbeat_index(hostname, port, username, password): + url = f"https://{hostname}:{port}/_cat/indices/winlogbeat-*?h=index&s=index:desc&format=json" + response = make_request(url, username, password) + + if response.status_code == 200: + indices = json.loads(response.text) + if indices: + latest_index = indices[0]["index"] + return latest_index + else: + print("No winlogbeat indices found.") + else: + print(f"Error retrieving winlogbeat indices. Status code: {response.status_code}") + + return None + +def insert_winlog_data(es_host, es_port, username, password, filter_query_filename, fixture_filename, filter_num): + # Get the current date + today = datetime.now() + + # Generate timestamp one day before + one_day_before = (today - timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%S.%fZ") + + # Generate timestamp one day after + one_day_after = (today + timedelta(days=1)).strftime("%Y-%m-%dT%H:%M:%S.%fZ") + + # Computer software overview-> Filter Hosts + url = f"https://{es_host}:{es_port}" + + current_script_path = os.path.abspath(__file__) + current_script_dir = os.path.dirname(current_script_path) + + # This is the query from the dashboard in Kibana + filter_query = load_json_schema(f"{current_script_dir}/data_insertion_tests/queries/{filter_query_filename}") + filter_query['query']['bool']['filter'][filter_num]['range']['@timestamp']['gte'] = one_day_before + filter_query['query']['bool']['filter'][filter_num]['range']['@timestamp']['lte'] = one_day_after + + # You can use this to compare to the update later + first_response = make_request(f"{url}/winlogbeat-*/_search", username, password, filter_query) + first_response_loaded = first_response.json() + + # Get the latest winlogbeat index + latest_index = get_latest_winlogbeat_index(es_host, es_port, username, password) + + # This fixture is a pared down version of the data that will match the query + fixture = load_json_schema(f"{current_script_dir}/data_insertion_tests/fixtures/{fixture_filename}") + fixture['@timestamp'] = datetime.now().strftime("%Y-%m-%dT%H:%M:%S.%fZ") + + # Insert the fixture into the latest index + ans = post_request(f"{url}/{latest_index}/_doc", username, password, fixture) + + # Make sure to sleep for a few seconds to allow the data to be indexed + time.sleep(2) + + # Make the same query again + second_response = 
make_request(f"{url}/winlogbeat-*/_search", username, password, filter_query) + + second_response_loaded = second_response.json() + + return second_response_loaded \ No newline at end of file diff --git a/testing/tests/api_tests/linux_only/__init__.py b/testing/tests/api_tests/linux_only/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/testing/tests/api_tests/linux_only/conftest.py b/testing/tests/api_tests/linux_only/conftest.py new file mode 100644 index 00000000..65998f93 --- /dev/null +++ b/testing/tests/api_tests/linux_only/conftest.py @@ -0,0 +1,37 @@ +# conftest.py + +import os +import warnings +import pytest +import urllib3 + +# Disable SSL warnings +urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) + + +@pytest.fixture(autouse=True) +def suppress_insecure_request_warning(): + warnings.simplefilter("ignore", urllib3.exceptions.InsecureRequestWarning) + + +@pytest.fixture +def es_host(): + return os.getenv("ES_HOST", os.getenv("ELASTIC_HOST", "localhost")) + + +@pytest.fixture +def es_port(): + return os.getenv("ES_PORT", os.getenv("ELASTIC_PORT", "9200")) + + +@pytest.fixture +def username(): + return os.getenv("ES_USERNAME", os.getenv("ELASTIC_USERNAME", "elastic")) + + +@pytest.fixture +def password(): + return os.getenv( + "elastic", + os.getenv("ES_PASSWORD", os.getenv("ELASTIC_PASSWORD", "default_password")), + ) diff --git a/testing/tests/api_tests/linux_only/schemas/es_root.json b/testing/tests/api_tests/linux_only/schemas/es_root.json new file mode 100644 index 00000000..f529876c --- /dev/null +++ b/testing/tests/api_tests/linux_only/schemas/es_root.json @@ -0,0 +1,68 @@ +{ + "type": "object", + "properties": { + "name": { + "type": "string" + }, + "cluster_name": { + "type": "string" + }, + "cluster_uuid": { + "type": "string" + }, + "version": { + "type": "object", + "properties": { + "number": { + "type": "string" + }, + "build_flavor": { + "type": "string" + }, + "build_type": { + "type": "string" + }, + "build_hash": { + "type": "string" + }, + "build_date": { + "type": "string", + "format": "date-time" + }, + "build_snapshot": { + "type": "boolean" + }, + "lucene_version": { + "type": "string" + }, + "minimum_wire_compatibility_version": { + "type": "string" + }, + "minimum_index_compatibility_version": { + "type": "string" + } + }, + "required": [ + "number", + "build_flavor", + "build_type", + "build_hash", + "build_date", + "build_snapshot", + "lucene_version", + "minimum_wire_compatibility_version", + "minimum_index_compatibility_version" + ] + }, + "tagline": { + "type": "string" + } + }, + "required": [ + "name", + "cluster_name", + "cluster_uuid", + "version", + "tagline" + ] +} \ No newline at end of file diff --git a/testing/tests/api_tests/linux_only/test_data/response.json b/testing/tests/api_tests/linux_only/test_data/response.json new file mode 100644 index 00000000..7a4f834d --- /dev/null +++ b/testing/tests/api_tests/linux_only/test_data/response.json @@ -0,0 +1,17 @@ +{ + "name" : "es01", + "cluster_name" : "loggingmadeeasy-es", + "cluster_uuid" : "1dhOid2uS5Ct41bytJ6P6Q", + "version" : { + "number" : "8.11.1", + "build_flavor" : "default", + "build_type" : "docker", + "build_hash" : "6f9ff581fbcde658e6f69d6ce03050f060d1fd0c", + "build_date" : "2023-11-11T10:05:59.421038163Z", + "build_snapshot" : false, + "lucene_version" : "9.8.0", + "minimum_wire_compatibility_version" : "7.17.0", + "minimum_index_compatibility_version" : "7.0.0" + }, + "tagline" : "You Know, for Search" + } diff --git 
diff --git a/testing/tests/api_tests/linux_only/test_server.py b/testing/tests/api_tests/linux_only/test_server.py
new file mode 100644
index 00000000..9d80b91d
--- /dev/null
+++ b/testing/tests/api_tests/linux_only/test_server.py
@@ -0,0 +1,101 @@
+import json
+import warnings
+
+import pytest
+from jsonschema import validate
+from jsonschema.exceptions import ValidationError
+import requests
+from requests.auth import HTTPBasicAuth
+import urllib3
+import os
+
+from api_tests.helpers import make_request, load_json_schema
+
+# Disable SSL warnings
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+
+current_script_path = os.path.abspath(__file__)
+current_script_dir = os.path.dirname(current_script_path)
+
+
+def convertJsonFileToString(file_path):
+    with open(file_path, "r") as file:
+        return file.read()
+
+
+@pytest.fixture(autouse=True)
+def suppress_insecure_request_warning():
+    warnings.simplefilter("ignore", urllib3.exceptions.InsecureRequestWarning)
+
+
+def test_elastic_root(es_host, es_port, username, password):
+    url = f"https://{es_host}:{es_port}"
+    response = make_request(url, username, password)
+    assert response.status_code == 200, f"Expected 200, got {response.status_code}"
+    body = response.json()
+
+    assert body["name"] == "es01", f"Expected 'es01', got {body['name']}"
+    assert (
+        body["cluster_name"] == "loggingmadeeasy-es"
+    ), f"Expected 'loggingmadeeasy-es', got {body['cluster_name']}"
+    assert (
+        body["version"]["number"] == "8.11.1"
+    ), f"Expected '8.11.1', got {body['version']['number']}"
+    assert (
+        body["version"]["build_flavor"] == "default"
+    ), f"Expected 'default', got {body['version']['build_flavor']}"
+    assert (
+        body["version"]["build_type"] == "docker"
+    ), f"Expected 'docker', got {body['version']['build_type']}"
+    assert (
+        body["version"]["lucene_version"] == "9.8.0"
+    ), f"Expected '9.8.0', got {body['version']['lucene_version']}"
+    assert (
+        body["version"]["minimum_wire_compatibility_version"] == "7.17.0"
+    ), f"Expected '7.17.0', got {body['version']['minimum_wire_compatibility_version']}"
+    assert (
+        body["version"]["minimum_index_compatibility_version"] == "7.0.0"
+    ), f"Expected '7.0.0', got {body['version']['minimum_index_compatibility_version']}"
+
+    # Validating JSON Response schema
+    schema = load_json_schema(f"{current_script_dir}/schemas/es_root.json")
+    validate(instance=response.json(), schema=schema)
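`test_elastic_indices` below matches substrings of the plain-text `_cat` output; when one of its `green open ...` assertions fails on a fresh install, a sketch like this (reusing the same helpers and fixtures, not part of the patch) prints the actual index health lines for comparison:

```python
# Debugging aid: list index health the same way test_elastic_indices
# queries it, with explicit columns and sorting for easy diffing.
from api_tests.helpers import make_request


def print_index_health(es_host, es_port, username, password):
    url = f"https://{es_host}:{es_port}/_cat/indices?v&h=health,status,index&s=index"
    response = make_request(url, username, password)
    print(response.text)
```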
+
+
+def test_elastic_indices(es_host, es_port, username, password):
+    url = f"https://{es_host}:{es_port}/_cat/indices/"
+    response = make_request(url, username, password)
+
+    assert response.status_code == 200, f"Expected 200, got {response.status_code}"
+    assert (
+        "green open .internal.alerts-observability.logs.alerts-default" in response.text
+    )
+    assert (
+        "green open .internal.alerts-observability.uptime.alerts-default"
+        in response.text
+    )
+    assert (
+        "green open .internal.alerts-ml.anomaly-detection.alerts-default"
+        in response.text
+    )
+    assert (
+        "green open .internal.alerts-observability.slo.alerts-default" in response.text
+    )
+    assert (
+        "green open .internal.alerts-observability.apm.alerts-default" in response.text
+    )
+    assert (
+        "green open .internal.alerts-observability.metrics.alerts-default"
+        in response.text
+    )
+    assert (
+        "green open .kibana-observability-ai-assistant-conversations" in response.text
+    )
+    assert "green open winlogbeat" in response.text
+    assert (
+        "green open .internal.alerts-observability.threshold.alerts-default"
+        in response.text
+    )
+    assert "green open .kibana-observability-ai-assistant-kb" in response.text
+    assert "green open .internal.alerts-security.alerts-default" in response.text
+    assert "green open .internal.alerts-stack.alerts-default" in response.text
diff --git a/testing/tests/api_tests/winlogbeat/__init__.py b/testing/tests/api_tests/winlogbeat/__init__.py
new file mode 100644
index 00000000..e69de29b
diff --git a/testing/tests/api_tests/winlogbeat/conftest.py b/testing/tests/api_tests/winlogbeat/conftest.py
new file mode 100644
index 00000000..65998f93
--- /dev/null
+++ b/testing/tests/api_tests/winlogbeat/conftest.py
@@ -0,0 +1,37 @@
+# conftest.py
+
+import os
+import warnings
+import pytest
+import urllib3
+
+# Disable SSL warnings
+urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)
+
+
+@pytest.fixture(autouse=True)
+def suppress_insecure_request_warning():
+    warnings.simplefilter("ignore", urllib3.exceptions.InsecureRequestWarning)
+
+
+@pytest.fixture
+def es_host():
+    return os.getenv("ES_HOST", os.getenv("ELASTIC_HOST", "localhost"))
+
+
+@pytest.fixture
+def es_port():
+    return os.getenv("ES_PORT", os.getenv("ELASTIC_PORT", "9200"))
+
+
+@pytest.fixture
+def username():
+    return os.getenv("ES_USERNAME", os.getenv("ELASTIC_USERNAME", "elastic"))
+
+
+@pytest.fixture
+def password():
+    return os.getenv(
+        "ES_PASSWORD",
+        os.getenv("ELASTIC_PASSWORD", "default_password"),
+    )
diff --git a/testing/tests/api_tests/winlogbeat/schemas/winlogbeat_search.json b/testing/tests/api_tests/winlogbeat/schemas/winlogbeat_search.json
new file mode 100644
index 00000000..012907a8
--- /dev/null
+++ b/testing/tests/api_tests/winlogbeat/schemas/winlogbeat_search.json
@@ -0,0 +1,959 @@
+{
+  "$schema": "http://json-schema.org/draft-07/schema#",
+  "title": "Generated schema for Root",
+  "type": "object",
+  "properties": {
+    "took": {
+      "type": "number"
+    },
+    "timed_out": {
+      "type": "boolean"
+    },
+    "_shards": {
+      "type": "object",
+      "properties": {
+        "total": {
+          "type": "number"
+        },
+        "successful": {
+          "type": "number"
+        },
+        "skipped": {
+          "type": "number"
+        },
+        "failed": {
+          "type": "number"
+        }
+      },
+      "required": [
+        "total",
+        "successful",
+        "skipped",
+        "failed"
+      ]
+    },
+    "hits": {
+      "type": "object",
+      "properties": {
+        "total": {
+          "type": "object",
+          "properties": {
+            "value": {
+              "type": "number"
+            },
+            "relation": {
+              "type": "string"
+            }
+          },
+          "required": [
+            "value",
+            "relation"
+          ]
+        },
+        "max_score": {
+          "type": "number"
+        },
+        "hits": {
+          "type": "array",
+          "items": {
+            "type": "object",
+            "properties": {
+              "_index": {
+                "type": "string"
+              },
+              "_id": {
+                "type": "string"
+              },
+              "_score": {
+                "type": "number"
+              },
+              "_ignored": {
+                "type": "array",
+                "items": {
+                  "type": "string"
+                }
+              },
+              "_source": {
+                "type": "object",
+                "properties": {
+                  "agent": {
+                    "type": "object",
+                    "properties": {
+                      "name": {
+                        "type": "string"
+                      },
+                      "id": {
+                        "type": "string"
+                      },
+                      "ephemeral_id": {
+                        "type": "string"
+                      },
+                      "type": {
+                        "type": "string"
+                      },
+                      "version": {
+                        "type": "string"
+                      }
+                    },
+                    "required": [
+                      "name",
+                      "id",
+                      "ephemeral_id",
+                      "type",
+                      "version"
+                    ]
+                  },
+                  "winlog": {
+                    "type": "object",
+                    "properties": {
+                      "record_id": {
+                        "type": "number"
+                      },
+                      "computer_name": {
+                        "type": "string"
+                      },
+                      "event_id": {
+                        "type": "string"
+                      },
+                      "task": {
+                        "type": "string"
+                      },
+                      "keywords": {
+                        "type": "array",
+                        "items": {
+                          "type": "string"
+                        }
+                      },
+                      "channel": {
+                        "type": "string"
+                      },
+                      "api": {
+                        "type": "string"
+                      },
+                      "event_data": {
+                        "type": "object",
+                        "properties": {
"RuleId": { + "type": "string" + }, + "RuleName": { + "type": "string" + }, + "RuleAttr": { + "type": "string" + }, + "ProfileUsed": { + "type": "string" + }, + "Binary": { + "type": "string" + }, + "param1": { + "type": "string" + }, + "param2": { + "type": "string" + }, + "MinimumPasswordLength": { + "type": "string" + }, + "MinimumPasswordLengthAudit": { + "type": "string" + }, + "IsTestConfig": { + "type": "string" + }, + "Config": { + "type": "string" + }, + "DriveName": { + "type": "string" + }, + "CorruptionActionState": { + "type": "string" + }, + "DeviceName": { + "type": "string" + }, + "DeviceVersionMinor": { + "type": "string" + }, + "DeviceTime": { + "type": "string" + }, + "DeviceVersionMajor": { + "type": "string" + }, + "DeviceNameLength": { + "type": "string" + }, + "FinalStatus": { + "type": "string" + }, + "MiniportName": { + "type": "string" + }, + "MiniportNameLen": { + "type": "string" + }, + "Status": { + "type": "string" + }, + "Version": { + "type": "string" + }, + "VersionLen": { + "type": "string" + }, + "Group": { + "type": "string" + }, + "Number": { + "type": "string" + }, + "MaximumPerformancePercent": { + "type": "string" + }, + "MinimumThrottlePercent": { + "type": "string" + }, + "MinimumPerformancePercent": { + "type": "string" + }, + "IdleImplementation": { + "type": "string" + }, + "PerformanceImplementation": { + "type": "string" + }, + "IdleStateCount": { + "type": "string" + }, + "NominalFrequency": { + "type": "string" + }, + "State": { + "type": "string" + }, + "Reason": { + "type": "string" + }, + "CountOld": { + "type": "string" + }, + "CountNew": { + "type": "string" + }, + "UpdateReason": { + "type": "string" + }, + "EnabledNew": { + "type": "string" + }, + "ExitBootServicesExit": { + "type": "string" + }, + "ResetEndStart": { + "type": "string" + }, + "LoadOSImageStart": { + "type": "string" + }, + "StartOSImageStart": { + "type": "string" + }, + "ExitBootServicesEntry": { + "type": "string" + }, + "BitlockerUserInputTime": { + "type": "string" + }, + "EntryCount": { + "type": "string" + }, + "LoadOptions": { + "type": "string" + }, + "BootType": { + "type": "string" + }, + "BootMenuPolicy": { + "type": "string" + }, + "LastBootGood": { + "type": "string" + }, + "LastBootId": { + "type": "string" + }, + "BootStatusPolicy": { + "type": "string" + }, + "LastShutdownGood": { + "type": "string" + }, + "EnableDisableReason": { + "type": "string" + }, + "VsmPolicy": { + "type": "string" + }, + "MajorVersion": { + "type": "string" + }, + "BootMode": { + "type": "string" + }, + "StartTime": { + "type": "string" + }, + "BuildVersion": { + "type": "string" + }, + "ServiceVersion": { + "type": "string" + }, + "MinorVersion": { + "type": "string" + }, + "QfeVersion": { + "type": "string" + }, + "StopTime": { + "type": "string" + }, + "ShutdownActionType": { + "type": "string" + }, + "ShutdownEventCode": { + "type": "string" + }, + "ShutdownReason": { + "type": "string" + }, + "param7": { + "type": "string" + }, + "param5": { + "type": "string" + }, + "param6": { + "type": "string" + }, + "param4": { + "type": "string" + }, + "VsmLaunchType": { + "type": "string" + }, + "RemoteEventLogging": { + "type": "string" + }, + "TestSigning": { + "type": "string" + }, + "HypervisorLoadOptions": { + "type": "string" + }, + "SubjectLogonId": { + "type": "string" + }, + "ConfigAccessPolicy": { + "type": "string" + }, + "FlightSigning": { + "type": "string" + }, + "AdvancedOptions": { + "type": "string" + }, + "SubjectUserName": { + "type": "string" + }, + 
"KernelDebug": { + "type": "string" + }, + "HypervisorLaunchType": { + "type": "string" + }, + "DisableIntegrityChecks": { + "type": "string" + }, + "SubjectDomainName": { + "type": "string" + }, + "HypervisorDebug": { + "type": "string" + }, + "SubjectUserSid": { + "type": "string" + }, + "TargetLogonId": { + "type": "string" + }, + "TargetProcessId": { + "type": "string" + }, + "TargetProcessName": { + "type": "string" + }, + "TargetUserName": { + "type": "string" + }, + "ProcessId": { + "type": "string" + }, + "TargetDomainName": { + "type": "string" + }, + "TargetUserSid": { + "type": "string" + }, + "MandatoryLabel": { + "type": "string" + }, + "ParentProcessName": { + "type": "string" + }, + "NewProcessId": { + "type": "string" + }, + "TokenElevationType": { + "type": "string" + }, + "NewProcessName": { + "type": "string" + }, + "CommandLine": { + "type": "string" + }, + "PrivilegeList": { + "type": "string" + }, + "ProcessName": { + "type": "string" + }, + "LogonGuid": { + "type": "string" + }, + "TargetOutboundDomainName": { + "type": "string" + }, + "VirtualAccount": { + "type": "string" + }, + "IpPort": { + "type": "string" + }, + "TransmittedServices": { + "type": "string" + }, + "LmPackageName": { + "type": "string" + }, + "RestrictedAdminMode": { + "type": "string" + }, + "ElevatedToken": { + "type": "string" + }, + "WorkstationName": { + "type": "string" + }, + "LogonProcessName": { + "type": "string" + }, + "LogonType": { + "type": "string" + }, + "KeyLength": { + "type": "string" + }, + "TargetOutboundUserName": { + "type": "string" + }, + "TargetLinkedLogonId": { + "type": "string" + }, + "IpAddress": { + "type": "string" + }, + "ImpersonationLevel": { + "type": "string" + }, + "AuthenticationPackageName": { + "type": "string" + }, + "CallerProcessId": { + "type": "string" + }, + "TargetSid": { + "type": "string" + }, + "CallerProcessName": { + "type": "string" + }, + "PreviousTime": { + "type": "string" + }, + "NewTime": { + "type": "string" + }, + "Win32Error": { + "type": "string" + }, + "Library": { + "type": "string" + }, + "TargetLogonGuid": { + "type": "string" + }, + "TargetInfo": { + "type": "string" + }, + "TargetServerName": { + "type": "string" + }, + "NotificationPackageName": { + "type": "string" + }, + "PuaCount": { + "type": "string" + }, + "PuaPolicyId": { + "type": "string" + }, + "SecurityPackageName": { + "type": "string" + }, + "Path": { + "type": "string" + }, + "ScriptBlockId": { + "type": "string" + }, + "MessageNumber": { + "type": "string" + }, + "ScriptBlockText": { + "type": "string" + }, + "MessageTotal": { + "type": "string" + }, + "Payload": { + "type": "string" + }, + "ContextInfo": { + "type": "string" + }, + "param3": { + "type": "string" + }, + "DnsHostName": { + "type": "string" + }, + "SidHistory": { + "type": "string" + }, + "LogonHours": { + "type": "string" + }, + "ScriptPath": { + "type": "string" + }, + "ServicePrincipalNames": { + "type": "string" + }, + "DisplayName": { + "type": "string" + }, + "HomePath": { + "type": "string" + }, + "AllowedToDelegateTo": { + "type": "string" + }, + "UserWorkstations": { + "type": "string" + }, + "SamAccountName": { + "type": "string" + }, + "OldUacValue": { + "type": "string" + }, + "UserParameters": { + "type": "string" + }, + "HomeDirectory": { + "type": "string" + }, + "NewUacValue": { + "type": "string" + }, + "PrimaryGroupId": { + "type": "string" + }, + "AccountExpires": { + "type": "string" + }, + "ProfilePath": { + "type": "string" + }, + "UserAccountControl": { + "type": "string" + }, 
+ "PasswordLastSet": { + "type": "string" + }, + "ComputerAccountChange": { + "type": "string" + }, + "UserPrincipalName": { + "type": "string" + }, + "DwordVal": { + "type": "string" + }, + "OldTime": { + "type": "string" + }, + "ProcessID": { + "type": "string" + }, + "UserSid": { + "type": "string" + }, + "TSId": { + "type": "string" + }, + "MulticastFlowsEnabled": { + "type": "string" + }, + "LogSuccessfulConnectionsEnabled": { + "type": "string" + }, + "RemoteAdminEnabled": { + "type": "string" + }, + "LogDroppedPacketsEnabled": { + "type": "string" + }, + "OperationMode": { + "type": "string" + }, + "Profile": { + "type": "string" + }, + "GroupPolicyApplied": { + "type": "string" + }, + "ReasonForRejection": { + "type": "string" + }, + "param8": { + "type": "string" + }, + "param9": { + "type": "string" + }, + "param10": { + "type": "string" + }, + "param11": { + "type": "string" + }, + "param12": { + "type": "string" + }, + "AccessMask": { + "type": "string" + }, + "ResourceAttributes": { + "type": "string" + }, + "ObjectName": { + "type": "string" + }, + "ObjectType": { + "type": "string" + }, + "ObjectServer": { + "type": "string" + }, + "HandleId": { + "type": "string" + }, + "AccessList": { + "type": "string" + }, + "TransactionId": { + "type": "string" + }, + "AdditionalInfo": { + "type": "string" + }, + "Properties": { + "type": "string" + }, + "AdditionalInfo2": { + "type": "string" + }, + "OperationType": { + "type": "string" + }, + "TicketEncryptionType": { + "type": "string" + }, + "ServiceName": { + "type": "string" + }, + "TicketOptions": { + "type": "string" + }, + "ServiceSid": { + "type": "string" + }, + "ClientProcessId": { + "type": "string" + }, + "FQDN": { + "type": "string" + }, + "TaskName": { + "type": "string" + }, + "RpcCallClientLocality": { + "type": "string" + }, + "ParentProcessId": { + "type": "string" + }, + "ClientProcessStartKey": { + "type": "string" + }, + "TaskContentNew": { + "type": "string" + }, + "TaskContent": { + "type": "string" + }, + "PreAuthType": { + "type": "string" + }, + "Type": { + "type": "string" + }, + "ReadOperation": { + "type": "string" + }, + "ReturnCode": { + "type": "string" + }, + "CountOfCredentialsReturned": { + "type": "string" + }, + "ProcessCreationTime": { + "type": "string" + }, + "TargetName": { + "type": "string" + } + }, + "required": [] + }, + "opcode": { + "type": "string" + }, + "provider_name": { + "type": "string" + }, + "process": { + "type": "object", + "properties": { + "pid": { + "type": "number" + }, + "thread": { + "type": "object", + "properties": { + "id": { + "type": "number" + } + }, + "required": [ + "id" + ] + } + }, + "required": [ + "pid", + "thread" + ] + }, + "version": { + "type": "number" + }, + "provider_guid": { + "type": "string" + }, + "activity_id": { + "type": "string" + }, + "user": { + "type": "object", + "properties": { + "identifier": { + "type": "string" + }, + "domain": { + "type": "string" + }, + "name": { + "type": "string" + }, + "type": { + "type": "string" + } + }, + "required": [ + "identifier", + "domain", + "name", + "type" + ] + } + }, + "required": [ + "record_id", + "computer_name", + "event_id", + "task", + "channel", + "api", + "provider_name" + ] + }, + "@timestamp": { + "type": "string" + }, + "ecs": { + "type": "object", + "properties": { + "version": { + "type": "string" + } + }, + "required": [ + "version" + ] + }, + "log": { + "type": "object", + "properties": { + "level": { + "type": "string" + } + }, + "required": [ + "level" + ] + }, + "host": { + "type": 
"object", + "properties": { + "name": { + "type": "string" + } + }, + "required": [ + "name" + ] + }, + "@version": { + "type": "string" + }, + "message": { + "type": "string" + }, + "event": { + "type": "object", + "properties": { + "ingested": { + "type": "string" + }, + "code": { + "type": "string" + }, + "original": { + "type": "string" + }, + "provider": { + "type": "string" + }, + "created": { + "type": "string" + }, + "kind": { + "type": "string" + }, + "action": { + "type": "string" + }, + "outcome": { + "type": "string" + } + }, + "required": [ + "ingested", + "code", + "provider", + "created", + "kind", + "action" + ] + }, + "tags": { + "type": "array", + "items": { + "type": "string" + } + } + }, + "required": [ + "agent", + "winlog", + "@timestamp", + "ecs", + "log", + "host", + "@version", + "event", + "tags" + ] + } + }, + "required": [ + "_index", + "_id", + "_score", + "_source" + ] + } + } + }, + "required": [ + "total", + "max_score", + "hits" + ] + } + }, + "required": [ + "took", + "timed_out", + "_shards", + "hits" + ] + } \ No newline at end of file diff --git a/testing/tests/api_tests/winlogbeat/test_data/mapping_datafields.txt b/testing/tests/api_tests/winlogbeat/test_data/mapping_datafields.txt new file mode 100644 index 00000000..237898c0 --- /dev/null +++ b/testing/tests/api_tests/winlogbeat/test_data/mapping_datafields.txt @@ -0,0 +1,492 @@ +[ + "message", + "tags", + "agent.ephemeral_id", + "agent.id", + "agent.name", + "agent.type", + "agent.version", + "as.organization.name", + "client.address", + "client.as.organization.name", + "client.domain", + "client.geo.city_name", + "client.geo.continent_name", + "client.geo.country_iso_code", + "client.geo.country_name", + "client.geo.name", + "client.geo.region_iso_code", + "client.geo.region_name", + "client.mac", + "client.registered_domain", + "client.top_level_domain", + "client.user.domain", + "client.user.email", + "client.user.full_name", + "client.user.group.domain", + "client.user.group.id", + "client.user.group.name", + "client.user.hash", + "client.user.id", + "client.user.name", + "cloud.account.id", + "cloud.availability_zone", + "cloud.instance.id", + "cloud.instance.name", + "cloud.machine.type", + "cloud.provider", + "cloud.region", + "container.id", + "container.image.name", + "container.image.tag", + "container.name", + "container.runtime", + "destination.address", + "destination.as.organization.name", + "destination.domain", + "destination.geo.city_name", + "destination.geo.continent_name", + "destination.geo.country_iso_code", + "destination.geo.country_name", + "destination.geo.name", + "destination.geo.region_iso_code", + "destination.geo.region_name", + "destination.mac", + "destination.registered_domain", + "destination.top_level_domain", + "destination.user.domain", + "destination.user.email", + "destination.user.full_name", + "destination.user.group.domain", + "destination.user.group.id", + "destination.user.group.name", + "destination.user.hash", + "destination.user.id", + "destination.user.name", + "dns.answers.class", + "dns.answers.data", + "dns.answers.name", + "dns.answers.type", + "dns.header_flags", + "dns.id", + "dns.op_code", + "dns.question.class", + "dns.question.name", + "dns.question.registered_domain", + "dns.question.subdomain", + "dns.question.top_level_domain", + "dns.question.type", + "dns.response_code", + "dns.type", + "ecs.version", + "error.code", + "error.id", + "error.message", + "error.stack_trace", + "error.type", + "event.action", + "event.category", + 
"event.code", + "event.dataset", + "event.hash", + "event.id", + "event.kind", + "event.module", + "event.outcome", + "event.provider", + "event.timezone", + "event.type", + "file.device", + "file.directory", + "file.extension", + "file.gid", + "file.group", + "file.hash.md5", + "file.hash.sha1", + "file.hash.sha256", + "file.hash.sha512", + "file.inode", + "file.mode", + "file.name", + "file.owner", + "file.path", + "file.target_path", + "file.type", + "file.uid", + "geo.city_name", + "geo.continent_name", + "geo.country_iso_code", + "geo.country_name", + "geo.name", + "geo.region_iso_code", + "geo.region_name", + "group.domain", + "group.id", + "group.name", + "hash.md5", + "hash.sha1", + "hash.sha256", + "hash.sha512", + "host.architecture", + "host.geo.city_name", + "host.geo.continent_name", + "host.geo.country_iso_code", + "host.geo.country_name", + "host.geo.name", + "host.geo.region_iso_code", + "host.geo.region_name", + "host.hostname", + "host.id", + "host.mac", + "host.name", + "host.os.family", + "host.os.full", + "host.os.kernel", + "host.os.name", + "host.os.platform", + "host.os.version", + "host.type", + "host.user.domain", + "host.user.email", + "host.user.full_name", + "host.user.group.domain", + "host.user.group.id", + "host.user.group.name", + "host.user.hash", + "host.user.id", + "host.user.name", + "http.request.body.content", + "http.request.method", + "http.request.referrer", + "http.response.body.content", + "http.version", + "log.level", + "log.logger", + "log.origin.file.name", + "log.origin.function", + "log.syslog.facility.name", + "log.syslog.severity.name", + "network.application", + "network.community_id", + "network.direction", + "network.iana_number", + "network.name", + "network.protocol", + "network.transport", + "network.type", + "observer.geo.city_name", + "observer.geo.continent_name", + "observer.geo.country_iso_code", + "observer.geo.country_name", + "observer.geo.name", + "observer.geo.region_iso_code", + "observer.geo.region_name", + "observer.hostname", + "observer.mac", + "observer.name", + "observer.os.family", + "observer.os.full", + "observer.os.kernel", + "observer.os.name", + "observer.os.platform", + "observer.os.version", + "observer.product", + "observer.serial_number", + "observer.type", + "observer.vendor", + "observer.version", + "organization.id", + "organization.name", + "os.family", + "os.full", + "os.kernel", + "os.name", + "os.platform", + "os.version", + "package.architecture", + "package.checksum", + "package.description", + "package.install_scope", + "package.license", + "package.name", + "package.path", + "package.version", + "process.args", + "process.executable", + "process.hash.md5", + "process.hash.sha1", + "process.hash.sha256", + "process.hash.sha512", + "process.name", + "process.thread.name", + "process.title", + "process.working_directory", + "server.address", + "server.as.organization.name", + "server.domain", + "server.geo.city_name", + "server.geo.continent_name", + "server.geo.country_iso_code", + "server.geo.country_name", + "server.geo.name", + "server.geo.region_iso_code", + "server.geo.region_name", + "server.mac", + "server.registered_domain", + "server.top_level_domain", + "server.user.domain", + "server.user.email", + "server.user.full_name", + "server.user.group.domain", + "server.user.group.id", + "server.user.group.name", + "server.user.hash", + "server.user.id", + "server.user.name", + "service.ephemeral_id", + "service.id", + "service.name", + "service.node.name", + "service.state", + "service.type", 
+ "service.version", + "source.address", + "source.as.organization.name", + "source.domain", + "source.geo.city_name", + "source.geo.continent_name", + "source.geo.country_iso_code", + "source.geo.country_name", + "source.geo.name", + "source.geo.region_iso_code", + "source.geo.region_name", + "source.mac", + "source.registered_domain", + "source.top_level_domain", + "source.user.domain", + "source.user.email", + "source.user.full_name", + "source.user.group.domain", + "source.user.group.id", + "source.user.group.name", + "source.user.hash", + "source.user.id", + "source.user.name", + "threat.framework", + "threat.tactic.id", + "threat.tactic.name", + "threat.tactic.reference", + "threat.technique.id", + "threat.technique.name", + "threat.technique.reference", + "trace.id", + "transaction.id", + "url.domain", + "url.extension", + "url.fragment", + "url.full", + "url.original", + "url.password", + "url.path", + "url.query", + "url.registered_domain", + "url.scheme", + "url.top_level_domain", + "url.username", + "user.domain", + "user.email", + "user.full_name", + "user.group.domain", + "user.group.id", + "user.group.name", + "user.hash", + "user.id", + "user.name", + "user_agent.device.name", + "user_agent.name", + "user_agent.original.text", + "user_agent.original", + "user_agent.os.family", + "user_agent.os.full", + "user_agent.os.kernel", + "user_agent.os.name", + "user_agent.os.platform", + "user_agent.os.version", + "user_agent.version", + "agent.hostname", + "timeseries.instance", + "cloud.image.id", + "host.os.build", + "host.os.codename", + "kubernetes.pod.name", + "kubernetes.pod.uid", + "kubernetes.namespace", + "kubernetes.node.name", + "kubernetes.node.hostname", + "kubernetes.replicaset.name", + "kubernetes.deployment.name", + "kubernetes.statefulset.name", + "kubernetes.container.name", + "jolokia.agent.version", + "jolokia.agent.id", + "jolokia.server.product", + "jolokia.server.version", + "jolokia.server.vendor", + "jolokia.url", + "event.original", + "winlog.api", + "winlog.activity_id", + "winlog.computer_name", + "winlog.event_data.AuthenticationPackageName", + "winlog.event_data.Binary", + "winlog.event_data.BitlockerUserInputTime", + "winlog.event_data.BootMode", + "winlog.event_data.BootType", + "winlog.event_data.BuildVersion", + "winlog.event_data.Company", + "winlog.event_data.CorruptionActionState", + "winlog.event_data.CreationUtcTime", + "winlog.event_data.Description", + "winlog.event_data.Detail", + "winlog.event_data.DeviceName", + "winlog.event_data.DeviceNameLength", + "winlog.event_data.DeviceTime", + "winlog.event_data.DeviceVersionMajor", + "winlog.event_data.DeviceVersionMinor", + "winlog.event_data.DriveName", + "winlog.event_data.DriverName", + "winlog.event_data.DriverNameLength", + "winlog.event_data.DwordVal", + "winlog.event_data.EntryCount", + "winlog.event_data.ExtraInfo", + "winlog.event_data.FailureName", + "winlog.event_data.FailureNameLength", + "winlog.event_data.FileVersion", + "winlog.event_data.FinalStatus", + "winlog.event_data.Group", + "winlog.event_data.IdleImplementation", + "winlog.event_data.IdleStateCount", + "winlog.event_data.ImpersonationLevel", + "winlog.event_data.IntegrityLevel", + "winlog.event_data.IpAddress", + "winlog.event_data.IpPort", + "winlog.event_data.KeyLength", + "winlog.event_data.LastBootGood", + "winlog.event_data.LastShutdownGood", + "winlog.event_data.LmPackageName", + "winlog.event_data.LogonGuid", + "winlog.event_data.LogonId", + "winlog.event_data.LogonProcessName", + "winlog.event_data.LogonType", + 
"winlog.event_data.MajorVersion", + "winlog.event_data.MaximumPerformancePercent", + "winlog.event_data.MemberName", + "winlog.event_data.MemberSid", + "winlog.event_data.MinimumPerformancePercent", + "winlog.event_data.MinimumThrottlePercent", + "winlog.event_data.MinorVersion", + "winlog.event_data.NewProcessId", + "winlog.event_data.NewProcessName", + "winlog.event_data.NewSchemeGuid", + "winlog.event_data.NewTime", + "winlog.event_data.NominalFrequency", + "winlog.event_data.Number", + "winlog.event_data.OldSchemeGuid", + "winlog.event_data.OldTime", + "winlog.event_data.OriginalFileName", + "winlog.event_data.Path", + "winlog.event_data.PerformanceImplementation", + "winlog.event_data.PreviousCreationUtcTime", + "winlog.event_data.PreviousTime", + "winlog.event_data.PrivilegeList", + "winlog.event_data.ProcessId", + "winlog.event_data.ProcessName", + "winlog.event_data.ProcessPath", + "winlog.event_data.ProcessPid", + "winlog.event_data.Product", + "winlog.event_data.PuaCount", + "winlog.event_data.PuaPolicyId", + "winlog.event_data.QfeVersion", + "winlog.event_data.Reason", + "winlog.event_data.SchemaVersion", + "winlog.event_data.ScriptBlockText", + "winlog.event_data.ServiceName", + "winlog.event_data.ServiceVersion", + "winlog.event_data.ShutdownActionType", + "winlog.event_data.ShutdownEventCode", + "winlog.event_data.ShutdownReason", + "winlog.event_data.Signature", + "winlog.event_data.SignatureStatus", + "winlog.event_data.Signed", + "winlog.event_data.StartTime", + "winlog.event_data.State", + "winlog.event_data.Status", + "winlog.event_data.StopTime", + "winlog.event_data.SubjectDomainName", + "winlog.event_data.SubjectLogonId", + "winlog.event_data.SubjectUserName", + "winlog.event_data.SubjectUserSid", + "winlog.event_data.TSId", + "winlog.event_data.TargetDomainName", + "winlog.event_data.TargetInfo", + "winlog.event_data.TargetLogonGuid", + "winlog.event_data.TargetLogonId", + "winlog.event_data.TargetServerName", + "winlog.event_data.TargetUserName", + "winlog.event_data.TargetUserSid", + "winlog.event_data.TerminalSessionId", + "winlog.event_data.TokenElevationType", + "winlog.event_data.TransmittedServices", + "winlog.event_data.UserSid", + "winlog.event_data.Version", + "winlog.event_data.Workstation", + "winlog.event_data.param1", + "winlog.event_data.param2", + "winlog.event_data.param3", + "winlog.event_data.param4", + "winlog.event_data.param5", + "winlog.event_data.param6", + "winlog.event_data.param7", + "winlog.event_data.param8", + "winlog.event_id", + "winlog.keywords", + "winlog.channel", + "winlog.record_id", + "winlog.related_activity_id", + "winlog.opcode", + "winlog.provider_guid", + "winlog.provider_name", + "winlog.task", + "winlog.user.identifier", + "winlog.user.name", + "winlog.user.domain", + "winlog.user.type", + "powershell.id", + "powershell.pipeline_id", + "powershell.runspace_id", + "powershell.command.path", + "powershell.command.name", + "powershell.command.type", + "powershell.command.value", + "powershell.command.invocation_details.type", + "powershell.command.invocation_details.related_command", + "powershell.command.invocation_details.name", + "powershell.command.invocation_details.value", + "powershell.connected_user.domain", + "powershell.connected_user.name", + "powershell.engine.version", + "powershell.engine.previous_state", + "powershell.engine.new_state", + "powershell.file.script_block_id", + "powershell.file.script_block_text", + "powershell.process.executable_version", + "powershell.provider.new_state", + 
"powershell.provider.name", + "winlog.logon.type", + "winlog.logon.id", + "winlog.logon.failure.reason", + "winlog.logon.failure.status", + "winlog.logon.failure.sub_status", + "sysmon.dns.status", + "fields.*", + ] \ No newline at end of file diff --git a/testing/tests/api_tests/winlogbeat/test_data/mapping_response.json b/testing/tests/api_tests/winlogbeat/test_data/mapping_response.json new file mode 100644 index 00000000..c66cd188 --- /dev/null +++ b/testing/tests/api_tests/winlogbeat/test_data/mapping_response.json @@ -0,0 +1,7379 @@ +{ + "winlogbeat-000001": { + "mappings": { + "_meta": { + "beat": "winlogbeat", + "version": "7.17.6" + }, + "dynamic_templates": [ + { + "labels": { + "path_match": "labels.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "container.labels": { + "path_match": "container.labels.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "fields": { + "path_match": "fields.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "docker.container.labels": { + "path_match": "docker.container.labels.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "kubernetes.labels.*": { + "path_match": "kubernetes.labels.*", + "mapping": { + "type": "keyword" + } + } + }, + { + "kubernetes.annotations.*": { + "path_match": "kubernetes.annotations.*", + "mapping": { + "type": "keyword" + } + } + }, + { + "kubernetes.selectors.*": { + "path_match": "kubernetes.selectors.*", + "mapping": { + "type": "keyword" + } + } + }, + { + "winlog.event_data": { + "path_match": "winlog.event_data.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "winlog.user_data": { + "path_match": "winlog.user_data.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "strings_as_keyword": { + "match_mapping_type": "string", + "mapping": { + "ignore_above": 1024, + "type": "keyword" + } + } + } + ], + "date_detection": false, + "properties": { + "@timestamp": { + "type": "date" + }, + "@version": { + "type": "keyword", + "ignore_above": 1024 + }, + "agent": { + "properties": { + "build": { + "properties": { + "original": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ephemeral_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "hostname": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "client": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + 
"ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "nat": { + "properties": { + "ip": { + "type": "ip" + }, + "port": { + "type": "long" + } + } + }, + "packets": { + "type": "long" + }, + "port": { + "type": "long" + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "cloud": { + "properties": { + "account": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "availability_zone": { + "type": "keyword", + "ignore_above": 1024 + }, + "image": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "instance": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "machine": { + "properties": { + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "project": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "provider": { + "type": "keyword", + "ignore_above": 1024 + }, + "region": { + "type": "keyword", + "ignore_above": 1024 + }, + "service": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "container": { + "properties": { + "id": { + "type": 
"keyword", + "ignore_above": 1024 + }, + "image": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "tag": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "labels": { + "type": "object" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "runtime": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "data_stream": { + "properties": { + "dataset": { + "type": "constant_keyword" + }, + "namespace": { + "type": "constant_keyword" + }, + "type": { + "type": "constant_keyword" + } + } + }, + "destination": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "nat": { + "properties": { + "ip": { + "type": "ip" + }, + "port": { + "type": "long" + } + } + }, + "packets": { + "type": "long" + }, + "port": { + "type": "long" + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "dll": { + "properties": { + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + 
"trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "dns": { + "properties": { + "answers": { + "properties": { + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "ttl": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "header_flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "op_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "question": { + "properties": { + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "resolved_ip": { + "type": "ip" + }, + "response_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "docker": { + "properties": { + "container": { + "properties": { + "labels": { + "type": "object" + } + } + } + } + }, + "ecs": { + "properties": { + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": 
"keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "error": { + "properties": { + "code": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "message": { + "type": "match_only_text" + }, + "stack_trace": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "event": { + "properties": { + "action": { + "type": "keyword", + "ignore_above": 1024 + }, + "agent_id_status": { + "type": "keyword", + "ignore_above": 1024 + }, + "category": { + "type": "keyword", + "ignore_above": 1024 + }, + "code": { + "type": "keyword", + "ignore_above": 1024 + }, + "created": { + "type": "date" + }, + "dataset": { + "type": "keyword", + "ignore_above": 1024 + }, + "duration": { + "type": "long" + }, + "end": { + "type": "date" + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "ingested": { + "type": "date" + }, + "kind": { + "type": "keyword", + "ignore_above": 1024 + }, + "module": { + "type": "keyword", + "ignore_above": 1024 + }, + "original": { + "type": "keyword", + "ignore_above": 1024 + }, + "outcome": { + "type": "keyword", + "ignore_above": 1024 + }, + "provider": { + "type": "keyword", + "ignore_above": 1024 + }, + "reason": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "risk_score": { + "type": "float" + }, + "risk_score_norm": { + "type": "float" + }, + "sequence": { + "type": "long" + }, + "severity": { + "type": "long" + }, + "start": { + "type": "date" + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "url": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "fields": { + "type": "object" + }, + "file": { + "properties": { + "accessed": { + "type": "date" + }, + "attributes": { + "type": "keyword", + "ignore_above": 1024 + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "created": { + "type": "date" + }, + "ctime": { + "type": "date" + }, + "device": { + "type": "keyword", + "ignore_above": 1024 + }, + "directory": { + "type": "keyword", + "ignore_above": 1024 + }, + "drive_letter": { + "type": "keyword", + "ignore_above": 1 + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", 
+ "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fork_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "gid": { + "type": "keyword", + "ignore_above": 1024 + }, + "group": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "inode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "mode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mtime": { + "type": "date" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "owner": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "size": { + "type": "long" + }, + "target_path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "uid": { + "type": "keyword", + "ignore_above": 1024 + }, + "x509": { + 
"properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "host": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "containerized": { + "type": "boolean" + }, + "cpu": { + "properties": { + "usage": { + "type": "scaled_float", + "scaling_factor": 1000.0 + } + } + }, + "disk": { + "properties": { + "read": { + "properties": { + "bytes": { + "type": "long" + } + } + }, + "write": { + "properties": { + "bytes": { + "type": "long" + } + } + } + } + }, + "domain": { + "type": 
"keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hostname": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "network": { + "properties": { + "egress": { + "properties": { + "bytes": { + "type": "long" + }, + "packets": { + "type": "long" + } + } + }, + "ingress": { + "properties": { + "bytes": { + "type": "long" + }, + "packets": { + "type": "long" + } + } + } + } + }, + "os": { + "properties": { + "build": { + "type": "keyword", + "ignore_above": 1024 + }, + "codename": { + "type": "keyword", + "ignore_above": 1024 + }, + "family": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "kernel": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "platform": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "uptime": { + "type": "long" + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "http": { + "properties": { + "request": { + "properties": { + "body": { + "properties": { + "bytes": { + "type": "long" + }, + "content": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "method": { + "type": "keyword", + "ignore_above": 1024 + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "referrer": { + "type": "keyword", + "ignore_above": 1024 + } + } + 
}, + "response": { + "properties": { + "body": { + "properties": { + "bytes": { + "type": "long" + }, + "content": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "status_code": { + "type": "long" + } + } + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "interface": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "jolokia": { + "properties": { + "agent": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "secured": { + "type": "boolean" + }, + "server": { + "properties": { + "product": { + "type": "keyword", + "ignore_above": 1024 + }, + "vendor": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "url": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "kubernetes": { + "properties": { + "annotations": { + "properties": { + "*": { + "type": "object" + } + } + }, + "container": { + "properties": { + "image": { + "type": "alias", + "path": "container.image.name" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "deployment": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "labels": { + "properties": { + "*": { + "type": "object" + } + } + }, + "namespace": { + "type": "keyword", + "ignore_above": 1024 + }, + "node": { + "properties": { + "hostname": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "pod": { + "properties": { + "ip": { + "type": "ip" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "uid": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "replicaset": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "selectors": { + "properties": { + "*": { + "type": "object" + } + } + }, + "statefulset": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "labels": { + "type": "object" + }, + "log": { + "properties": { + "file": { + "properties": { + "path": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "level": { + "type": "keyword", + "ignore_above": 1024 + }, + "logger": { + "type": "keyword", + "ignore_above": 1024 + }, + "origin": { + "properties": { + "file": { + "properties": { + "line": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "function": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "original": { + "type": "keyword", + "index": false, + "doc_values": false, + "ignore_above": 1024 + }, + "syslog": { + "properties": { + "facility": { + "properties": { + "code": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "priority": { + "type": "long" + }, + "severity": { + "properties": { + "code": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + } + } + }, + "message": { + "type": "match_only_text" + }, + "network": { + "properties": { + "application": { + "type": "keyword", + "ignore_above": 1024 + }, + "bytes": { 
+ "type": "long" + }, + "community_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "direction": { + "type": "keyword", + "ignore_above": 1024 + }, + "forwarded_ip": { + "type": "ip" + }, + "iana_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "inner": { + "properties": { + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "packets": { + "type": "long" + }, + "protocol": { + "type": "keyword", + "ignore_above": 1024 + }, + "transport": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "observer": { + "properties": { + "egress": { + "properties": { + "interface": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "zone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hostname": { + "type": "keyword", + "ignore_above": 1024 + }, + "ingress": { + "properties": { + "interface": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "zone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "os": { + "properties": { + "family": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "kernel": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "platform": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "product": { + "type": "keyword", + "ignore_above": 
1024 + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "vendor": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "orchestrator": { + "properties": { + "api_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "cluster": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "url": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "namespace": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "resource": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "organization": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "os": { + "properties": { + "family": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "kernel": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "platform": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "package": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "build_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "checksum": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "install_scope": { + "type": "keyword", + "ignore_above": 1024 + }, + "installed": { + "type": "date" + }, + "license": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "powershell": { + "properties": { + "command": { + "properties": { + "invocation_details": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "related_command": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "text", + "norms": false + } + } + }, + "name": { + "type": "keyword", + 
"ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "text", + "norms": false + } + } + }, + "connected_user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "engine": { + "properties": { + "new_state": { + "type": "keyword", + "ignore_above": 1024 + }, + "previous_state": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "file": { + "properties": { + "script_block_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "script_block_text": { + "type": "text", + "norms": false + } + } + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "pipeline_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "process": { + "properties": { + "executable_version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "provider": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "new_state": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "runspace_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "sequence": { + "type": "long" + }, + "total": { + "type": "long" + } + } + }, + "process": { + "properties": { + "args": { + "type": "keyword", + "ignore_above": 1024 + }, + "args_count": { + "type": "long" + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "command_line": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": 
"long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "end": { + "type": "date" + }, + "entity_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "executable": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "exit_code": { + "type": "long" + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "parent": { + "properties": { + "args": { + "type": "keyword", + "ignore_above": 1024 + }, + "args_count": { + "type": "long" + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "command_line": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + 
"ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "end": { + "type": "date" + }, + "entity_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "executable": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "exit_code": { + "type": "long" + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "pgid": { + "type": "long" + }, + "pid": { + "type": "long" + }, + "ppid": { + "type": "long" + }, + "start": { + "type": "date" + }, + "thread": { + "properties": { + "id": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "title": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "uptime": { + "type": "long" + }, + "working_directory": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "pgid": { + "type": "long" + }, + "pid": { + "type": "long" + }, + "ppid": { + "type": "long" + }, + "start": { + "type": "date" + }, + "thread": { + "properties": { + "id": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "title": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "uptime": { + "type": "long" + }, + "working_directory": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "registry": { + "properties": { + "data": { + "properties": { + "bytes": { + "type": "keyword", + "ignore_above": 1024 + }, + "strings": { + "type": "wildcard", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hive": { + "type": "keyword", + "ignore_above": 1024 + }, + "key": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "related": { + "properties": { + 
"hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "hosts": { + "type": "keyword", + "ignore_above": 1024 + }, + "ip": { + "type": "ip" + }, + "user": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "rule": { + "properties": { + "author": { + "type": "keyword", + "ignore_above": 1024 + }, + "category": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "license": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "ruleset": { + "type": "keyword", + "ignore_above": 1024 + }, + "uuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "server": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "nat": { + "properties": { + "ip": { + "type": "ip" + }, + "port": { + "type": "long" + } + } + }, + "packets": { + "type": "long" + }, + "port": { + "type": "long" + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "service": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "environment": { + "type": "keyword", + "ignore_above": 
1024 + }, + "ephemeral_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "node": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "state": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "source": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "nat": { + "properties": { + "ip": { + "type": "ip" + }, + "port": { + "type": "long" + } + } + }, + "packets": { + "type": "long" + }, + "port": { + "type": "long" + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "span": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "sysmon": { + "properties": { + "dns": { + "properties": { + "status": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "file": { + "properties": { + "archived": { + "type": "boolean" + }, + "is_executable": { + "type": "boolean" + } + } + } + } + }, + "tags": { + "type": "keyword", + "ignore_above": 1024 + }, + "threat": { + "properties": { + "enrichments": { + "type": "nested", + "properties": { + "indicator": { + "properties": { + "as": { + 
"properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "confidence": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "file": { + "properties": { + "accessed": { + "type": "date" + }, + "attributes": { + "type": "keyword", + "ignore_above": 1024 + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "created": { + "type": "date" + }, + "ctime": { + "type": "date" + }, + "device": { + "type": "keyword", + "ignore_above": 1024 + }, + "directory": { + "type": "keyword", + "ignore_above": 1024 + }, + "drive_letter": { + "type": "keyword", + "ignore_above": 1 + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fork_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "gid": { + "type": "keyword", + "ignore_above": 1024 + }, + "group": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, 
+ "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "inode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "mode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mtime": { + "type": "date" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "owner": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "size": { + "type": "long" + }, + "target_path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "uid": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "first_seen": { + "type": "date" + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "last_seen": { + "type": "date" + }, + "marking": { + "properties": { + "tlp": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "modified_at": { + "type": "date" + }, + "port": { + "type": "long" + }, + "provider": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "registry": { + "properties": { + "data": { + "properties": { + "bytes": { + "type": "keyword", + "ignore_above": 1024 + }, + "strings": { + "type": "wildcard", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hive": { + "type": "keyword", + "ignore_above": 1024 + }, + "key": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "scanner_stats": { + "type": "long" + }, + "sightings": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "url": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fragment": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": 
"wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "original": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "password": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "wildcard", + "ignore_above": 1024 + }, + "port": { + "type": "long" + }, + "query": { + "type": "keyword", + "ignore_above": 1024 + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "scheme": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "username": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "matched": { + "properties": { + "atomic": { + "type": "keyword", + "ignore_above": 1024 + }, + "field": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "index": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "framework": { + "type": "keyword", + "ignore_above": 1024 + }, + "group": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "indicator": { + "properties": { + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, 
+ "confidence": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "file": { + "properties": { + "accessed": { + "type": "date" + }, + "attributes": { + "type": "keyword", + "ignore_above": 1024 + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "created": { + "type": "date" + }, + "ctime": { + "type": "date" + }, + "device": { + "type": "keyword", + "ignore_above": 1024 + }, + "directory": { + "type": "keyword", + "ignore_above": 1024 + }, + "drive_letter": { + "type": "keyword", + "ignore_above": 1 + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fork_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "gid": { + "type": "keyword", + "ignore_above": 1024 + }, + "group": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "inode": { + "type": 
"keyword", + "ignore_above": 1024 + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "mode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mtime": { + "type": "date" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "owner": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "size": { + "type": "long" + }, + "target_path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "uid": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "first_seen": { + "type": "date" + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "last_seen": { + "type": "date" + }, + "marking": { + "properties": { + "tlp": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "modified_at": { + "type": "date" + }, + "port": { + "type": "long" + }, + "provider": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "registry": { + "properties": { + "data": { + "properties": { + "bytes": { + "type": "keyword", + "ignore_above": 1024 + }, + "strings": { + "type": "wildcard", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hive": { + "type": "keyword", + "ignore_above": 1024 + }, + "key": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "scanner_stats": { + "type": "long" + }, + "sightings": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "url": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fragment": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "original": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } 
+ } + }, + "password": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "wildcard", + "ignore_above": 1024 + }, + "port": { + "type": "long" + }, + "query": { + "type": "keyword", + "ignore_above": 1024 + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "scheme": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "username": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "software": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "platforms": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "tactic": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "technique": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "subtechnique": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 
+ } + } + } + } + } + } + }, + "timeseries": { + "properties": { + "instance": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "tls": { + "properties": { + "cipher": { + "type": "keyword", + "ignore_above": 1024 + }, + "client": { + "properties": { + "certificate": { + "type": "keyword", + "ignore_above": 1024 + }, + "certificate_chain": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "issuer": { + "type": "keyword", + "ignore_above": 1024 + }, + "ja3": { + "type": "keyword", + "ignore_above": 1024 + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "server_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "type": "keyword", + "ignore_above": 1024 + }, + "supported_ciphers": { + "type": "keyword", + "ignore_above": 1024 + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "established": { + "type": "boolean" + }, + "next_protocol": { + "type": "keyword", + "ignore_above": 1024 + }, + "resumed": { + "type": "boolean" + }, + "server": { + "properties": { + "certificate": { + "type": "keyword", + "ignore_above": 1024 + }, + "certificate_chain": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "issuer": { + "type": "keyword", + "ignore_above": 1024 + }, + "ja3s": { + "type": "keyword", + "ignore_above": 1024 + }, + "not_after": { + 
"type": "date" + }, + "not_before": { + "type": "date" + }, + "subject": { + "type": "keyword", + "ignore_above": 1024 + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + }, + "version_protocol": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "trace": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "transaction": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "url": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fragment": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "original": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "password": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "wildcard", + "ignore_above": 1024 + }, + "port": { + "type": "long" + }, + "query": { + "type": "keyword", + "ignore_above": 1024 + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "scheme": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "username": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "user": { + "properties": { + "changes": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + 
"text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "effective": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + }, + "target": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "user_agent": { + "properties": { + "device": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "original": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "os": { + "properties": { + "family": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + 
} + } + }, + "kernel": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "platform": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "vulnerability": { + "properties": { + "category": { + "type": "keyword", + "ignore_above": 1024 + }, + "classification": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "enumeration": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "report_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "scanner": { + "properties": { + "vendor": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "score": { + "properties": { + "base": { + "type": "float" + }, + "environmental": { + "type": "float" + }, + "temporal": { + "type": "float" + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "severity": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "winlog": { + "properties": { + "activity_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "api": { + "type": "keyword", + "ignore_above": 1024 + }, + "channel": { + "type": "keyword", + "ignore_above": 1024 + }, + "computer_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "event_data": { + "properties": { + "AccessGranted": { + "type": "keyword" + }, + "AccessList": { + "type": "keyword" + }, + "AccessMask": { + "type": "keyword" + }, + "AccessRemoved": { + "type": "keyword" + }, + "AccountExpires": { + "type": "keyword" + }, + "AccountName": { + "type": "keyword" + }, + "AdditionalInfo": { + "type": "keyword" + }, + "AdditionalInfo2": { + "type": "keyword" + }, + "Address": { + "type": "keyword" + }, + "AddressLength": { + "type": "keyword" + }, + "AdvancedOptions": { + "type": "keyword" + }, + "AlgorithmName": { + "type": "keyword" + }, + "AllowedToDelegateTo": { + "type": "keyword" + }, + "AuthenticationPackageName": { + "type": "keyword", + "ignore_above": 1024 + }, + "Binary": { + "type": "keyword", + "ignore_above": 1024 + }, + "BitlockerUserInputTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "BootMenuPolicy": { + "type": "keyword" + }, + "BootMode": { + "type": "keyword", + "ignore_above": 1024 + }, + "BootStatusPolicy": { + "type": "keyword" + }, + "BootType": { + "type": "keyword", + "ignore_above": 1024 + }, + "BuildVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "CallerProcessId": { + "type": "keyword" + }, + "CallerProcessName": { + "type": "keyword" + }, + "ClientCreationTime": { + "type": "keyword" + }, + "ClientProcessId": { + "type": "keyword" + }, + "Company": { + "type": "keyword", + "ignore_above": 1024 + }, + "ComputerAccountChange": { + "type": "keyword" + }, + "Config": { + "type": "keyword" + }, + "ConfigAccessPolicy": { + "type": "keyword" + }, + "ContextInfo": { + "type": "keyword" + }, + "CorruptionActionState": { + "type": "keyword", + "ignore_above": 
1024 + }, + "CountNew": { + "type": "keyword" + }, + "CountOfCredentialsReturned": { + "type": "keyword" + }, + "CountOld": { + "type": "keyword" + }, + "CreationUtcTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "CurrentStratumNumber": { + "type": "keyword" + }, + "DCName": { + "type": "keyword" + }, + "Default SD String:": { + "type": "keyword" + }, + "Description": { + "type": "keyword", + "ignore_above": 1024 + }, + "Detail": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceName": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceNameLength": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceVersionMajor": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceVersionMinor": { + "type": "keyword", + "ignore_above": 1024 + }, + "DirtyPages": { + "type": "keyword" + }, + "DisableIntegrityChecks": { + "type": "keyword" + }, + "DisplayName": { + "type": "keyword" + }, + "DnsHostName": { + "type": "keyword" + }, + "DomainBehaviorVersion": { + "type": "keyword" + }, + "DomainName": { + "type": "keyword" + }, + "DomainPolicyChanged": { + "type": "keyword" + }, + "DomainSid": { + "type": "keyword" + }, + "DriveName": { + "type": "keyword", + "ignore_above": 1024 + }, + "DriverName": { + "type": "keyword", + "ignore_above": 1024 + }, + "DriverNameLength": { + "type": "keyword", + "ignore_above": 1024 + }, + "Dummy": { + "type": "keyword" + }, + "DwordVal": { + "type": "keyword", + "ignore_above": 1024 + }, + "ElevatedToken": { + "type": "keyword" + }, + "EnableDisableReason": { + "type": "keyword" + }, + "EnabledNew": { + "type": "keyword" + }, + "EntryCount": { + "type": "keyword", + "ignore_above": 1024 + }, + "ErrorMessage": { + "type": "keyword" + }, + "ErrorString": { + "type": "keyword" + }, + "ExitBootServicesEntry": { + "type": "keyword" + }, + "ExitBootServicesExit": { + "type": "keyword" + }, + "ExtraInfo": { + "type": "keyword", + "ignore_above": 1024 + }, + "FailureName": { + "type": "keyword", + "ignore_above": 1024 + }, + "FailureNameLength": { + "type": "keyword", + "ignore_above": 1024 + }, + "FileVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "FinalStatus": { + "type": "keyword", + "ignore_above": 1024 + }, + "FlightSigning": { + "type": "keyword" + }, + "ForceLogoff": { + "type": "keyword" + }, + "Group": { + "type": "keyword", + "ignore_above": 1024 + }, + "GroupName": { + "type": "keyword" + }, + "HandleId": { + "type": "keyword" + }, + "HiveName": { + "type": "keyword" + }, + "HiveNameLength": { + "type": "keyword" + }, + "HomeDirectory": { + "type": "keyword" + }, + "HomePath": { + "type": "keyword" + }, + "HypervisorDebug": { + "type": "keyword" + }, + "HypervisorLaunchType": { + "type": "keyword" + }, + "HypervisorLoadOptions": { + "type": "keyword" + }, + "IdleImplementation": { + "type": "keyword", + "ignore_above": 1024 + }, + "IdleStateCount": { + "type": "keyword", + "ignore_above": 1024 + }, + "ImagePath": { + "type": "keyword" + }, + "ImpersonationLevel": { + "type": "keyword", + "ignore_above": 1024 + }, + "IntegrityLevel": { + "type": "keyword", + "ignore_above": 1024 + }, + "IpAddress": { + "type": "keyword", + "ignore_above": 1024 + }, + "IpPort": { + "type": "keyword", + "ignore_above": 1024 + }, + "IsTestConfig": { + "type": "keyword" + }, + "KernelDebug": { + "type": "keyword" + }, + "KeyFilePath": { + "type": "keyword" + }, + "KeyLength": { + "type": "keyword", + "ignore_above": 1024 + }, + "KeyName": { + "type": "keyword" + }, 
+ "KeyType": { + "type": "keyword" + }, + "KeysUpdated": { + "type": "keyword" + }, + "LastBootGood": { + "type": "keyword", + "ignore_above": 1024 + }, + "LastBootId": { + "type": "keyword" + }, + "LastShutdownGood": { + "type": "keyword", + "ignore_above": 1024 + }, + "Library": { + "type": "keyword" + }, + "LmPackageName": { + "type": "keyword", + "ignore_above": 1024 + }, + "LoadOSImageStart": { + "type": "keyword" + }, + "LoadOptions": { + "type": "keyword" + }, + "LockoutDuration": { + "type": "keyword" + }, + "LockoutObservationWindow": { + "type": "keyword" + }, + "LockoutThreshold": { + "type": "keyword" + }, + "LogonGuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "LogonHours": { + "type": "keyword" + }, + "LogonId": { + "type": "keyword", + "ignore_above": 1024 + }, + "LogonProcessName": { + "type": "keyword", + "ignore_above": 1024 + }, + "LogonType": { + "type": "keyword", + "ignore_above": 1024 + }, + "MachineAccountQuota": { + "type": "keyword" + }, + "MajorVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "MandatoryLabel": { + "type": "keyword" + }, + "MaxPasswordAge": { + "type": "keyword" + }, + "MaximumPerformancePercent": { + "type": "keyword", + "ignore_above": 1024 + }, + "MemberName": { + "type": "keyword", + "ignore_above": 1024 + }, + "MemberSid": { + "type": "keyword", + "ignore_above": 1024 + }, + "MessageNumber": { + "type": "keyword" + }, + "MessageTotal": { + "type": "keyword" + }, + "MinPasswordAge": { + "type": "keyword" + }, + "MinPasswordLength": { + "type": "keyword" + }, + "MinimumPasswordLength": { + "type": "keyword" + }, + "MinimumPasswordLengthAudit": { + "type": "keyword" + }, + "MinimumPerformancePercent": { + "type": "keyword", + "ignore_above": 1024 + }, + "MinimumThrottlePercent": { + "type": "keyword", + "ignore_above": 1024 + }, + "MiniportName": { + "type": "keyword" + }, + "MiniportNameLen": { + "type": "keyword" + }, + "MinorVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "MixedDomainMode": { + "type": "keyword" + }, + "NewProcessId": { + "type": "keyword", + "ignore_above": 1024 + }, + "NewProcessName": { + "type": "keyword", + "ignore_above": 1024 + }, + "NewSchemeGuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "NewSd": { + "type": "keyword" + }, + "NewSize": { + "type": "keyword" + }, + "NewTargetUserName": { + "type": "keyword" + }, + "NewTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "NewUacValue": { + "type": "keyword" + }, + "NominalFrequency": { + "type": "keyword", + "ignore_above": 1024 + }, + "Number": { + "type": "keyword", + "ignore_above": 1024 + }, + "NumberOfGroupPolicyObjects": { + "type": "keyword" + }, + "OSEditionID": { + "type": "keyword" + }, + "OSName": { + "type": "keyword" + }, + "OSbuildversion": { + "type": "keyword" + }, + "OSmajorversion": { + "type": "keyword" + }, + "OSminorversion": { + "type": "keyword" + }, + "OSservicepackmajorversion": { + "type": "keyword" + }, + "OSservicepackminorversion": { + "type": "keyword" + }, + "ObjectName": { + "type": "keyword" + }, + "ObjectServer": { + "type": "keyword" + }, + "ObjectType": { + "type": "keyword" + }, + "OemInformation": { + "type": "keyword" + }, + "OldSchemeGuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "OldSd": { + "type": "keyword" + }, + "OldTargetUserName": { + "type": "keyword" + }, + "OldTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "OldUacValue": { + "type": "keyword" + }, + "Operation": { + "type": "keyword" + }, + "OperationType": { + "type": "keyword" + }, + 
"OriginalFileName": { + "type": "keyword", + "ignore_above": 1024 + }, + "OriginalSize": { + "type": "keyword" + }, + "ParentProcessName": { + "type": "keyword" + }, + "PasswordHistoryLength": { + "type": "keyword" + }, + "PasswordLastSet": { + "type": "keyword" + }, + "PasswordProperties": { + "type": "keyword" + }, + "Path": { + "type": "keyword", + "ignore_above": 1024 + }, + "Payload": { + "type": "keyword" + }, + "PerformanceImplementation": { + "type": "keyword", + "ignore_above": 1024 + }, + "PreAuthType": { + "type": "keyword" + }, + "PreviousCreationUtcTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "PreviousTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "PrimaryGroupId": { + "type": "keyword" + }, + "PrivilegeList": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessCreationTime": { + "type": "keyword" + }, + "ProcessID": { + "type": "keyword" + }, + "ProcessId": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessName": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessPath": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessPid": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessingMode": { + "type": "keyword" + }, + "ProcessingTimeInMilliseconds": { + "type": "keyword" + }, + "Product": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProfilePath": { + "type": "keyword" + }, + "Properties": { + "type": "keyword" + }, + "ProviderName": { + "type": "keyword" + }, + "PuaCount": { + "type": "keyword", + "ignore_above": 1024 + }, + "PuaPolicyId": { + "type": "keyword", + "ignore_above": 1024 + }, + "QfeVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "QueryName": { + "type": "keyword" + }, + "ReadOperation": { + "type": "keyword" + }, + "Reason": { + "type": "keyword", + "ignore_above": 1024 + }, + "RemoteEventLogging": { + "type": "keyword" + }, + "ResetEndStart": { + "type": "keyword" + }, + "RestrictedAdminMode": { + "type": "keyword" + }, + "ReturnCode": { + "type": "keyword" + }, + "SamAccountName": { + "type": "keyword" + }, + "SchemaVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "ScriptBlockId": { + "type": "keyword" + }, + "ScriptBlockText": { + "type": "keyword", + "ignore_above": 1024 + }, + "ScriptPath": { + "type": "keyword" + }, + "ServiceName": { + "type": "keyword", + "ignore_above": 1024 + }, + "ServicePrincipalNames": { + "type": "keyword" + }, + "ServiceSid": { + "type": "keyword" + }, + "ServiceType": { + "type": "keyword" + }, + "ServiceVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "ShutdownActionType": { + "type": "keyword", + "ignore_above": 1024 + }, + "ShutdownEventCode": { + "type": "keyword", + "ignore_above": 1024 + }, + "ShutdownReason": { + "type": "keyword", + "ignore_above": 1024 + }, + "SidHistory": { + "type": "keyword" + }, + "Signature": { + "type": "keyword", + "ignore_above": 1024 + }, + "SignatureStatus": { + "type": "keyword", + "ignore_above": 1024 + }, + "Signed": { + "type": "keyword", + "ignore_above": 1024 + }, + "StartOSImageStart": { + "type": "keyword" + }, + "StartTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "StartType": { + "type": "keyword" + }, + "State": { + "type": "keyword", + "ignore_above": 1024 + }, + "Status": { + "type": "keyword", + "ignore_above": 1024 + }, + "StopTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "SubjectDomainName": { + "type": "keyword", + "ignore_above": 1024 + }, + "SubjectLogonId": { + "type": "keyword", + "ignore_above": 1024 + }, + 
"SubjectUserName": { + "type": "keyword", + "ignore_above": 1024 + }, + "SubjectUserSid": { + "type": "keyword", + "ignore_above": 1024 + }, + "SupportInfo1": { + "type": "keyword" + }, + "SupportInfo2": { + "type": "keyword" + }, + "TSId": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetDomainName": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetInfo": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetLinkedLogonId": { + "type": "keyword" + }, + "TargetLogonGuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetLogonId": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetName": { + "type": "keyword" + }, + "TargetOutboundDomainName": { + "type": "keyword" + }, + "TargetOutboundUserName": { + "type": "keyword" + }, + "TargetProcessId": { + "type": "keyword" + }, + "TargetProcessName": { + "type": "keyword" + }, + "TargetServerName": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetSid": { + "type": "keyword" + }, + "TargetUserName": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetUserSid": { + "type": "keyword", + "ignore_above": 1024 + }, + "TaskName": { + "type": "keyword" + }, + "TerminalSessionId": { + "type": "keyword", + "ignore_above": 1024 + }, + "TestSigning": { + "type": "keyword" + }, + "TicketEncryptionType": { + "type": "keyword" + }, + "TicketOptions": { + "type": "keyword" + }, + "TimeSource": { + "type": "keyword" + }, + "TimeSourceRefId": { + "type": "keyword" + }, + "TokenElevationType": { + "type": "keyword", + "ignore_above": 1024 + }, + "TransmittedServices": { + "type": "keyword", + "ignore_above": 1024 + }, + "Type": { + "type": "keyword" + }, + "UpdateReason": { + "type": "keyword" + }, + "UserAccountControl": { + "type": "keyword" + }, + "UserContext": { + "type": "keyword" + }, + "UserName": { + "type": "keyword" + }, + "UserParameters": { + "type": "keyword" + }, + "UserPrincipalName": { + "type": "keyword" + }, + "UserSid": { + "type": "keyword", + "ignore_above": 1024 + }, + "UserWorkstations": { + "type": "keyword" + }, + "Version": { + "type": "keyword", + "ignore_above": 1024 + }, + "VersionLen": { + "type": "keyword" + }, + "VirtualAccount": { + "type": "keyword" + }, + "VsmLaunchType": { + "type": "keyword" + }, + "VsmPolicy": { + "type": "keyword" + }, + "Win32Error": { + "type": "keyword" + }, + "Workstation": { + "type": "keyword", + "ignore_above": 1024 + }, + "WorkstationName": { + "type": "keyword" + }, + "error": { + "type": "keyword" + }, + "evtHiveName": { + "type": "keyword" + }, + "evtHiveNameLength": { + "type": "keyword" + }, + "locationCode": { + "type": "keyword" + }, + "param1": { + "type": "keyword", + "ignore_above": 1024 + }, + "param10": { + "type": "keyword" + }, + "param11": { + "type": "keyword" + }, + "param12": { + "type": "keyword" + }, + "param2": { + "type": "keyword", + "ignore_above": 1024 + }, + "param3": { + "type": "keyword", + "ignore_above": 1024 + }, + "param4": { + "type": "keyword", + "ignore_above": 1024 + }, + "param5": { + "type": "keyword", + "ignore_above": 1024 + }, + "param6": { + "type": "keyword", + "ignore_above": 1024 + }, + "param7": { + "type": "keyword", + "ignore_above": 1024 + }, + "param8": { + "type": "keyword", + "ignore_above": 1024 + }, + "param9": { + "type": "keyword" + }, + "serviceGuid": { + "type": "keyword" + }, + "spn1": { + "type": "keyword" + }, + "spn2": { + "type": "keyword" + }, + "updateGuid": { + "type": "keyword" + }, + "updateRevisionNumber": { + "type": "keyword" + }, + "updateTitle": { + "type": 
"keyword" + } + } + }, + "event_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "keywords": { + "type": "keyword", + "ignore_above": 1024 + }, + "logon": { + "properties": { + "failure": { + "properties": { + "reason": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "sub_status": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "opcode": { + "type": "keyword", + "ignore_above": 1024 + }, + "process": { + "properties": { + "pid": { + "type": "long" + }, + "thread": { + "properties": { + "id": { + "type": "long" + } + } + } + } + }, + "provider_guid": { + "type": "keyword", + "ignore_above": 1024 + }, + "provider_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "record_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "related_activity_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "task": { + "type": "keyword", + "ignore_above": 1024 + }, + "time_created": { + "type": "date" + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "identifier": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "user_data": { + "properties": { + "Channel": { + "type": "keyword" + }, + "ClientProcessId": { + "type": "keyword" + }, + "ClientProcessStartKey": { + "type": "keyword" + }, + "SubjectDomainName": { + "type": "keyword" + }, + "SubjectLogonId": { + "type": "keyword" + }, + "SubjectUserName": { + "type": "keyword" + }, + "SubjectUserSid": { + "type": "keyword" + }, + "binaryData": { + "type": "keyword" + }, + "binaryDataSize": { + "type": "keyword" + }, + "param1": { + "type": "keyword" + }, + "param2": { + "type": "keyword" + }, + "xml_name": { + "type": "keyword" + } + } + }, + "version": { + "type": "long" + } + } + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + 
"ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + } + } +} \ No newline at end of file diff --git a/testing/tests/api_tests/winlogbeat/test_data/mapping_response_actual.json b/testing/tests/api_tests/winlogbeat/test_data/mapping_response_actual.json new file mode 100644 index 00000000..3ce0b3a3 --- /dev/null +++ b/testing/tests/api_tests/winlogbeat/test_data/mapping_response_actual.json @@ -0,0 +1,7376 @@ +{ + "winlogbeat-000001": { + "mappings": { + "_meta": { + "beat": "winlogbeat", + "version": "7.17.6" + }, + "dynamic_templates": [ + { + "labels": { + "path_match": "labels.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "container.labels": { + "path_match": "container.labels.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "fields": { + "path_match": "fields.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "docker.container.labels": { + "path_match": "docker.container.labels.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "kubernetes.labels.*": { + "path_match": "kubernetes.labels.*", + "mapping": { + "type": "keyword" + } + } + }, + { + "kubernetes.annotations.*": { + "path_match": "kubernetes.annotations.*", + "mapping": { + "type": "keyword" + } + } + }, + { + "kubernetes.selectors.*": { + "path_match": "kubernetes.selectors.*", + "mapping": { + "type": "keyword" + } + } + }, + { + "winlog.event_data": { + "path_match": "winlog.event_data.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "winlog.user_data": { + "path_match": "winlog.user_data.*", + "match_mapping_type": "string", + "mapping": { + "type": "keyword" + } + } + }, + { + "strings_as_keyword": { + "match_mapping_type": "string", + "mapping": { + "ignore_above": 1024, + "type": "keyword" + } + } + } + ], + "date_detection": false, + "properties": { + "@timestamp": { + "type": "date" + }, + "@version": { + "type": "keyword", + "ignore_above": 1024 + }, + "agent": { + "properties": { + "build": { + "properties": { + "original": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ephemeral_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "hostname": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "client": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + 
"ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "nat": { + "properties": { + "ip": { + "type": "ip" + }, + "port": { + "type": "long" + } + } + }, + "packets": { + "type": "long" + }, + "port": { + "type": "long" + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "cloud": { + "properties": { + "account": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "availability_zone": { + "type": "keyword", + "ignore_above": 1024 + }, + "image": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "instance": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "machine": { + "properties": { + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "project": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "provider": { + "type": "keyword", + "ignore_above": 1024 + }, + "region": { + "type": "keyword", + "ignore_above": 1024 + }, + "service": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + 
} + } + }, + "container": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "image": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "tag": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "labels": { + "type": "object" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "runtime": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "data_stream": { + "properties": { + "dataset": { + "type": "constant_keyword" + }, + "namespace": { + "type": "constant_keyword" + }, + "type": { + "type": "constant_keyword" + } + } + }, + "destination": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "nat": { + "properties": { + "ip": { + "type": "ip" + }, + "port": { + "type": "long" + } + } + }, + "packets": { + "type": "long" + }, + "port": { + "type": "long" + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "dll": { + "properties": { + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + 
"ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "dns": { + "properties": { + "answers": { + "properties": { + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "ttl": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "header_flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "op_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "question": { + "properties": { + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "resolved_ip": { + "type": "ip" + }, + "response_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "docker": { + "properties": { + "container": { + "properties": { + "labels": { + "type": "object" + } + } + } + } + }, + "ecs": { + "properties": { + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": 
"keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "error": { + "properties": { + "code": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "message": { + "type": "match_only_text" + }, + "stack_trace": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "event": { + "properties": { + "action": { + "type": "keyword", + "ignore_above": 1024 + }, + "agent_id_status": { + "type": "keyword", + "ignore_above": 1024 + }, + "category": { + "type": "keyword", + "ignore_above": 1024 + }, + "code": { + "type": "keyword", + "ignore_above": 1024 + }, + "created": { + "type": "date" + }, + "dataset": { + "type": "keyword", + "ignore_above": 1024 + }, + "duration": { + "type": "long" + }, + "end": { + "type": "date" + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "ingested": { + "type": "date" + }, + "kind": { + "type": "keyword", + "ignore_above": 1024 + }, + "module": { + "type": "keyword", + "ignore_above": 1024 + }, + "original": { + "type": "keyword", + "ignore_above": 1024 + }, + "outcome": { + "type": "keyword", + "ignore_above": 1024 + }, + "provider": { + "type": "keyword", + "ignore_above": 1024 + }, + "reason": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "risk_score": { + "type": "float" + }, + "risk_score_norm": { + "type": "float" + }, + "sequence": { + "type": "long" + }, + "severity": { + "type": "long" + }, + "start": { + "type": "date" + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "url": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "fields": { + "type": "object" + }, + "file": { + "properties": { + "accessed": { + "type": "date" + }, + "attributes": { + "type": "keyword", + "ignore_above": 1024 + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "created": { + "type": "date" + }, + "ctime": { + "type": "date" + }, + "device": { + "type": "keyword", + "ignore_above": 1024 + }, + "directory": { + "type": "keyword", + "ignore_above": 1024 + }, + "drive_letter": { + "type": "keyword", + "ignore_above": 1 + }, + "elf": 
{ + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fork_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "gid": { + "type": "keyword", + "ignore_above": 1024 + }, + "group": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "inode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "mode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mtime": { + "type": "date" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "owner": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "size": { + "type": "long" + }, + "target_path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "uid": { + 
"type": "keyword", + "ignore_above": 1024 + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "host": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "containerized": { + "type": "boolean" + }, + "cpu": { + "properties": { + "usage": { + "type": "scaled_float", + "scaling_factor": 1000.0 + } + } + }, + "disk": { + "properties": { + "read": { + "properties": { + "bytes": { + "type": "long" + } + } + }, + "write": { + "properties": { + "bytes": { + "type": 
"long" + } + } + } + } + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hostname": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "network": { + "properties": { + "egress": { + "properties": { + "bytes": { + "type": "long" + }, + "packets": { + "type": "long" + } + } + }, + "ingress": { + "properties": { + "bytes": { + "type": "long" + }, + "packets": { + "type": "long" + } + } + } + } + }, + "os": { + "properties": { + "build": { + "type": "keyword", + "ignore_above": 1024 + }, + "codename": { + "type": "keyword", + "ignore_above": 1024 + }, + "family": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "kernel": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "platform": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "uptime": { + "type": "long" + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "http": { + "properties": { + "request": { + "properties": { + "body": { + "properties": { + "bytes": { + "type": "long" + }, + "content": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "method": { + "type": "keyword", + "ignore_above": 1024 + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "referrer": { + 
"type": "keyword", + "ignore_above": 1024 + } + } + }, + "response": { + "properties": { + "body": { + "properties": { + "bytes": { + "type": "long" + }, + "content": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "status_code": { + "type": "long" + } + } + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "interface": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "jolokia": { + "properties": { + "agent": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "secured": { + "type": "boolean" + }, + "server": { + "properties": { + "product": { + "type": "keyword", + "ignore_above": 1024 + }, + "vendor": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "url": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "kubernetes": { + "properties": { + "annotations": { + "properties": { + "*": { + "type": "object" + } + } + }, + "container": { + "properties": { + "image": { + "type": "alias", + "path": "container.image.name" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "deployment": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "labels": { + "properties": { + "*": { + "type": "object" + } + } + }, + "namespace": { + "type": "keyword", + "ignore_above": 1024 + }, + "node": { + "properties": { + "hostname": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "pod": { + "properties": { + "ip": { + "type": "ip" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "uid": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "replicaset": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "selectors": { + "properties": { + "*": { + "type": "object" + } + } + }, + "statefulset": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "labels": { + "type": "object" + }, + "log": { + "properties": { + "file": { + "properties": { + "path": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "level": { + "type": "keyword", + "ignore_above": 1024 + }, + "logger": { + "type": "keyword", + "ignore_above": 1024 + }, + "origin": { + "properties": { + "file": { + "properties": { + "line": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "function": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "original": { + "type": "keyword", + "index": false, + "doc_values": false, + "ignore_above": 1024 + }, + "syslog": { + "properties": { + "facility": { + "properties": { + "code": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "priority": { + "type": "long" + }, + "severity": { + "properties": { + "code": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + } + } + }, + "message": { + "type": "match_only_text" + }, + "network": { + "properties": { + "application": { + "type": 
"keyword", + "ignore_above": 1024 + }, + "bytes": { + "type": "long" + }, + "community_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "direction": { + "type": "keyword", + "ignore_above": 1024 + }, + "forwarded_ip": { + "type": "ip" + }, + "iana_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "inner": { + "properties": { + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "packets": { + "type": "long" + }, + "protocol": { + "type": "keyword", + "ignore_above": 1024 + }, + "transport": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "observer": { + "properties": { + "egress": { + "properties": { + "interface": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "zone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hostname": { + "type": "keyword", + "ignore_above": 1024 + }, + "ingress": { + "properties": { + "interface": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "zone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "os": { + "properties": { + "family": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "kernel": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "platform": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + 
"product": { + "type": "keyword", + "ignore_above": 1024 + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "vendor": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "orchestrator": { + "properties": { + "api_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "cluster": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "url": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "namespace": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "resource": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "organization": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "os": { + "properties": { + "family": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "kernel": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "platform": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "package": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "build_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "checksum": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "install_scope": { + "type": "keyword", + "ignore_above": 1024 + }, + "installed": { + "type": "date" + }, + "license": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "powershell": { + "properties": { + "command": { + "properties": { + "invocation_details": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "related_command": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "text", + "norms": false 
+ } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "text", + "norms": false + } + } + }, + "connected_user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "engine": { + "properties": { + "new_state": { + "type": "keyword", + "ignore_above": 1024 + }, + "previous_state": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "file": { + "properties": { + "script_block_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "script_block_text": { + "type": "text", + "norms": false + } + } + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "pipeline_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "process": { + "properties": { + "executable_version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "provider": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "new_state": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "runspace_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "sequence": { + "type": "long" + }, + "total": { + "type": "long" + } + } + }, + "process": { + "properties": { + "args": { + "type": "keyword", + "ignore_above": 1024 + }, + "args_count": { + "type": "long" + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "command_line": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + 
"type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "end": { + "type": "date" + }, + "entity_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "executable": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "exit_code": { + "type": "long" + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "parent": { + "properties": { + "args": { + "type": "keyword", + "ignore_above": 1024 + }, + "args_count": { + "type": "long" + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "command_line": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + 
"shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "end": { + "type": "date" + }, + "entity_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "executable": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "exit_code": { + "type": "long" + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "pgid": { + "type": "long" + }, + "pid": { + "type": "long" + }, + "ppid": { + "type": "long" + }, + "start": { + "type": "date" + }, + "thread": { + "properties": { + "id": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "title": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "uptime": { + "type": "long" + }, + "working_directory": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "pgid": { + "type": "long" + }, + "pid": { + "type": "long" + }, + "ppid": { + "type": "long" + }, + "start": { + "type": "date" + }, + "thread": { + "properties": { + "id": { + "type": "long" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "title": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "uptime": { + "type": "long" + }, + "working_directory": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + }, + "registry": { + "properties": { + "data": { + "properties": { + "bytes": { + "type": "keyword", + "ignore_above": 1024 + }, + "strings": { + "type": "wildcard", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hive": { + "type": "keyword", + "ignore_above": 1024 + }, + "key": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "keyword", + "ignore_above": 1024 + } 
+ } + }, + "related": { + "properties": { + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "hosts": { + "type": "keyword", + "ignore_above": 1024 + }, + "ip": { + "type": "ip" + }, + "user": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "rule": { + "properties": { + "author": { + "type": "keyword", + "ignore_above": 1024 + }, + "category": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "license": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "ruleset": { + "type": "keyword", + "ignore_above": 1024 + }, + "uuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "server": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "nat": { + "properties": { + "ip": { + "type": "ip" + }, + "port": { + "type": "long" + } + } + }, + "packets": { + "type": "long" + }, + "port": { + "type": "long" + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "service": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "environment": 
{ + "type": "keyword", + "ignore_above": 1024 + }, + "ephemeral_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "node": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "state": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "source": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + }, + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "bytes": { + "type": "long" + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "mac": { + "type": "keyword", + "ignore_above": 1024 + }, + "nat": { + "properties": { + "ip": { + "type": "ip" + }, + "port": { + "type": "long" + } + } + }, + "packets": { + "type": "long" + }, + "port": { + "type": "long" + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "span": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "sysmon": { + "properties": { + "dns": { + "properties": { + "status": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "file": { + "properties": { + "archived": { + "type": "boolean" + }, + "is_executable": { + "type": "boolean" + } + } + } + } + }, + "tags": { + "type": "keyword", + "ignore_above": 1024 + }, + "threat": { + "properties": { + "enrichments": { + "type": "nested", + "properties": { + 
"indicator": { + "properties": { + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + } + } + } + } + }, + "confidence": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "file": { + "properties": { + "accessed": { + "type": "date" + }, + "attributes": { + "type": "keyword", + "ignore_above": 1024 + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "created": { + "type": "date" + }, + "ctime": { + "type": "date" + }, + "device": { + "type": "keyword", + "ignore_above": 1024 + }, + "directory": { + "type": "keyword", + "ignore_above": 1024 + }, + "drive_letter": { + "type": "keyword", + "ignore_above": 1 + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fork_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "gid": { + "type": "keyword", + "ignore_above": 1024 + }, + "group": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + 
"type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "inode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "mode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mtime": { + "type": "date" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "owner": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "size": { + "type": "long" + }, + "target_path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "uid": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "first_seen": { + "type": "date" + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "last_seen": { + "type": "date" + }, + "marking": { + "properties": { + "tlp": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "modified_at": { + "type": "date" + }, + "port": { + "type": "long" + }, + "provider": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "registry": { + "properties": { + "data": { + "properties": { + "bytes": { + "type": "keyword", + "ignore_above": 1024 + }, + "strings": { + "type": "wildcard", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hive": { + "type": "keyword", + "ignore_above": 1024 + }, + "key": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "scanner_stats": { + "type": "long" + }, + "sightings": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "url": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fragment": { + "type": "keyword", + 
"ignore_above": 1024 + }, + "full": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "original": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "password": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "wildcard", + "ignore_above": 1024 + }, + "port": { + "type": "long" + }, + "query": { + "type": "keyword", + "ignore_above": 1024 + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "scheme": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "username": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "matched": { + "properties": { + "atomic": { + "type": "keyword", + "ignore_above": 1024 + }, + "field": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "index": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "framework": { + "type": "keyword", + "ignore_above": 1024 + }, + "group": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "indicator": { + "properties": { + "as": { + "properties": { + "number": { + "type": "long" + }, + "organization": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": 
"match_only_text" + } + } + } + } + } + } + }, + "confidence": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "properties": { + "address": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "file": { + "properties": { + "accessed": { + "type": "date" + }, + "attributes": { + "type": "keyword", + "ignore_above": 1024 + }, + "code_signature": { + "properties": { + "digest_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "exists": { + "type": "boolean" + }, + "signing_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "team_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "timestamp": { + "type": "date" + }, + "trusted": { + "type": "boolean" + }, + "valid": { + "type": "boolean" + } + } + }, + "created": { + "type": "date" + }, + "ctime": { + "type": "date" + }, + "device": { + "type": "keyword", + "ignore_above": 1024 + }, + "directory": { + "type": "keyword", + "ignore_above": 1024 + }, + "drive_letter": { + "type": "keyword", + "ignore_above": 1 + }, + "elf": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "byte_order": { + "type": "keyword", + "ignore_above": 1024 + }, + "cpu_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "creation_date": { + "type": "date" + }, + "exports": { + "type": "flattened" + }, + "header": { + "properties": { + "abi_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "class": { + "type": "keyword", + "ignore_above": 1024 + }, + "data": { + "type": "keyword", + "ignore_above": 1024 + }, + "entrypoint": { + "type": "long" + }, + "object_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "os_abi": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "imports": { + "type": "flattened" + }, + "sections": { + "type": "nested", + "properties": { + "chi2": { + "type": "long" + }, + "entropy": { + "type": "long" + }, + "flags": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_offset": { + "type": "keyword", + "ignore_above": 1024 + }, + "physical_size": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "virtual_address": { + "type": "long" + }, + "virtual_size": { + "type": "long" + } + } + }, + "segments": { + "type": "nested", + "properties": { + "sections": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "shared_libraries": { + "type": "keyword", + "ignore_above": 1024 + }, + "telfhash": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fork_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "gid": { + "type": "keyword", + "ignore_above": 1024 + }, + "group": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha512": { + "type": "keyword", + "ignore_above": 1024 + }, + "ssdeep": { + "type": "keyword", + "ignore_above": 
1024 + } + } + }, + "inode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mime_type": { + "type": "keyword", + "ignore_above": 1024 + }, + "mode": { + "type": "keyword", + "ignore_above": 1024 + }, + "mtime": { + "type": "date" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "owner": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "pe": { + "properties": { + "architecture": { + "type": "keyword", + "ignore_above": 1024 + }, + "company": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024 + }, + "file_version": { + "type": "keyword", + "ignore_above": 1024 + }, + "imphash": { + "type": "keyword", + "ignore_above": 1024 + }, + "original_file_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "product": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "size": { + "type": "long" + }, + "target_path": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "uid": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "first_seen": { + "type": "date" + }, + "geo": { + "properties": { + "city_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "continent_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "country_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "location": { + "type": "geo_point" + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "postal_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_iso_code": { + "type": "keyword", + "ignore_above": 1024 + }, + "region_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "timezone": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "ip": { + "type": "ip" + }, + "last_seen": { + "type": "date" + }, + "marking": { + "properties": { + "tlp": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "modified_at": { + "type": "date" + }, + "port": { + "type": "long" + }, + "provider": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "registry": { + "properties": { + "data": { + "properties": { + "bytes": { + "type": "keyword", + "ignore_above": 1024 + }, + "strings": { + "type": "wildcard", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hive": { + "type": "keyword", + "ignore_above": 1024 + }, + "key": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "keyword", + "ignore_above": 1024 + }, + "value": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "scanner_stats": { + "type": "long" + }, + "sightings": { + "type": "long" + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "url": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fragment": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "original": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + 
"text": { + "type": "match_only_text" + } + } + }, + "password": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "wildcard", + "ignore_above": 1024 + }, + "port": { + "type": "long" + }, + "query": { + "type": "keyword", + "ignore_above": 1024 + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "scheme": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "username": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "software": { + "properties": { + "alias": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "platforms": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "tactic": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "technique": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "subtechnique": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "reference": { + 
"type": "keyword", + "ignore_above": 1024 + } + } + } + } + } + } + }, + "timeseries": { + "properties": { + "instance": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "tls": { + "properties": { + "cipher": { + "type": "keyword", + "ignore_above": 1024 + }, + "client": { + "properties": { + "certificate": { + "type": "keyword", + "ignore_above": 1024 + }, + "certificate_chain": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "issuer": { + "type": "keyword", + "ignore_above": 1024 + }, + "ja3": { + "type": "keyword", + "ignore_above": 1024 + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "server_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "type": "keyword", + "ignore_above": 1024 + }, + "supported_ciphers": { + "type": "keyword", + "ignore_above": 1024 + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "established": { + "type": "boolean" + }, + "next_protocol": { + "type": "keyword", + "ignore_above": 1024 + }, + "resumed": { + "type": "boolean" + }, + "server": { + "properties": { + "certificate": { + "type": "keyword", + "ignore_above": 1024 + }, + "certificate_chain": { + "type": "keyword", + "ignore_above": 1024 + }, + "hash": { + "properties": { + "md5": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha1": { + "type": "keyword", + "ignore_above": 1024 + }, + "sha256": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "issuer": { + "type": "keyword", + "ignore_above": 1024 + }, + "ja3s": { + "type": "keyword", + 
"ignore_above": 1024 + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "subject": { + "type": "keyword", + "ignore_above": 1024 + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + }, + "version_protocol": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "trace": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "transaction": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "url": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "extension": { + "type": "keyword", + "ignore_above": 1024 + }, + "fragment": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "original": { + "type": "wildcard", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "password": { + "type": "keyword", + "ignore_above": 1024 + }, + "path": { + "type": "wildcard", + "ignore_above": 1024 + }, + "port": { + "type": "long" + }, + "query": { + "type": "keyword", + "ignore_above": 1024 + }, + "registered_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "scheme": { + "type": "keyword", + "ignore_above": 1024 + }, + "subdomain": { + "type": "keyword", + "ignore_above": 1024 + }, + "top_level_domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "username": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "user": { + "properties": { + "changes": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", 
+ "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "effective": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + }, + "target": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "email": { + "type": "keyword", + "ignore_above": 1024 + }, + "full_name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "group": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "hash": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "roles": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + }, + "user_agent": { + "properties": { + "device": { + "properties": { + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "original": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "os": { + "properties": { + "family": { + "type": "keyword", + "ignore_above": 1024 + }, + "full": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + 
"text": { + "type": "match_only_text" + } + } + }, + "kernel": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "platform": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "vlan": { + "properties": { + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "vulnerability": { + "properties": { + "category": { + "type": "keyword", + "ignore_above": 1024 + }, + "classification": { + "type": "keyword", + "ignore_above": 1024 + }, + "description": { + "type": "keyword", + "ignore_above": 1024, + "fields": { + "text": { + "type": "match_only_text" + } + } + }, + "enumeration": { + "type": "keyword", + "ignore_above": 1024 + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "reference": { + "type": "keyword", + "ignore_above": 1024 + }, + "report_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "scanner": { + "properties": { + "vendor": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "score": { + "properties": { + "base": { + "type": "float" + }, + "environmental": { + "type": "float" + }, + "temporal": { + "type": "float" + }, + "version": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "severity": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "winlog": { + "properties": { + "activity_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "api": { + "type": "keyword", + "ignore_above": 1024 + }, + "channel": { + "type": "keyword", + "ignore_above": 1024 + }, + "computer_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "event_data": { + "properties": { + "AccessGranted": { + "type": "keyword" + }, + "AccessList": { + "type": "keyword" + }, + "AccessMask": { + "type": "keyword" + }, + "AccessRemoved": { + "type": "keyword" + }, + "AccountExpires": { + "type": "keyword" + }, + "AccountName": { + "type": "keyword" + }, + "AdditionalInfo": { + "type": "keyword" + }, + "AdditionalInfo2": { + "type": "keyword" + }, + "Address": { + "type": "keyword" + }, + "AddressLength": { + "type": "keyword" + }, + "AdvancedOptions": { + "type": "keyword" + }, + "AlgorithmName": { + "type": "keyword" + }, + "AllowedToDelegateTo": { + "type": "keyword" + }, + "AuthenticationPackageName": { + "type": "keyword", + "ignore_above": 1024 + }, + "Binary": { + "type": "keyword", + "ignore_above": 1024 + }, + "BitlockerUserInputTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "BootMenuPolicy": { + "type": "keyword" + }, + "BootMode": { + "type": "keyword", + "ignore_above": 1024 + }, + "BootStatusPolicy": { + "type": "keyword" + }, + "BootType": { + "type": "keyword", + "ignore_above": 1024 + }, + "BuildVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "CallerProcessId": { + "type": "keyword" + }, + "CallerProcessName": { + "type": "keyword" + }, + "ClientCreationTime": { + "type": "keyword" + }, + "ClientProcessId": { + "type": "keyword" + }, + "Company": { + "type": "keyword", + "ignore_above": 1024 + }, + "ComputerAccountChange": { + "type": "keyword" + }, + "Config": { + "type": "keyword" + }, + "ConfigAccessPolicy": { + "type": "keyword" + }, + "ContextInfo": { + "type": "keyword" + }, + "CorruptionActionState": { + 
"type": "keyword", + "ignore_above": 1024 + }, + "CountNew": { + "type": "keyword" + }, + "CountOfCredentialsReturned": { + "type": "keyword" + }, + "CountOld": { + "type": "keyword" + }, + "CreationUtcTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "CurrentStratumNumber": { + "type": "keyword" + }, + "DCName": { + "type": "keyword" + }, + "Default SD String:": { + "type": "keyword" + }, + "Description": { + "type": "keyword", + "ignore_above": 1024 + }, + "Detail": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceName": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceNameLength": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceVersionMajor": { + "type": "keyword", + "ignore_above": 1024 + }, + "DeviceVersionMinor": { + "type": "keyword", + "ignore_above": 1024 + }, + "DirtyPages": { + "type": "keyword" + }, + "DisableIntegrityChecks": { + "type": "keyword" + }, + "DisplayName": { + "type": "keyword" + }, + "DnsHostName": { + "type": "keyword" + }, + "DomainBehaviorVersion": { + "type": "keyword" + }, + "DomainName": { + "type": "keyword" + }, + "DomainPolicyChanged": { + "type": "keyword" + }, + "DomainSid": { + "type": "keyword" + }, + "DriveName": { + "type": "keyword", + "ignore_above": 1024 + }, + "DriverName": { + "type": "keyword", + "ignore_above": 1024 + }, + "DriverNameLength": { + "type": "keyword", + "ignore_above": 1024 + }, + "Dummy": { + "type": "keyword" + }, + "DwordVal": { + "type": "keyword", + "ignore_above": 1024 + }, + "ElevatedToken": { + "type": "keyword" + }, + "EnableDisableReason": { + "type": "keyword" + }, + "EnabledNew": { + "type": "keyword" + }, + "EntryCount": { + "type": "keyword", + "ignore_above": 1024 + }, + "ErrorMessage": { + "type": "keyword" + }, + "ErrorString": { + "type": "keyword" + }, + "ExitBootServicesEntry": { + "type": "keyword" + }, + "ExitBootServicesExit": { + "type": "keyword" + }, + "ExtraInfo": { + "type": "keyword", + "ignore_above": 1024 + }, + "FailureName": { + "type": "keyword", + "ignore_above": 1024 + }, + "FailureNameLength": { + "type": "keyword", + "ignore_above": 1024 + }, + "FileVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "FinalStatus": { + "type": "keyword", + "ignore_above": 1024 + }, + "FlightSigning": { + "type": "keyword" + }, + "ForceLogoff": { + "type": "keyword" + }, + "Group": { + "type": "keyword", + "ignore_above": 1024 + }, + "GroupName": { + "type": "keyword" + }, + "HandleId": { + "type": "keyword" + }, + "HiveName": { + "type": "keyword" + }, + "HiveNameLength": { + "type": "keyword" + }, + "HomeDirectory": { + "type": "keyword" + }, + "HomePath": { + "type": "keyword" + }, + "HypervisorDebug": { + "type": "keyword" + }, + "HypervisorLaunchType": { + "type": "keyword" + }, + "HypervisorLoadOptions": { + "type": "keyword" + }, + "IdleImplementation": { + "type": "keyword", + "ignore_above": 1024 + }, + "IdleStateCount": { + "type": "keyword", + "ignore_above": 1024 + }, + "ImagePath": { + "type": "keyword" + }, + "ImpersonationLevel": { + "type": "keyword", + "ignore_above": 1024 + }, + "IntegrityLevel": { + "type": "keyword", + "ignore_above": 1024 + }, + "IpAddress": { + "type": "keyword", + "ignore_above": 1024 + }, + "IpPort": { + "type": "keyword", + "ignore_above": 1024 + }, + "IsTestConfig": { + "type": "keyword" + }, + "KernelDebug": { + "type": "keyword" + }, + "KeyFilePath": { + "type": "keyword" + }, + "KeyLength": { + "type": "keyword", + "ignore_above": 1024 + }, + 
"KeyName": { + "type": "keyword" + }, + "KeyType": { + "type": "keyword" + }, + "KeysUpdated": { + "type": "keyword" + }, + "LastBootGood": { + "type": "keyword", + "ignore_above": 1024 + }, + "LastBootId": { + "type": "keyword" + }, + "LastShutdownGood": { + "type": "keyword", + "ignore_above": 1024 + }, + "Library": { + "type": "keyword" + }, + "LmPackageName": { + "type": "keyword", + "ignore_above": 1024 + }, + "LoadOSImageStart": { + "type": "keyword" + }, + "LoadOptions": { + "type": "keyword" + }, + "LockoutDuration": { + "type": "keyword" + }, + "LockoutObservationWindow": { + "type": "keyword" + }, + "LockoutThreshold": { + "type": "keyword" + }, + "LogonGuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "LogonHours": { + "type": "keyword" + }, + "LogonId": { + "type": "keyword", + "ignore_above": 1024 + }, + "LogonProcessName": { + "type": "keyword", + "ignore_above": 1024 + }, + "LogonType": { + "type": "keyword", + "ignore_above": 1024 + }, + "MachineAccountQuota": { + "type": "keyword" + }, + "MajorVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "MandatoryLabel": { + "type": "keyword" + }, + "MaxPasswordAge": { + "type": "keyword" + }, + "MaximumPerformancePercent": { + "type": "keyword", + "ignore_above": 1024 + }, + "MemberName": { + "type": "keyword", + "ignore_above": 1024 + }, + "MemberSid": { + "type": "keyword", + "ignore_above": 1024 + }, + "MessageNumber": { + "type": "keyword" + }, + "MessageTotal": { + "type": "keyword" + }, + "MinPasswordAge": { + "type": "keyword" + }, + "MinPasswordLength": { + "type": "keyword" + }, + "MinimumPasswordLength": { + "type": "keyword" + }, + "MinimumPasswordLengthAudit": { + "type": "keyword" + }, + "MinimumPerformancePercent": { + "type": "keyword", + "ignore_above": 1024 + }, + "MinimumThrottlePercent": { + "type": "keyword", + "ignore_above": 1024 + }, + "MiniportName": { + "type": "keyword" + }, + "MiniportNameLen": { + "type": "keyword" + }, + "MinorVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "MixedDomainMode": { + "type": "keyword" + }, + "NewProcessId": { + "type": "keyword", + "ignore_above": 1024 + }, + "NewProcessName": { + "type": "keyword", + "ignore_above": 1024 + }, + "NewSchemeGuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "NewSd": { + "type": "keyword" + }, + "NewSize": { + "type": "keyword" + }, + "NewTargetUserName": { + "type": "keyword" + }, + "NewTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "NewUacValue": { + "type": "keyword" + }, + "NominalFrequency": { + "type": "keyword", + "ignore_above": 1024 + }, + "Number": { + "type": "keyword", + "ignore_above": 1024 + }, + "NumberOfGroupPolicyObjects": { + "type": "keyword" + }, + "OSEditionID": { + "type": "keyword" + }, + "OSName": { + "type": "keyword" + }, + "OSbuildversion": { + "type": "keyword" + }, + "OSmajorversion": { + "type": "keyword" + }, + "OSminorversion": { + "type": "keyword" + }, + "OSservicepackmajorversion": { + "type": "keyword" + }, + "OSservicepackminorversion": { + "type": "keyword" + }, + "ObjectName": { + "type": "keyword" + }, + "ObjectServer": { + "type": "keyword" + }, + "ObjectType": { + "type": "keyword" + }, + "OemInformation": { + "type": "keyword" + }, + "OldSchemeGuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "OldSd": { + "type": "keyword" + }, + "OldTargetUserName": { + "type": "keyword" + }, + "OldTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "OldUacValue": { + "type": "keyword" + }, + "Operation": { + "type": "keyword" + }, + 
"OperationType": { + "type": "keyword" + }, + "OriginalFileName": { + "type": "keyword", + "ignore_above": 1024 + }, + "OriginalSize": { + "type": "keyword" + }, + "ParentProcessName": { + "type": "keyword" + }, + "PasswordHistoryLength": { + "type": "keyword" + }, + "PasswordLastSet": { + "type": "keyword" + }, + "PasswordProperties": { + "type": "keyword" + }, + "Path": { + "type": "keyword", + "ignore_above": 1024 + }, + "Payload": { + "type": "keyword" + }, + "PerformanceImplementation": { + "type": "keyword", + "ignore_above": 1024 + }, + "PreAuthType": { + "type": "keyword" + }, + "PreviousCreationUtcTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "PreviousTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "PrimaryGroupId": { + "type": "keyword" + }, + "PrivilegeList": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessCreationTime": { + "type": "keyword" + }, + "ProcessID": { + "type": "keyword" + }, + "ProcessId": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessName": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessPath": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessPid": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProcessingMode": { + "type": "keyword" + }, + "ProcessingTimeInMilliseconds": { + "type": "keyword" + }, + "Product": { + "type": "keyword", + "ignore_above": 1024 + }, + "ProfilePath": { + "type": "keyword" + }, + "Properties": { + "type": "keyword" + }, + "ProviderName": { + "type": "keyword" + }, + "PuaCount": { + "type": "keyword", + "ignore_above": 1024 + }, + "PuaPolicyId": { + "type": "keyword", + "ignore_above": 1024 + }, + "QfeVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "QueryName": { + "type": "keyword" + }, + "ReadOperation": { + "type": "keyword" + }, + "Reason": { + "type": "keyword", + "ignore_above": 1024 + }, + "RemoteEventLogging": { + "type": "keyword" + }, + "ResetEndStart": { + "type": "keyword" + }, + "RestrictedAdminMode": { + "type": "keyword" + }, + "ReturnCode": { + "type": "keyword" + }, + "SamAccountName": { + "type": "keyword" + }, + "SchemaVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "ScriptBlockId": { + "type": "keyword" + }, + "ScriptBlockText": { + "type": "keyword", + "ignore_above": 1024 + }, + "ScriptPath": { + "type": "keyword" + }, + "ServiceName": { + "type": "keyword", + "ignore_above": 1024 + }, + "ServicePrincipalNames": { + "type": "keyword" + }, + "ServiceSid": { + "type": "keyword" + }, + "ServiceType": { + "type": "keyword" + }, + "ServiceVersion": { + "type": "keyword", + "ignore_above": 1024 + }, + "ShutdownActionType": { + "type": "keyword", + "ignore_above": 1024 + }, + "ShutdownEventCode": { + "type": "keyword", + "ignore_above": 1024 + }, + "ShutdownReason": { + "type": "keyword", + "ignore_above": 1024 + }, + "SidHistory": { + "type": "keyword" + }, + "Signature": { + "type": "keyword", + "ignore_above": 1024 + }, + "SignatureStatus": { + "type": "keyword", + "ignore_above": 1024 + }, + "Signed": { + "type": "keyword", + "ignore_above": 1024 + }, + "StartOSImageStart": { + "type": "keyword" + }, + "StartTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "StartType": { + "type": "keyword" + }, + "State": { + "type": "keyword", + "ignore_above": 1024 + }, + "Status": { + "type": "keyword", + "ignore_above": 1024 + }, + "StopTime": { + "type": "keyword", + "ignore_above": 1024 + }, + "SubjectDomainName": { + "type": "keyword", + "ignore_above": 1024 + }, + "SubjectLogonId": { + "type": "keyword", 
+ "ignore_above": 1024 + }, + "SubjectUserName": { + "type": "keyword", + "ignore_above": 1024 + }, + "SubjectUserSid": { + "type": "keyword", + "ignore_above": 1024 + }, + "SupportInfo1": { + "type": "keyword" + }, + "SupportInfo2": { + "type": "keyword" + }, + "TSId": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetDomainName": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetInfo": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetLinkedLogonId": { + "type": "keyword" + }, + "TargetLogonGuid": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetLogonId": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetName": { + "type": "keyword" + }, + "TargetOutboundDomainName": { + "type": "keyword" + }, + "TargetOutboundUserName": { + "type": "keyword" + }, + "TargetProcessId": { + "type": "keyword" + }, + "TargetProcessName": { + "type": "keyword" + }, + "TargetServerName": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetSid": { + "type": "keyword" + }, + "TargetUserName": { + "type": "keyword", + "ignore_above": 1024 + }, + "TargetUserSid": { + "type": "keyword", + "ignore_above": 1024 + }, + "TaskName": { + "type": "keyword" + }, + "TerminalSessionId": { + "type": "keyword", + "ignore_above": 1024 + }, + "TestSigning": { + "type": "keyword" + }, + "TicketEncryptionType": { + "type": "keyword" + }, + "TicketOptions": { + "type": "keyword" + }, + "TimeSource": { + "type": "keyword" + }, + "TimeSourceRefId": { + "type": "keyword" + }, + "TokenElevationType": { + "type": "keyword", + "ignore_above": 1024 + }, + "TransmittedServices": { + "type": "keyword", + "ignore_above": 1024 + }, + "Type": { + "type": "keyword" + }, + "UpdateReason": { + "type": "keyword" + }, + "UserAccountControl": { + "type": "keyword" + }, + "UserContext": { + "type": "keyword" + }, + "UserParameters": { + "type": "keyword" + }, + "UserPrincipalName": { + "type": "keyword" + }, + "UserSid": { + "type": "keyword", + "ignore_above": 1024 + }, + "UserWorkstations": { + "type": "keyword" + }, + "Version": { + "type": "keyword", + "ignore_above": 1024 + }, + "VersionLen": { + "type": "keyword" + }, + "VirtualAccount": { + "type": "keyword" + }, + "VsmLaunchType": { + "type": "keyword" + }, + "VsmPolicy": { + "type": "keyword" + }, + "Win32Error": { + "type": "keyword" + }, + "Workstation": { + "type": "keyword", + "ignore_above": 1024 + }, + "WorkstationName": { + "type": "keyword" + }, + "error": { + "type": "keyword" + }, + "evtHiveName": { + "type": "keyword" + }, + "evtHiveNameLength": { + "type": "keyword" + }, + "locationCode": { + "type": "keyword" + }, + "param1": { + "type": "keyword", + "ignore_above": 1024 + }, + "param10": { + "type": "keyword" + }, + "param11": { + "type": "keyword" + }, + "param12": { + "type": "keyword" + }, + "param2": { + "type": "keyword", + "ignore_above": 1024 + }, + "param3": { + "type": "keyword", + "ignore_above": 1024 + }, + "param4": { + "type": "keyword", + "ignore_above": 1024 + }, + "param5": { + "type": "keyword", + "ignore_above": 1024 + }, + "param6": { + "type": "keyword", + "ignore_above": 1024 + }, + "param7": { + "type": "keyword", + "ignore_above": 1024 + }, + "param8": { + "type": "keyword", + "ignore_above": 1024 + }, + "param9": { + "type": "keyword" + }, + "serviceGuid": { + "type": "keyword" + }, + "spn1": { + "type": "keyword" + }, + "spn2": { + "type": "keyword" + }, + "updateGuid": { + "type": "keyword" + }, + "updateRevisionNumber": { + "type": "keyword" + }, + "updateTitle": { + "type": "keyword" + } + } 
+ }, + "event_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "keywords": { + "type": "keyword", + "ignore_above": 1024 + }, + "logon": { + "properties": { + "failure": { + "properties": { + "reason": { + "type": "keyword", + "ignore_above": 1024 + }, + "status": { + "type": "keyword", + "ignore_above": 1024 + }, + "sub_status": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "id": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "opcode": { + "type": "keyword", + "ignore_above": 1024 + }, + "process": { + "properties": { + "pid": { + "type": "long" + }, + "thread": { + "properties": { + "id": { + "type": "long" + } + } + } + } + }, + "provider_guid": { + "type": "keyword", + "ignore_above": 1024 + }, + "provider_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "record_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "related_activity_id": { + "type": "keyword", + "ignore_above": 1024 + }, + "task": { + "type": "keyword", + "ignore_above": 1024 + }, + "time_created": { + "type": "date" + }, + "user": { + "properties": { + "domain": { + "type": "keyword", + "ignore_above": 1024 + }, + "identifier": { + "type": "keyword", + "ignore_above": 1024 + }, + "name": { + "type": "keyword", + "ignore_above": 1024 + }, + "type": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "user_data": { + "properties": { + "Channel": { + "type": "keyword" + }, + "ClientProcessId": { + "type": "keyword" + }, + "ClientProcessStartKey": { + "type": "keyword" + }, + "SubjectDomainName": { + "type": "keyword" + }, + "SubjectLogonId": { + "type": "keyword" + }, + "SubjectUserName": { + "type": "keyword" + }, + "SubjectUserSid": { + "type": "keyword" + }, + "binaryData": { + "type": "keyword" + }, + "binaryDataSize": { + "type": "keyword" + }, + "param1": { + "type": "keyword" + }, + "param2": { + "type": "keyword" + }, + "xml_name": { + "type": "keyword" + } + } + }, + "version": { + "type": "long" + } + } + }, + "x509": { + "properties": { + "alternative_names": { + "type": "keyword", + "ignore_above": 1024 + }, + "issuer": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "not_after": { + "type": "date" + }, + "not_before": { + "type": "date" + }, + "public_key_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_curve": { + "type": "keyword", + "ignore_above": 1024 + }, + "public_key_exponent": { + "type": "long", + "index": false, + "doc_values": false + }, + "public_key_size": { + "type": "long" + }, + "serial_number": { + "type": "keyword", + "ignore_above": 1024 + }, + "signature_algorithm": { + "type": "keyword", + "ignore_above": 1024 + }, + "subject": { + "properties": { + "common_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "country": { + "type": "keyword", + "ignore_above": 1024 + }, + "distinguished_name": { + "type": "keyword", + "ignore_above": 1024 + }, + "locality": { + "type": "keyword", + "ignore_above": 1024 + }, + "organization": { + "type": "keyword", + "ignore_above": 
1024 + }, + "organizational_unit": { + "type": "keyword", + "ignore_above": 1024 + }, + "state_or_province": { + "type": "keyword", + "ignore_above": 1024 + } + } + }, + "version_number": { + "type": "keyword", + "ignore_above": 1024 + } + } + } + } + } + } +} \ No newline at end of file diff --git a/testing/tests/api_tests/winlogbeat/test_data/winlog_search_data.json b/testing/tests/api_tests/winlogbeat/test_data/winlog_search_data.json new file mode 100644 index 00000000..ba8125be --- /dev/null +++ b/testing/tests/api_tests/winlogbeat/test_data/winlog_search_data.json @@ -0,0 +1,86 @@ +{ + "took": 3, + "timed_out": false, + "_shards": { + "total": 1, + "successful": 1, + "skipped": 0, + "failed": 0 + }, + "hits": { + "total": { + "value": 4283, + "relation": "eq" + }, + "max_score": 0.27635396, + "hits": [ + { + "_index": "winlogbeat-000001", + "_id": "Wqh8PI4BWrHmXCODvAOh", + "_score": 0.27635396, + "_source": { + "agent": { + "name": "DC1", + "id": "329b0988-40f1-4f26-9656-7f038ebc8d9c", + "ephemeral_id": "f189af81-3221-404f-a99c-350a087003fb", + "type": "winlogbeat", + "version": "8.5.0" + }, + "@timestamp": "2024-03-14T10:23:08.964Z", + "winlog": { + "record_id": 4714, + "computer_name": "DC1.lme.local", + "process": { + "pid": 648, + "thread": { + "id": 3684 + } + }, + "event_id": "4634", + "task": "Logoff", + "keywords": [ + "Audit Success" + ], + "provider_guid": "{54849625-5478-4994-a5ba-3e3b0328c30d}", + "channel": "Security", + "api": "wineventlog", + "event_data": { + "TargetLogonId": "0x5bec95", + "LogonType": "3", + "TargetUserName": "DC1$", + "TargetDomainName": "LME", + "TargetUserSid": "S-1-5-18" + }, + "opcode": "Info", + "provider_name": "Microsoft-Windows-Security-Auditing" + }, + "ecs": { + "version": "8.0.0" + }, + "log": { + "level": "information" + }, + "host": { + "name": "DC1.lme.local" + }, + "@version": "1", + "message": "An account was logged off.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tDC1$\n\tAccount Domain:\t\tLME\n\tLogon ID:\t\t0x5BEC95\n\nLogon Type:\t\t\t3\n\nThis event is generated when a logon session is destroyed. It may be positively correlated with a logon event using the Logon ID value. Logon IDs are only unique between reboots on the same computer.", + "event": { + "ingested": "2024-03-14T10:23:11.521737481Z", + "code": "4634", + "original": "An account was logged off.\n\nSubject:\n\tSecurity ID:\t\tS-1-5-18\n\tAccount Name:\t\tDC1$\n\tAccount Domain:\t\tLME\n\tLogon ID:\t\t0x5BEC95\n\nLogon Type:\t\t\t3\n\nThis event is generated when a logon session is destroyed. It may be positively correlated with a logon event using the Logon ID value. 
Logon IDs are only unique between reboots on the same computer.", + "provider": "Microsoft-Windows-Security-Auditing", + "kind": "event", + "created": "2024-03-14T10:23:10.459Z", + "action": "Logoff", + "outcome": "success" + }, + "tags": [ + "beats", + "beats_input_codec_plain_applied" + ] + } + } + ] + } +} \ No newline at end of file diff --git a/testing/tests/api_tests/winlogbeat/test_server.py b/testing/tests/api_tests/winlogbeat/test_server.py new file mode 100644 index 00000000..b84c0148 --- /dev/null +++ b/testing/tests/api_tests/winlogbeat/test_server.py @@ -0,0 +1,111 @@ +import json +import warnings + +import pytest +from jsonschema import validate +from jsonschema.exceptions import ValidationError +import requests +from requests.auth import HTTPBasicAuth +import urllib3 +import os + +from api_tests.helpers import make_request, load_json_schema + +# Disable SSL warnings +urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning) + +current_script_path = os.path.abspath(__file__) +current_script_dir = os.path.dirname(current_script_path) + + +def convertJsonFileToString(file_path): + with open(file_path, "r") as file: + return file.read() + + +@pytest.fixture(autouse=True) +def suppress_insecure_request_warning(): + warnings.simplefilter("ignore", urllib3.exceptions.InsecureRequestWarning) + + +@pytest.mark.skip(reason="This test is too fragile and the data is not stable") +def test_elastic_mapping(es_host, es_port, username, password): + # This test currently works for full installation. For Partial installation (only Ls1), the static mappings file will need to be changed. + url = f"https://{es_host}:{es_port}/winlogbeat-000001/_mapping" + response = make_request(url, username, password) + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + + response_data = response.json() + static_mapping = json.load( + open(f"{current_script_dir}/test_data/mapping_response.json") + ) + + # Dumping Actual Response Json into file for comparison if test fails. 
+ json.dump( + response_data, + open(f"{current_script_dir}/test_data/mapping_response_actual.json", "w"), + indent=4, + ) + + assert static_mapping == response_data, "Mappings JSON did not match expected" + + +def test_winlogbeat_settings(es_host, es_port, username, password): + url = f"https://{es_host}:{es_port}/winlogbeat-*/_settings" + response = make_request(url, username, password) + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + body = response.json() + + # Get the root key (the name of the backing index) from the response + for key in body: + rootKey = key + + assert ( + body[rootKey]["settings"]["index"]["lifecycle"]["name"] == "lme_ilm_policy" + ), f'Expected "lme_ilm_policy", got {body[rootKey]["settings"]["index"]["lifecycle"]["name"]}' + assert ( + body[rootKey]["settings"]["index"]["lifecycle"]["rollover_alias"] + == "winlogbeat-alias" + ), f'Expected "winlogbeat-alias", got {body[rootKey]["settings"]["index"]["lifecycle"]["rollover_alias"]}' + + assert ( + "creation_date" in body[rootKey]["settings"]["index"] + ), "Expected creation_date property, not found" + assert ( + "number_of_replicas" in body[rootKey]["settings"]["index"] + ), "Expected number_of_replicas property, not found" + assert ( + "uuid" in body[rootKey]["settings"]["index"] + ), "Expected uuid property, not found" + assert ( + "created" in body[rootKey]["settings"]["index"]["version"] + ), "Expected created property, not found" + + with open(f"{current_script_dir}/test_data/mapping_datafields.txt") as f: + data_fields = f.read().splitlines() + + act_data_fields = body[rootKey]["settings"]["index"]["query"]["default_field"] + assert ( + sorted(act_data_fields) == sorted(data_fields)  # not list.sort(), which returns None + ), "Winlogbeat data fields do not match" + + +def test_winlogbeat_search(es_host, es_port, username, password): + # This test requires a DC1 instance in the cluster setup; otherwise it will fail + url = f"https://{es_host}:{es_port}/winlogbeat-*/_search" + body = {"size": 1, "query": {"term": {"host.name": "DC1.lme.local"}}} + response = make_request(url, username, password, body=body) + + assert response.status_code == 200, f"Expected 200, got {response.status_code}" + data = response.json() + # json.dump( + # data, + # open(f"{current_script_dir}/test_data/winlog_search_data.json", "w"), + # indent=4, + # ) + + assert data["hits"]["hits"][0]["_source"]["host"]["name"] == "DC1.lme.local" + + # Validate the JSON response against the expected schema + schema = load_json_schema(f"{current_script_dir}/schemas/winlogbeat_search.json") + validate(instance=data, schema=schema)
diff --git a/testing/tests/docker-compose.yml b/testing/tests/docker-compose.yml new file mode 100644 index 00000000..2e4d3eb4 --- /dev/null +++ b/testing/tests/docker-compose.yml @@ -0,0 +1,9 @@ +version: '3.8' + +services: + ubuntu: + build: . + container_name: lme_testing + volumes: + - .:/app # Mounts the current directory to /app in the container + command: sleep infinity \ No newline at end of file diff --git a/testing/tests/requirements.txt b/testing/tests/requirements.txt new file mode 100644 index 00000000..59af84e1 --- /dev/null +++ b/testing/tests/requirements.txt @@ -0,0 +1,21 @@ +attrs>=23.2.0 +certifi>=2023.11.17 +charset-normalizer>=3.3.2 +exceptiongroup>=1.2.0 +idna>=3.6 +iniconfig>=2.0.0 +jsonschema>=4.21.1 +jsonschema-specifications>=2023.12.1 +packaging>=23.2 +pluggy>=1.4.0 +pytest>=8.0.0 +pytest-dotenv>=0.5.2 +python-dotenv>=1.0.1 +referencing>=0.33.0 +requests>=2.31.0 +rpds-py>=0.17.1 +tomli>=2.0.1 +urllib3>=2.1.0 +selenium +webdriver-manager +pytest-html>=4.1.1 diff --git a/testing/tests/selenium_tests.py b/testing/tests/selenium_tests.py new file mode 100644 index 00000000..5e1d115b --- /dev/null +++ b/testing/tests/selenium_tests.py @@ -0,0 +1,636 @@ +"""Runs automated test cases against the Kibana dashboards. + +For full usage, run: + python3 selenium_tests.py -h + py -u selenium_tests.py 2> log.txt #redirects stderr to a text file. +NOTE: +- before running, the Elastic interface password must be +saved as an environment variable, ELASTIC_PASSWORD. +- The script assumes access to the server without any +SSL errors. + +Basic usage: + python3 selenium_tests.py --mode MODE --timeout TIMEOUT +where MODE is either headless, detached, or debug (defaults to headless) +and TIMEOUT is in seconds (defaults to 30). + +Additionally, you can pass in arguments to the unittest +library, such as the -v flag.""" + +import unittest +import argparse +import sys +import os + +from webdriver_manager.chrome import ChromeDriverManager +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.chrome.service import Service +from selenium.webdriver.common.by import By +from selenium import webdriver + +parser = argparse.ArgumentParser() +parser.add_argument('--timeout', help='Timeout, in seconds. Defaults to 30.', + default=30, + type=int) +parser.add_argument('--mode', help='headless (no browser), detached (browser opens), or debug (browser opens and is left open). Defaults to headless.', default='headless') +parser.add_argument('--domain', help='The IP address or domain of the Elastic server', default='ls1') + +args, unittestArgs = parser.parse_known_args() + +def login(password: str) -> None: + """Log in and load the home page.""" + + url = f"https://{args.domain}" + driver.get(url) + + # Wait for the login page to load + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, 'input[name="username"]')) + WebDriverWait(driver, args.timeout).until(expected_cond) + + # Log in + username_input = driver.find_element(By.CSS_SELECTOR, 'input[name="username"]') + username_input.send_keys("elastic") + password_input = driver.find_element(By.CSS_SELECTOR, 'input[name="password"]') + password_input.send_keys(password) + submit_button = driver.find_element(By.CSS_SELECTOR, 'button[data-test-subj="loginSubmit"]') + submit_button.click() + + # Wait for the home page to load + selector = 'div[data-test-subj="homeApp"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, args.timeout).until(expected_cond) + +def load_panel(panel_title: str): + """Waits for the given panel to load then returns it.
Assumes that the appropriate dashboard + has already been loaded by the setUp functions.""" + + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.all_of( + EC.presence_of_element_located((By.CSS_SELECTOR, selector)), + EC.none_of(EC.text_to_be_present_in_element_attribute((By.CSS_SELECTOR, selector), + "innerHTML", "Loading")) + ) + WebDriverWait(driver, args.timeout).until(expected_cond) + return driver.find_element(By.CSS_SELECTOR, selector) + +class BasicLoading(unittest.TestCase): + "High-level tests, very basic functionality only." + + def test_title(self): + """If for some reason we weren't able to access the webpage at + all, this would be the first test to show it.""" + + driver.get(f"https://{args.domain}/app/dashboards") + selector = 'div[data-test-subj="dashboardLandingPage"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, args.timeout).until(expected_cond) + self.assertEqual(driver.title, "Dashboards - Elastic") + +class UserSecurityTests(unittest.TestCase): + """Test cases for the User Security Dashboard""" + + def setUp(self): + # The dashboard ID is hard-coded in the ndjson file + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"https://{args.domain}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, args.timeout).until(expected_cond) + + def test_dashboard_menu(self): + """Is there any data?""" + panel = load_panel("Dashboard Menu") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_search_users(self): + """Is there any data?""" + panel = load_panel("Search users") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_search_hosts(self): + """Is there any data?""" + panel = load_panel("Search hosts") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_filter_hosts(self): + """Is there any data?""" + panel = load_panel("Filter hosts") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_filter_users(self): + """Is there any data?""" + panel = load_panel("Filter users") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_security_logon_attempts(self): + """Is there any data?""" + panel = load_panel("Security - Logon attempts") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_security_logon_hosts(self): + """Is there any data?""" + panel = load_panel("Security - Logon hosts") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_logon_attempts(self): + """Is there any data?""" + panel = load_panel("Logon attempts") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_logged_on_computers(self): + """Is there any data?""" + panel = load_panel("Logged on computers") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_user_logon_logoff_events(self): + """Is there any data?""" + panel = load_panel("User Logon & Logoff Events") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_all_network_connections(self): + """Is there any data for the "All network connections" panel?""" + panel = load_panel("All network connections") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def 
test_network_connections_from_nonbrowser_processes(self): + """Is there any data?""" + panel = load_panel("Network connections from non-browser processes") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_network_connections_by_protocol(self): + """Is there any data for the "Network connection by protocol" panel?""" + panel = load_panel("Network connection by protocol") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_unusual_network_connections_from_non_browser_processes(self): + """Is there any data?""" + panel = load_panel("Unusual network connections from non-browser processes") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_network_connection_events(self): + """Is there any data?""" + panel = load_panel("Network Connection Events (Sysmon ID 3)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_spawned_processes(self): + """Is there any data?""" + panel = load_panel("Spawned Processes") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_powershell_events(self): + """Is there any data?""" + panel = load_panel("Powershell Events") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_powershell_events_over_time(self): + """Is there any data?""" + panel = load_panel("Powershell events over time") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_powershell_events_by_computer(self): + """Is there any data?""" + panel = load_panel("Powershell events by computer") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_potentially_suspicious_powershell(self): + """Is there any data?""" + panel = load_panel("Potentially suspicious powershell") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_powershell_network_connections(self): + """Is there any data?""" + panel = load_panel("Powershell network connections") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_references_to_temporary_files(self): + """Is there any data?""" + panel = load_panel("References to temporary files") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_raw_access_read(self): + """Is there any data?""" + panel = load_panel("RawAccessRead (Sysmon Event 9)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_defender_event_count(self): + """Is there any data?""" + panel = load_panel("Defender event count") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_av_hits(self): + """Is there any data?""" + panel = load_panel("AV Hits (Count)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_av_detections(self): + """Is there any data?""" + panel = load_panel("AV Detections (Event 1116)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +class UserHRTests(unittest.TestCase): + """Test cases for the User HR Dashboard""" + + def setUp(self): + # The dashboard ID is hard-coded in the ndjson file + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + 
driver.get(f"https://{args.domain}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, args.timeout).until(expected_cond) + + def test_dashboard_menu(self): + """Is there any data?""" + panel = load_panel("Dashboard Menu") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_domains_and_usernames(self): + """Is there any data?""" + panel = load_panel("Select domain(s) and username(s)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_filter_users(self): + """Is there any data?""" + panel = load_panel("Filter Users") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_filter_computers(self): + """Is there any data?""" + panel = load_panel("Filter Computers") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_filter_users(self): + """Is there any data?""" + panel = load_panel("Filter Users") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_all_user_events(self): + """Is there any data?""" + panel = load_panel("All User Events by Day of Week, Hour of Day") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_timestamps_by_count(self): + """Is there any data?""" + panel = load_panel("Timestamps by Count") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_user_logon_events(self): + """Is there any data?""" + panel = load_panel("User logon events (filter by LogonId)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_user_logoff_events(self): + """Is there any data?""" + panel = load_panel("User logoff events (correlate to logon events)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_inperson_vs_remote_logons(self): + """Is there any data?""" + panel = load_panel("In person vs Remote logons") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +class SecurityDashboardSecurityLogTests(unittest.TestCase): + """Test cases for the Security Dashboard - Security Log Dashboard""" + + def setUp(self): + # The dashboard ID is hard-coded in the ndjson file + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"https://{args.domain}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, args.timeout).until(expected_cond) + + def test_dashboard_menu(self): + """Is there any data?""" + panel = load_panel("Dashboard Menu") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_security_log_events(self): + """Is there any data?""" + panel = load_panel("Security logs events") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_computer_filter_results(self): + """Is there any data?""" + panel = load_panel("Select a computer to filter the below results. 
Leave blank for all") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_computer_filter(self): + """Is there any data?""" + panel = load_panel("Select a computername to filter") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_failed_logon_attempts(self): + """Is there any data?""" + panel = load_panel("Failed logon attempts") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_computers_showing_failed_login_attempts(self): + """Is there any data?""" + panel = load_panel("Computers showing failed login attempts - 10 maximum shown") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_failed_logons_type_codes(self): + """Is there any data?""" + panel = load_panel("Failed logon type codes") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_failed_logon_and_reason(self): + """Is there any data?""" + panel = load_panel("Failed logon and reason (status code)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_failed_logons(self): + """Is there any data?""" + panel = load_panel("Failed Logons") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_failed_logon_status_codes(self): + """Is there any data?""" + panel = load_panel("Failed logon status codes") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_log_cleared_event_id_1102_or_104(self): + """Is there any data?""" + panel = load_panel("Log Cleared - event ID 1102 or 104") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_security_log_events_detail(self): + """Is there any data?""" + panel = load_panel("Security log events - Detail") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_security_log_process_creation_event_id_4688(self): + """Is there any data?""" + panel = load_panel("Security log - Process creation - event ID 4688") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_security_log_logon_created_logon_type_2(self): + """Is there any data?""" + panel = load_panel("Security log - Logon created - Logon type 2") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_security_log_network_logon_created_type_3(self): + """Is there any data?""" + panel = load_panel("Security log - network logon created - Logon type 3") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_security_log_logon_as_a_service_type_5(self): + """Is there any data?""" + panel = load_panel("Sercurity log - logon as a service - Logon type 5") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_credential_sent_as_clear_text_type_8(self): + """Is there any data?""" + panel = load_panel("Security log - Credential sent as clear text - Logon type 8") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_logons_with_special_privileges(self): + """Is there any data?""" + panel = load_panel("Security log - Logons with special privileges assigned - event ID 4672") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_process_started_with_different_creds(self): + """Is there any data?""" + panel = load_panel("Security log - Process started with different credentials- " \ + "event ID 4648 [could be RUNAS, 
scheduled tasks]") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +class ComputerSoftwareOverviewTests(unittest.TestCase): + """Test cases for the Computer Software Overview Dashboard""" + + def setUp(self): + # The dashboard ID is hard-coded in the ndjson file + dashboard_id = "33f0d3b0-8b8a-11ea-b1c6-a5bf39283f12" + driver.get(f"https://{args.domain}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, args.timeout).until(expected_cond) + + def test_dashboard_menu(self): + """Is there any data?""" + panel = load_panel("Dashboard Menu") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_host_count(self): + """Is there any data?""" + panel = load_panel("Host Count") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_filter_hosts(self): + """Is there any data?""" + panel = load_panel("Filter Hosts") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_processes(self): + """Is there any data?""" + panel = load_panel("Processes") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_application_crashing_and_hanging(self): + """Is there any data?""" + panel = load_panel("Application Crashing and Hanging") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_application_crashing_and_hanging_count(self): + """Is there any data?""" + panel = load_panel("Application Crashing and Hanging Count") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_create_remote_threat_events(self): + """Is there any data?""" + panel = load_panel("CreateRemoteThread events") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +class SysmonSummaryTests(unittest.TestCase): + """Test cases for the Sysmon Summary Dashboard""" + + def setUp(self): + # The dashboard ID is hard-coded in the ndjson file + dashboard_id = "d2c73990-e5d4-11e9-8f1d-73a2ea4cc3ed" + driver.get(f"https://{args.domain}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, args.timeout).until(expected_cond) + + def test_total_number_of_sysmon_events_found(self): + """Is there any data?""" + panel = load_panel("Total number of Sysmon events found") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_percentage_of_sysmon_events_by_event_code(self): + """Is there any data?""" + panel = load_panel("Percentage of Sysmon events by event code") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_count_of_sysmon_events_by_event_code(self): + """Is there any data?""" + panel = load_panel("Count of Sysmon events by event code") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_top10_hosts_generating_most_sysmon_data(self): + """Is there any data?""" + panel = load_panel("Top 10 hosts generating the most Sysmon data") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_sysmon_event_code_reference(self): + """Is there any data?""" + panel = load_panel("Sysmon event code reference") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_sysmon_events(self): + """Is there any data?""" + panel = load_panel("Sysmon events") + 
self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +class ProcessExplorerTests(unittest.TestCase): + """Test cases for the Process Explorer Dashboard""" + + def setUp(self): + # The dashboard ID is hard-coded in the ndjson file + dashboard_id = "f2cbc110-8400-11ee-a3de-f1bc0525ad6c" + driver.get(f"https://{args.domain}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, args.timeout).until(expected_cond) + + def test_process_spawns_over_time(self): + """Is there any data?""" + panel = load_panel("Process spawns over time") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_hosts(self): + """Is there any data?""" + panel = load_panel("Hosts") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_users(self): + """Is there any data?""" + panel = load_panel("Users") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_processes_created_by_users_over_time(self): + """Is there any data?""" + panel = load_panel("Processes created by users over time") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_process_spawn_event_logs_id1(self): + """Is there any data?""" + panel = load_panel("Process spawn event logs (Sysmon ID 1)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_files_created_in_downloads(self): + """Is there any data?""" + panel = load_panel("Files created (in Downloads)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_files_created_over_time_in_downloads(self): + """Is there any data?""" + panel = load_panel("Files created over time (in Downloads)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_registry_events_sysmon_12_13_14(self): + """Is there any data?""" + panel = load_panel("Registry events (Sysmon 12, 13, 14)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +# class AlertingTests(unittest.TestCase): +# """Test cases for the Alerting Dashboard""" + +# def setUp(self): +# # The dashboard ID is hard-coded in the ndjson file +# dashboard_id = "ac1078e0-8a32-11ea-8939-89f508ff7909" +# driver.get(f"https://{args.domain}/app/dashboards#/view/{dashboard_id}") +# expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) +# WebDriverWait(driver, args.timeout).until(expected_cond) + +# def test_signals_overview(self): +# """Is there any data?""" +# panel = load_panel("Signals Overview") +# self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +# def test_mitre_attack_technique(self): +# """Is there any data?""" +# panel = load_panel("MITRE ATT&CK Technique") +# self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +# def test_signals_details(self): +# """Is there any data?""" +# panel = load_panel("Signals Details") +# self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +# def test_full_event_logs(self): +# """Is there any data?""" +# panel = load_panel("Full Event Logs") +# self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +class HealthCheckTests(unittest.TestCase): + """Test cases for the HealthCheck Dashboard""" + #2/6/2024, main branch on lme. The health check dashboard has an odd dashboard menu. This will likely need updating. 
+ + def setUp(self): + # The dashboard ID is hard-coded in the ndjson file + dashboard_id = "51fe1470-fa59-11e9-bf25-8f92ffa3e3ec" + driver.get(f"https://{args.domain}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, args.timeout).until(expected_cond) + + def test_total_hosts(self): + """Is there any data?""" + panel = load_panel("Alpha - Health Check - Total Hosts - Metric") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_users_seen(self): + """Is there any data?""" + panel = load_panel("Users seen") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_number_of_admins(self): + """Is there any data?""" + panel = load_panel("Alpha - Health Check - Number of Admins - Metric (converted)") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_events_by_machine(self): + """Is there any data?""" + panel = load_panel("Events by machine") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + + def test_unexpected_shutdowns(self): + """Is there any data?""" + panel = load_panel("Unexpected shutdowns") + self.assertFalse("No results found" in panel.get_attribute("innerHTML")) + +options = webdriver.ChromeOptions() +if args.mode == "detached" or args.mode =="debug": #browser opens + print("# " + args.mode + " mode #") + options.add_experimental_option("detach", True) + +else: #Browser does not open. Default mode is headless + print("# headless mode #") + options.add_argument("--headless=new") + # options.add_argument("--proxy-server='direct://'") + # options.add_argument("--proxy-bypass-list=*") + options.add_argument("--disable-gpu") + options.add_argument("--window-size=1920,1080") + options.add_argument("--ignore-certificate-errors") + options.add_argument("--no-sandbox") + options.add_argument("--disable-dev-shm-usage") + +s = Service(ChromeDriverManager().install()) +driver = webdriver.Chrome(service=s, options=options) + +try: + login(os.environ['ELASTIC_PASSWORD']) +except KeyError: + MESSAGE = "Error: Elastic password not set. Should be saved as env variable, ELASTIC_PASSWORD." 
+ print(MESSAGE, file=sys.stderr) + sys.exit(1) + +unit_argv = [sys.argv[0]] + unittestArgs +unittest.main(argv=unit_argv, exit=False) + +if args.mode == "debug": + print("# Debug Mode - Browser will remain open.") # Browser will stay open +else: + driver.stop_client() + driver.close() + driver.quit() diff --git a/testing/tests/selenium_tests/Old/dashboards.py b/testing/tests/selenium_tests/Old/dashboards.py new file mode 100644 index 00000000..890479a9 --- /dev/null +++ b/testing/tests/selenium_tests/Old/dashboards.py @@ -0,0 +1,334 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By + +class TestBasicLoading: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + # @pytest.fixture(scope="class", autouse=True) + # def setup_teardown(self, driver): + # yield + # driver.quit() # Clean up the browser (driver) here + + + def test_title(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards") + selector = 'div[data-test-subj="dashboardLandingPage"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + assert driver.title == "Dashboards - Elastic" + + def test_dashboard_menu(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Dashboard Menu" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + +class TestUserSecurityDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_search_users(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Search users" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_search_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Search hosts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not 
in panel.get_attribute("innerHTML") + + def test_security_logon_attempts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security - Logon attempts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_security_logon_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security - Logon hosts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_av_hits(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "AV Hits (Count)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_defender_event_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Defender event count" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + +class TestUserHRDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + def test_dashboard_menu(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Dashboard Menu" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = 
driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_domains_and_usernames(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Select domain(s) and username(s)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_all_user_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "All User Events by Day of Week, Hour of Day" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_timestamps_by_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Timestamps by Count" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + +class TestSecurityDashboardSecurityLog: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_dashboard_menu(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Dashboard Menu" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_security_log_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security logs events" + selector = f'div[data-title="{panel_title}"]' + 
expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_failed_logon_attempts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Failed logon attempts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_failed_logons_type_codes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Failed logon type codes" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_failed_logon_status_codes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Failed logon status codes" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + +class TestComputerSoftwareOverviewDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_dashboard_menu(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "33f0d3b0-8b8a-11ea-b1c6-a5bf39283f12" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Dashboard Menu" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_host_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "33f0d3b0-8b8a-11ea-b1c6-a5bf39283f12" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, 
timeout).until(expected_cond) + panel_title = "Host Count" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + +class TestSysmonSummaryDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_total_number_of_sysmon_events_found(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "d2c73990-e5d4-11e9-8f1d-73a2ea4cc3ed" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Total number of Sysmon events found" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_sysmon_event_code_reference(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "d2c73990-e5d4-11e9-8f1d-73a2ea4cc3ed" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Sysmon event code reference" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + +class TestHealthCheckDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_users_seen(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51fe1470-fa59-11e9-bf25-8f92ffa3e3ec" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Users seen" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") \ No newline at end of file diff --git a/testing/tests/selenium_tests/Old/dashboards_cluster.py b/testing/tests/selenium_tests/Old/dashboards_cluster.py new file mode 100644 index 00000000..adcacb2f --- /dev/null +++ b/testing/tests/selenium_tests/Old/dashboards_cluster.py @@ -0,0 +1,784 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + + +class TestHealthCheckDashboard: + dashboard_id = "51fe1470-fa59-11e9-bf25-8f92ffa3e3ec" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_number_of_admins(self, 
setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Number of Admins" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_total_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Total Hosts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_events_by_machine(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Events by machine" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_unexpected_shutdowns(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Unexpected shutdowns" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + +class TestProcessExplorerDashboard: + dashboard_id = "f2cbc110-8400-11ee-a3de-f1bc0525ad6c" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_files_created_over_time_in_downloads(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Files created (in Downloads)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_files_created_in_downloads(self, setup_login, kibana_url, timeout): + driver = setup_login + 
driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Files created (in Downloads)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Hosts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_process_spawn_event_logs_id1(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Process spawn event logs (Sysmon ID 1)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_process_spawns_over_time(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Process spawns over time" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_processes_created_by_users_over_time(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Processes created by users over time" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_registry_events_sysmon_12_13_14(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = 
"Registry events (Sysmon 12, 13, 14)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_users(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Users" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + +class TestSecurityDashboardSecurityLog: + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_computer_filter_results(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Select a computer to filter the below results. Leave blank for all" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_logons_with_special_privileges(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security log - Logons with special privileges assigned - event ID 4672" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_computer_filter(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Select a computername to filter" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_computers_showing_failed_login_attempts_none(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + + # Wait for the react-grid-layout element to be present + expected_cond = 
EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + + panel_title = "Computers showing failed login attempts - 10 maximum shown" + selector = f'div[data-title="{panel_title}"]' + + # Wait for the specific panel to be present + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + + # Wait for either the panel content or the "No results found" message to be present + panel_content_selector = f"{selector} .echChart" + no_results_selector = f"{selector} .visError" + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, f"{panel_content_selector}, {no_results_selector}")) + WebDriverWait(driver, timeout).until(expected_cond) + + panel = driver.find_element(By.CSS_SELECTOR, selector) + + # Check if the panel content is present + try: + # Check if the "No results found" message is present + no_results_message = driver.find_element(By.CSS_SELECTOR, no_results_selector) + assert no_results_message.is_displayed() + except NoSuchElementException: + panel_content = driver.find_element(By.CSS_SELECTOR, panel_content_selector) + assert panel_content.is_displayed() + + def test_credential_sent_as_clear_text_type_8(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security log - Credential sent as clear text - Logon type 8" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_failed_logon_and_reason(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Failed logon and reason (status code)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_failed_logons(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Failed Logons" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_log_cleared_event_id_1102_or_104(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, 
timeout).until(expected_cond) + panel_title = "Log Cleared - event ID 1102 or 104" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_process_started_with_different_creds(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security log - Process started with different credentials- event ID 4648 [could be RUNAS, scheduled tasks]" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_security_log_events_detail(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security log events - Detail" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_security_log_logon_as_a_service_type_5(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Sercurity log - logon as a service - Logon type 5" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_security_log_logon_created_logon_type_2(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security log - Logon created - Logon type 2" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_security_log_network_logon_created_type_3(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title 
= "Security log - network logon created - Logon type 3" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_security_log_process_creation_event_id_4688(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security log - Process creation - event ID 4688" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + + +class TestComputerSoftwareOverviewDashboard: + dashboard_id = "33f0d3b0-8b8a-11ea-b1c6-a5bf39283f12" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_application_crashing_and_hanging(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Application Crashing and Hanging" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_application_crashing_and_hanging_count(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Application Crashing and Hanging Count" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_create_remote_threat_events(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "CreateRemoteThread events" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_filter_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) 
+ WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Filter Hosts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_processes(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Processes" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + +class TestSysmonSummaryDashboard: + dashboard_id = "d2c73990-e5d4-11e9-8f1d-73a2ea4cc3ed" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_count_of_sysmon_events_by_event_code(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Count of Sysmon events by event code" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_percentage_of_sysmon_events_by_event_code(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Percentage of Sysmon events by event code" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_sysmon_events(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Sysmon events" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_top10_hosts_generating_most_sysmon_data(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, 
timeout).until(expected_cond) + panel_title = "Top 10 hosts generating the most Sysmon data" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + +class TestUserHRDashboard: + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_filter_computers(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Filter Computers" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_filter_users(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Filter Users" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_inperson_vs_remote_logons(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "In person vs Remote logons" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_user_logoff_events(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "User logoff events (correlate to logon events)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_user_logon_events(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "User logon events 
(filter by LogonId)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + +class TestUserSecurityDashboard: + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_all_network_connections(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "All network connections" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_av_detections(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "AV Detections (Event 1116)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_filter_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Filter hosts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_filter_users(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Filter users" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_logged_on_computers(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Logged on computers" + selector = f'div[data-title="{panel_title}"]' + expected_cond = 
EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_logon_attempts(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Logon attempts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_network_connection_events(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Network Connection Events (Sysmon ID 3)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_network_connections_by_protocol(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Network connection by protocol" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_network_connections_from_nonbrowser_processes(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Network connections from non-browser processes" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_potentially_suspicious_powershell(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Potentially suspicious powershell" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + 
assert "No results found" not in panel.get_attribute("innerHTML") + + def test_powershell_events_by_computer(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Powershell events by computer" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_powershell_events_over_time(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Powershell events over time" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_powershell_network_connections(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Powershell network connections" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_raw_access_read(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "RawAccessRead (Sysmon Event 9)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_references_to_temporary_files(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "References to temporary files" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_spawned_processes(self, setup_login, kibana_url, timeout): + driver = setup_login + 
driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Spawned Processes" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_unusual_network_connections_from_non_browser_processes(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Unusual network connections from non-browser processes" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_user_logon_logoff_events(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards#/view/{self.dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "User Logon & Logoff Events" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") \ No newline at end of file diff --git a/testing/tests/selenium_tests/cluster/__init__.py b/testing/tests/selenium_tests/cluster/__init__.py new file mode 100644 index 00000000..e69de29b diff --git a/testing/tests/selenium_tests/cluster/conftest.py b/testing/tests/selenium_tests/cluster/conftest.py new file mode 100644 index 00000000..8b031074 --- /dev/null +++ b/testing/tests/selenium_tests/cluster/conftest.py @@ -0,0 +1,92 @@ +import pytest +import os +from webdriver_manager.chrome import ChromeDriverManager +from selenium.common.exceptions import TimeoutException +from selenium import webdriver +from selenium.webdriver.chrome.service import Service +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By + + +@pytest.fixture(scope="session") +def kibana_host(): + return os.getenv("KIBANA_HOST", "localhost") + +@pytest.fixture(scope="session") +def kibana_port(): + return int(os.getenv("KIBANA_PORT", 443)) + +@pytest.fixture(scope="session") +def kibana_user(): + return os.getenv("KIBANA_USER", "elastic") + +@pytest.fixture(scope="session") +def kibana_password(): + return os.getenv("elastic",os.getenv("KIBANA_PASSWORD", "changeme")) + +@pytest.fixture(scope="session") +def kibana_url(kibana_host, kibana_port): + return f"https://{kibana_host}:{kibana_port}" + +@pytest.fixture(scope="session") +def timeout(): + return int(os.getenv("SELENIUM_TIMEOUT", 30)) + +@pytest.fixture(scope="session") +def mode(): + return os.getenv("SELENIUM_MODE", "headless") + 
+@pytest.fixture(scope="session") +def driver(timeout, mode): + options = webdriver.ChromeOptions() + if mode in ("detached", "debug"): + options.add_experimental_option("detach", True) + options.add_argument("--ignore-certificate-errors") + options.add_argument("--allow-running-insecure-content") + else: + options.add_argument("--headless=new") + options.add_argument("--disable-gpu") + options.add_argument("--window-size=1920,1080") + options.add_argument("--ignore-certificate-errors") + options.add_argument("--no-sandbox") + options.add_argument("--disable-dev-shm-usage") + + s = Service(ChromeDriverManager().install()) + driver = webdriver.Chrome(service=s, options=options) + + yield driver + + if mode != "debug": + driver.stop_client() + driver.close() + driver.quit() + +@pytest.fixture(scope="session") +def login(driver, kibana_url, kibana_user, kibana_password, timeout): + def _login(): + """Log in and load the home page""" + + driver.get(kibana_url) + + # Wait for the login page to load + # Check if the current URL contains the login page identifier + login_url_identifier = "/login" + if login_url_identifier in driver.current_url: + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, 'input[name="username"]')) + WebDriverWait(driver, timeout).until(expected_cond) + + # Log in with the configured credentials + username_input = driver.find_element(By.CSS_SELECTOR, 'input[name="username"]') + username_input.send_keys(kibana_user) + password_input = driver.find_element(By.CSS_SELECTOR, 'input[name="password"]') + password_input.send_keys(kibana_password) + submit_button = driver.find_element(By.CSS_SELECTOR, 'button[data-test-subj="loginSubmit"]') + submit_button.click() + + # Wait for the home page to load + selector = 'div[data-test-subj="homeApp"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + + return _login \ No newline at end of file diff --git a/testing/tests/selenium_tests/cluster/lib.py b/testing/tests/selenium_tests/cluster/lib.py new file mode 100644 index 00000000..7f88e5d2 --- /dev/null +++ b/testing/tests/selenium_tests/cluster/lib.py @@ -0,0 +1,41 @@ +import pytest +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + + +def dashboard_test_function(driver, kibana_url, timeout, dashboard_id, panel_title, result_panel_class, noresult_panel_class): + """Open a dashboard, wait for the named panel, and assert that either its content (result_panel_class) or its empty state (noresult_panel_class) is displayed.""" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + + # Wait for the react-grid-layout element to be present + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + + selector = f'div[data-title="{panel_title}"]' + + # Wait for the specific panel to be present + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + + # Wait for either the panel content or the "No results found" message to be present + panel_content_selector = f"{selector} {result_panel_class}" + no_results_selector = f"{selector} {noresult_panel_class}" + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, f"{panel_content_selector}, {no_results_selector}")) + WebDriverWait(driver, timeout).until(expected_cond) + + # Either the "No results found" message or the panel content must be displayed + try: + # Check if the "No results found" message is present
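+ # find_element raises NoSuchElementException when the message is absent, which drops execution into the except branch to check the rendered panel content instead.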
+ no_results_message = driver.find_element(By.CSS_SELECTOR, no_results_selector) + assert no_results_message.is_displayed() + except NoSuchElementException: + panel_content = driver.find_element(By.CSS_SELECTOR, panel_content_selector) + assert panel_content.is_displayed() + + \ No newline at end of file diff --git a/testing/tests/selenium_tests/cluster/test_computer_software_overview_dashboard.py b/testing/tests/selenium_tests/cluster/test_computer_software_overview_dashboard.py new file mode 100644 index 00000000..0202208a --- /dev/null +++ b/testing/tests/selenium_tests/cluster/test_computer_software_overview_dashboard.py @@ -0,0 +1,38 @@ +import pytest +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + +from .lib import dashboard_test_function + +class TestComputerSoftwareOverviewDashboard: + dashboard_id = "33f0d3b0-8b8a-11ea-b1c6-a5bf39283f12" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_application_crashing_and_hanging(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Application Crashing and Hanging", ".echChart", ".xyChart__empty") + + def test_application_crashing_and_hanging_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Application Crashing and Hanging Count", ".tbvChart", ".visError") + + @pytest.mark.skip(reason="Skipping this test") + def test_create_remote_threat_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "CreateRemoteThread events", ".tbvChart", ".visError") + + def test_filter_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Filter Hosts", ".tbvChart", ".visError") + + def test_processes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Processes", ".tbvChart", ".visError") + diff --git a/testing/tests/selenium_tests/cluster/test_health_check_dashboard.py b/testing/tests/selenium_tests/cluster/test_health_check_dashboard.py new file mode 100644 index 00000000..950e2c2c --- /dev/null +++ b/testing/tests/selenium_tests/cluster/test_health_check_dashboard.py @@ -0,0 +1,42 @@ +import pytest +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + +from .lib import dashboard_test_function + +class TestHealthCheckDashboard: + dashboard_id = "51fe1470-fa59-11e9-bf25-8f92ffa3e3ec" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_number_of_admins(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Number of Admins", ".expExpressionRenderer", ".dummyval") + # The argument ".dummyval" is used even though it is not a valid selector. + # This panel should always have a visualization, so a "no data" message should never be displayed.
+ # If no visualization is rendered, or a "No results found" message is displayed for this panel on the dashboard, this test should fail, which is the correct behavior. + + def test_total_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Total Hosts", ".visualization", ".dummyval") + + def test_events_by_machine(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Events by machine", ".echChart", ".euiText") + + @pytest.mark.skip(reason="Skipping this test") + def test_unexpected_shutdowns(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Unexpected shutdowns", ".echChart", ".visError") + + def test_users_seen(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Users seen", ".visualization", ".dummyval") + + diff --git a/testing/tests/selenium_tests/cluster/test_process_explorer_dashboard.py b/testing/tests/selenium_tests/cluster/test_process_explorer_dashboard.py new file mode 100644 index 00000000..b85ccb7a --- /dev/null +++ b/testing/tests/selenium_tests/cluster/test_process_explorer_dashboard.py @@ -0,0 +1,53 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException +from .lib import dashboard_test_function + +class TestProcessExplorerDashboard: + dashboard_id = "f2cbc110-8400-11ee-a3de-f1bc0525ad6c" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + @pytest.mark.skip(reason="Skipping this test") + def test_files_created_over_time_in_downloads(self, setup_login, kibana_url, timeout): + # Did not find this dashboard panel in the UI. This test should be removed. + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Files created (in Downloads)", ".needarealvaluehere", ".euiFlexGroup") + + @pytest.mark.skip(reason="Skipping this test") + def test_files_created_in_downloads(self, setup_login, kibana_url, timeout): + # This dashboard panel is not working correctly. It shows no data even when there is data.
Create issue LME#294 + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Files created (in Downloads)", ".euiFlexGroup", ".euiDataGrid__noResults") + + def test_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Hosts", ".tbvChart", ".visError") + + def test_process_spawn_event_logs_id1(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Process spawn event logs (Sysmon ID 1)", ".euiDataGrid", ".euiDataGrid__noResults") + + def test_process_spawns_over_time(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Process spawns over time", ".echChart", ".xyChart__empty") + + def test_processes_created_by_users_over_time(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Processes created by users over time", ".echChart", ".xyChart__empty") + + def test_registry_events_sysmon_12_13_14(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Registry events (Sysmon 12, 13, 14)", ".euiDataGrid__focusWrap", ".euiDataGrid__noResults") + + def test_users(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Users", ".euiDataGrid__focusWrap", ".euiText") + + diff --git a/testing/tests/selenium_tests/cluster/test_security_dashboard_security_log.py b/testing/tests/selenium_tests/cluster/test_security_dashboard_security_log.py new file mode 100644 index 00000000..7fb229e0 --- /dev/null +++ b/testing/tests/selenium_tests/cluster/test_security_dashboard_security_log.py @@ -0,0 +1,98 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException +from .lib import dashboard_test_function + +class TestSecurityDashboardSecurityLog: + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_computer_filter_results(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Select a computer to filter the below results. Leave blank for all", ".euiFlexGroup", ".dummyval") + # The argument ".dummyval" is used even though it is not a valid selector. + # This panel should always have a visualization, so a "no data" message should never be displayed. + # If no visualization is rendered, or a "No results found" message is displayed for this panel on the dashboard, this test should fail, which is the correct behavior. + + @pytest.mark.skip(reason="Skipping this test") + def test_logons_with_special_privileges(self, setup_login, kibana_url, timeout): + # This dashboard panel needs test data.
Currently the panel only gives No Result found + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security log - Logons with special privileges assigned - event ID 4672", ".needarealvaluehere",".visError") + + def test_computer_filter(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Select a computername to filter", ".tbvChart",".visError") + + def test_computers_showing_failed_login_attempts_none(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Computers showing failed login attempts - 10 maximum shown", ".echChart",".visError") + + @pytest.mark.skip(reason="Skipping this test") + def test_credential_sent_as_clear_text_type_8(self, setup_login, kibana_url, timeout): + #This dashboard panel needs test data. Currently the panel only gives No Result found + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security log - Credential sent as clear text - Logon type 8", ".needarealvaluehere",".visError") + + + def test_failed_logon_and_reason(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Failed logon and reason (status code)", ".echChart",".euiText") + + def test_failed_logons(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Failed Logons", ".unifiedDataTable",".euiDataGrid__noResults") + + @pytest.mark.skip(reason="Skipping this test") + def test_log_cleared_event_id_1102_or_104(self, setup_login, kibana_url, timeout): + #This dashboard panel needs test data. 
Currently the panel only gives No Result found + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Log Cleared - event ID 1102 or 104", ".needarealvaluehere",".euiDataGrid__noResults") + + + def test_process_started_with_different_creds(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security log - Process started with different credentials- event ID 4648 [could be RUNAS, scheduled tasks]", ".euiDataGrid",".euiDataGrid__noResults") + + def test_security_log_events_detail(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security log events - Detail", ".euiDataGrid",".euiDataGrid__noResults") + + def test_security_log_logon_as_a_service_type_5(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Sercurity log - logon as a service - Logon type 5",".euiDataGrid",".visError") + + def test_security_log_logon_created_logon_type_2(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security log - Logon created - Logon type 2",".tbvChart",".visError") + + def test_security_log_network_logon_created_type_3(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security log - network logon created - Logon type 3",".tbvChart",".visError") + + def test_security_log_process_creation_event_id_4688(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security log - Process creation - event ID 4688",".euiDataGrid",".euiDataGrid__noResults") + + def test_security_log_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security logs events",".visualization", ".dummyval") + # The arguement ".dummyval" is being used though it is not a valid selector. + # This panel should always have a visualization so there should never be no data message displayed. 
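+        # (Because ".dummyval" never matches an element, the "no data" outcome can never be accepted here, so the test only passes when the visualization actually renders.)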
+ # If there is no visualization rendered or "No Results found" message is displayed for this panel on dashboard, this test should fail which is correct behavior + + def test_failed_logon_type_codes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Failed logon type codes",".visualization", ".dummyval") + + def test_failed_logon_status_codes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Failed logon status codes",".visualization", ".dummyval") \ No newline at end of file diff --git a/testing/tests/selenium_tests/cluster/test_sysmon_summary_dashboard.py b/testing/tests/selenium_tests/cluster/test_sysmon_summary_dashboard.py new file mode 100644 index 00000000..a58d3fce --- /dev/null +++ b/testing/tests/selenium_tests/cluster/test_sysmon_summary_dashboard.py @@ -0,0 +1,48 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException +from .lib import dashboard_test_function + +class TestSysmonSummaryDashboard: + dashboard_id = "d2c73990-e5d4-11e9-8f1d-73a2ea4cc3ed" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_count_of_sysmon_events_by_event_code(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Count of Sysmon events by event code", ".tbvChart",".visError") + + + def test_total_number_of_sysmon_events_found(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Total number of Sysmon events found", ".visualization",".dummyval") + # The arguement ".dummyval" is being used though it is not a valid selector. + # This panel should always have a visualization so there should never be no data message displayed. 
+ # If there is no visualization rendered or "No Results found" message is displayed for this panel on dashboard, this test should fail which is correct behavior + + + + def test_percentage_of_sysmon_events_by_event_code(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Percentage of Sysmon events by event code", ".echChart",".euiText") + + def test_sysmon_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Sysmon events", ".echChart",".visError") + + def test_top10_hosts_generating_most_sysmon_data(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Top 10 hosts generating the most Sysmon data", ".tbvChart",".visError") + + + def test_sysmon_events_code_reference(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Sysmon event code reference", ".visualization",".dummyval") + + diff --git a/testing/tests/selenium_tests/cluster/test_user_h_r_dashboard.py b/testing/tests/selenium_tests/cluster/test_user_h_r_dashboard.py new file mode 100644 index 00000000..3ecea47a --- /dev/null +++ b/testing/tests/selenium_tests/cluster/test_user_h_r_dashboard.py @@ -0,0 +1,65 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException +from .lib import dashboard_test_function + +class TestUserHRDashboard: + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_filter_computers(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Filter Computers", ".echChart",".xyChart__empty") + + + def test_filter_users(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Filter Users", ".echChart",".xyChart__empty") + + #@pytest.mark.skip(reason="Skipping this test") + def test_inperson_vs_remote_logons(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "In person vs Remote logons", ".echChart",".euiText") + + def test_user_logoff_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "User logoff events (correlate to logon events)", ".euiDataGrid",".euiDataGrid__noResults") + + #@pytest.mark.skip(reason="Skipping this test") + def test_user_logon_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "User logon events (filter by LogonId)", ".euiDataGrid",".euiDataGrid__noResults") + + def test_select_domain_and_username(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Select domain(s) and username(s)", ".icvContainer",".dummyval") + # The arguement ".dummyval" is being used though it is not a valid selector. 
+ # This panel should always have a visualization so there should never be no data message displayed. + # If there is no visualization rendered or "No Results found" message is displayed for this panel on dashboard, this test should fail which is correct behavior + + #@pytest.mark.skip(reason="Skipping this test") + def test_hr_user_activity_title(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "HR - User activity title", ".visualization",".dummyval") + + + def test_all_user_events_dayofweek_hourofday(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "All User Events by Day of Week, Hour of Day", ".echChart",".dummyval") + + def test_timestamps_by_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Timestamps by Count", ".echChart",".dummyval") + + #@pytest.mark.skip(reason="Skipping this test") + def test_hr_logon_title(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "HR - Logon title", ".visualization",".dummyval") + \ No newline at end of file diff --git a/testing/tests/selenium_tests/cluster/test_user_security_dashboard.py b/testing/tests/selenium_tests/cluster/test_user_security_dashboard.py new file mode 100644 index 00000000..2c01faeb --- /dev/null +++ b/testing/tests/selenium_tests/cluster/test_user_security_dashboard.py @@ -0,0 +1,180 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException +from .lib import dashboard_test_function + +class TestUserSecurityDashboard: + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_search_users(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Search users", ".visualization",".dummyval") + # The arguement ".dummyval" is being used though it is not a valid selector. + # This panel should always have a visualization so there should never be no data message displayed. 
+ # If there is no visualization rendered or "No Results found" message is displayed for this panel on dashboard, this test should fail which is correct behavior + + def test_filter_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Filter hosts", ".tbvChart",".visError") + + def test_search_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Search hosts", ".visualization",".dummyval") + + def test_filter_users(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Filter users", ".euiDataGrid",".euiText") + + def test_security_logons_title(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security - Logons Title", ".visualization",".dummyval") + + def test_security_logons_attempts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security - Logon attempts", ".visualization",".dummyval") + + def test_security_logons_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security - Logon hosts", ".visualization",".dummyval") + + + def test_logon_attempts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Logon attempts", ".echChart",".xyChart__empty") + + + def test_logged_on_computers(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Logged on computers", ".echChart",".euiText") + + def test_user_logon_logoff_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "User Logon & Logoff Events", ".euiDataGrid",".euiDataGrid__noResults") + + def test_security_network_title(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security - Network Title", ".visualization",".dummyval") + + def test_all_network_connections(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "All network connections", ".echChart",".xyChart__empty") + + def test_network_connections_from_nonbrowser_processes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Network connections from non-browser processes", ".tbvChart",".visError") + + def test_network_connections_by_protocol(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Network connection by protocol", ".echChart",".xyChart__empty") + + def test_unusual_network_connections_from_non_browser_processes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Unusual network connections from non-browser processes", ".tbvChart",".visError") + + def test_network_connection_events(self, setup_login, kibana_url, timeout): + driver = setup_login + 
dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Network Connection Events (Sysmon ID 3)", ".euiDataGrid",".euiDataGrid__noResults") + + def test_unusual_network_connections_events_sysmonid_3(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Network Connection Events (Sysmon ID 3)", ".euiDataGrid",".euiDataGrid__noResults") + + def test_security_processes_title(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security - Processes Title", ".visualization",".dummyval") + + def test_spawned_processes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Spawned Processes", ".euiDataGrid",".euiDataGrid__noResults") + + def test_powershell_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Powershell Events", ".visualization",".dummyval") + + def test_powershell_events_over_time(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Powershell events over time", ".echChart",".xyChart__empty") + + def test_powershell_events_by_computer(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Powershell events by computer", ".echChart",".euiText") + + @pytest.mark.skip(reason="Skipping this test") + def test_potentially_suspicious_powershell(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Potentially suspicious powershell", ".needarealvaluehere",".euiDataGrid__noResults") + #This dashboard panel needs test data. 
Currently the panel only gives No Result found + + @pytest.mark.skip(reason="Skipping this test") + def test_powershell_network_connections(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Powershell network connections", ".needarealvaluehere",".euiDataGrid__noResults") + + + def test_security_files_title(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security - Files title", ".visualization",".dummyval") + + @pytest.mark.skip(reason="Skipping this test") + def test_references_to_temporary_files(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "References to temporary files", ".needarealvaluehere",".visError") + + @pytest.mark.skip(reason="Skipping this test") + def test_raw_access_read(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "RawAccessRead (Sysmon Event 9)", ".needarealvaluehere",".euiDataGrid__noResults") + + def test_windows_defender_title(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Security - Windows Defender Title", ".visualization",".dummyval") + + + @pytest.mark.skip(reason="Skipping this test") + def test_av_detections(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "AV Detections (Event 1116)", ".needarealvaluehere",".euiDataGrid__noResults") + + def test_defender_event_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "Defender event count", ".visualization",".dummyval") + + def test_av_hits_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_test_function(driver, kibana_url, timeout, self.dashboard_id, "AV Hits (Count)", ".visualization",".dummyval") + + + + + + + + + + + + + + + + + + + + + + + + + + + + diff --git a/testing/tests/selenium_tests/linux_only/conftest.py b/testing/tests/selenium_tests/linux_only/conftest.py new file mode 100644 index 00000000..52bd88fc --- /dev/null +++ b/testing/tests/selenium_tests/linux_only/conftest.py @@ -0,0 +1,93 @@ +import pytest +import os +from webdriver_manager.chrome import ChromeDriverManager +from selenium.common.exceptions import TimeoutException +from selenium import webdriver +from selenium.webdriver.chrome.service import Service +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By + + +@pytest.fixture(scope="session") +def kibana_host(): + return os.getenv("KIBANA_HOST", "localhost") + +@pytest.fixture(scope="session") +def kibana_port(): + return int(os.getenv("KIBANA_PORT", 443)) + +@pytest.fixture(scope="session") +def kibana_user(): + return os.getenv("KIBANA_USER", "elastic") + +@pytest.fixture(scope="session") +def kibana_password(): + return os.getenv("elastic",os.getenv("KIBANA_PASSWORD", "changeme")) + +@pytest.fixture(scope="session") +def kibana_url(kibana_host, kibana_port): + return f"https://{kibana_host}:{kibana_port}" + +@pytest.fixture(scope="session") +def timeout(): + return int(os.getenv("SELENIUM_TIMEOUT", 30)) + 
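+# NOTE: SELENIUM_MODE selects the browser profile in the driver fixture below:
+# "headless" (the default) runs Chrome without a visible window, while
+# "detached" and "debug" run a visible browser; "debug" additionally skips the
+# driver teardown so the window stays open after the test session ends.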
+@pytest.fixture(scope="session") +def mode(): + return os.getenv("SELENIUM_MODE", "headless") + +@pytest.fixture(scope="session") +def driver(timeout, mode): + options = webdriver.ChromeOptions() + if mode == "detached" or mode == "debug": + options.add_experimental_option("detach", True) + options.add_argument("--ignore-certificate-errors") + options.add_argument("--allow-running-insecure-content") + options.add_argument('--force-device-scale-factor=1.5') + else: + options.add_argument("--headless=new") + options.add_argument("--disable-gpu") + options.add_argument("--window-size=1920,1080") + options.add_argument("--ignore-certificate-errors") + options.add_argument("--no-sandbox") + options.add_argument("--disable-dev-shm-usage") + + s = Service(ChromeDriverManager().install()) + driver = webdriver.Chrome(service=s, options=options) + + yield driver + + if mode != "debug": + driver.stop_client() + driver.close() + driver.quit() + +@pytest.fixture(scope="session") +def login(driver, kibana_url, kibana_user, kibana_password, timeout): + def _login(): + """Login and load the home page""" + + driver.get(kibana_url) + + # Wait for the login page to load + # Check if the current URL contains the login page identifier + login_url_identifier = "/login" + if login_url_identifier in driver.current_url: + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, 'input[name="username"]')) + WebDriverWait(driver, timeout).until(expected_cond) + + # Login + username_input = driver.find_element(By.CSS_SELECTOR, 'input[name="username"]') + username_input.send_keys("elastic") + password_input = driver.find_element(By.CSS_SELECTOR, 'input[name="password"]') + password_input.send_keys(kibana_password) + submit_button = driver.find_element(By.CSS_SELECTOR, 'button[data-test-subj="loginSubmit"]') + submit_button.click() + + # Wait for the home page to load + selector = 'div[data-test-subj="homeApp"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + + return _login \ No newline at end of file diff --git a/testing/tests/selenium_tests/linux_only/move_tests.sh b/testing/tests/selenium_tests/linux_only/move_tests.sh new file mode 100755 index 00000000..e3597de2 --- /dev/null +++ b/testing/tests/selenium_tests/linux_only/move_tests.sh @@ -0,0 +1,48 @@ +#!/bin/bash + +# Check if the Python file is provided as an argument +if [ $# -eq 0 ]; then + echo "Please provide the path to the Python file as an argument." + exit 1 +fi + +# Get the Python file path from the argument +python_file=$1 + +# Check if the Python file exists +if [ ! -f "$python_file" ]; then + echo "The specified Python file does not exist." 
+ exit 1 +fi + +# Find all the class definitions in the Python file +class_names=$(grep -oP '(?<=class )\w+' "$python_file") + +# Iterate over each class name +for class_name in $class_names; do + # Convert the class name to snake case + snake_case_name=$(echo "$class_name" | sed 's/\([A-Z]\)/_\L\1/g;s/^_//') + + # Create a new file with the snake case class name + new_file="${snake_case_name}.py" + + # Add the import statements to the new file + echo "import pytest" > "$new_file" + echo "import os" >> "$new_file" + echo "from selenium.webdriver.support.ui import WebDriverWait" >> "$new_file" + echo "from selenium.webdriver.support import expected_conditions as EC" >> "$new_file" + echo "from selenium.webdriver.common.by import By" >> "$new_file" + echo "from selenium.common.exceptions import NoSuchElementException" >> "$new_file" + echo "" >> "$new_file" # Add an empty line for separation + + # Extract the class and its contents from the original file and append to the new file + sed -n "/class $class_name/,/class\s\+\w\+\s*:/p" "$python_file" | sed '$d' >> "$new_file" + + # Check if the new file is empty + if [ ! -s "$new_file" ]; then + echo "Class '$class_name' not found or empty. Skipping." + rm "$new_file" + else + echo "Extracted class '$class_name' to '$new_file'" + fi +done \ No newline at end of file diff --git a/testing/tests/selenium_tests/linux_only/test_basic_loading.py b/testing/tests/selenium_tests/linux_only/test_basic_loading.py new file mode 100644 index 00000000..bf301df5 --- /dev/null +++ b/testing/tests/selenium_tests/linux_only/test_basic_loading.py @@ -0,0 +1,40 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + +class TestBasicLoading: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + # @pytest.fixture(scope="class", autouse=True) + # def setup_teardown(self, driver): + # yield + # driver.quit() # Clean up the browser (driver) here + + + def test_title(self, setup_login, kibana_url, timeout): + driver = setup_login + driver.get(f"{kibana_url}/app/dashboards") + selector = 'div[data-test-subj="dashboardLandingPage"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + assert driver.title == "Dashboards - Elastic" + + def test_dashboard_menu(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Dashboard Menu" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + diff --git a/testing/tests/selenium_tests/linux_only/test_computer_software_overview_dashboard_lo.py b/testing/tests/selenium_tests/linux_only/test_computer_software_overview_dashboard_lo.py new file mode 100644 index 00000000..000f901f --- /dev/null +++ 
b/testing/tests/selenium_tests/linux_only/test_computer_software_overview_dashboard_lo.py @@ -0,0 +1,39 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + +class TestComputerSoftwareOverviewDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_dashboard_menu(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "33f0d3b0-8b8a-11ea-b1c6-a5bf39283f12" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Dashboard Menu" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_host_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "33f0d3b0-8b8a-11ea-b1c6-a5bf39283f12" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Host Count" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + diff --git a/testing/tests/selenium_tests/linux_only/test_health_check_dashboard_lo.py b/testing/tests/selenium_tests/linux_only/test_health_check_dashboard_lo.py new file mode 100644 index 00000000..cf630b83 --- /dev/null +++ b/testing/tests/selenium_tests/linux_only/test_health_check_dashboard_lo.py @@ -0,0 +1,24 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + +class TestHealthCheckDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_users_seen(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51fe1470-fa59-11e9-bf25-8f92ffa3e3ec" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Users seen" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) diff --git a/testing/tests/selenium_tests/linux_only/test_security_dashboard_security_log_lo.py b/testing/tests/selenium_tests/linux_only/test_security_dashboard_security_log_lo.py new file mode 100644 index 00000000..4f56dca4 --- /dev/null +++ b/testing/tests/selenium_tests/linux_only/test_security_dashboard_security_log_lo.py @@ -0,0 
+1,65 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + +class TestSecurityDashboardSecurityLog: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_security_log_events(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security logs events" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_failed_logon_attempts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Failed logon attempts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_failed_logons_type_codes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Failed logon type codes" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_failed_logon_status_codes(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Failed logon status codes" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + diff --git a/testing/tests/selenium_tests/linux_only/test_sysmon_summary_dashboard_lo.py b/testing/tests/selenium_tests/linux_only/test_sysmon_summary_dashboard_lo.py new file mode 100644 index 00000000..443d0bf1 --- /dev/null +++ 
b/testing/tests/selenium_tests/linux_only/test_sysmon_summary_dashboard_lo.py @@ -0,0 +1,39 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + +class TestSysmonSummaryDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_total_number_of_sysmon_events_found(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "d2c73990-e5d4-11e9-8f1d-73a2ea4cc3ed" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Total number of Sysmon events found" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_sysmon_event_code_reference(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "d2c73990-e5d4-11e9-8f1d-73a2ea4cc3ed" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Sysmon event code reference" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + diff --git a/testing/tests/selenium_tests/linux_only/test_user_h_r_dashboard_lo.py b/testing/tests/selenium_tests/linux_only/test_user_h_r_dashboard_lo.py new file mode 100644 index 00000000..778a21f7 --- /dev/null +++ b/testing/tests/selenium_tests/linux_only/test_user_h_r_dashboard_lo.py @@ -0,0 +1,78 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + +class TestUserHRDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + def test_dashboard_menu(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Dashboard Menu" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_domains_and_usernames(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + 
expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Select domain(s) and username(s)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_all_user_events(self, driver, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "All User Events by Day of Week, Hour of Day" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_timestamps_by_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "618bc5d0-84f8-11ee-9838-ff0db128d8b2" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Timestamps by Count" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + + def test_dashboard_menu(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "51186cd0-e8e9-11e9-9070-f78ae052729a" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Dashboard Menu" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + diff --git a/testing/tests/selenium_tests/linux_only/test_user_security_dashboard_lo.py b/testing/tests/selenium_tests/linux_only/test_user_security_dashboard_lo.py new file mode 100644 index 00000000..83d676fe --- /dev/null +++ b/testing/tests/selenium_tests/linux_only/test_user_security_dashboard_lo.py @@ -0,0 +1,91 @@ +import pytest +import os +from selenium.webdriver.support.ui import WebDriverWait +from selenium.webdriver.support import expected_conditions as EC +from selenium.webdriver.common.by import By +from selenium.common.exceptions import NoSuchElementException + +class TestUserSecurityDashboard: + @pytest.fixture(scope="class") + def setup_login(self, driver, login): + login() + yield driver + + def test_search_users(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + 
expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Search users" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_search_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Search hosts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_security_logon_attempts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security - Logon attempts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_security_logon_hosts(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Security - Logon hosts" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_av_hits(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "AV Hits (Count)" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + + def test_defender_event_count(self, setup_login, kibana_url, timeout): + driver = setup_login + dashboard_id = "e5f203f0-6182-11ee-b035-d5f231e90733" + driver.get(f"{kibana_url}/app/dashboards#/view/{dashboard_id}") + expected_cond = 
EC.presence_of_element_located((By.CLASS_NAME, "react-grid-layout")) + WebDriverWait(driver, timeout).until(expected_cond) + panel_title = "Defender event count" + selector = f'div[data-title="{panel_title}"]' + expected_cond = EC.presence_of_element_located((By.CSS_SELECTOR, selector)) + WebDriverWait(driver, timeout).until(expected_cond) + panel = driver.find_element(By.CSS_SELECTOR, selector) + assert "No results found" not in panel.get_attribute("innerHTML") + diff --git a/testing/v2/development/Dockerfile b/testing/v2/development/Dockerfile new file mode 100644 index 00000000..9402c73e --- /dev/null +++ b/testing/v2/development/Dockerfile @@ -0,0 +1,64 @@ +# Use Ubuntu 22.04 as base image +FROM ubuntu:22.04 +ARG USER_ID=1001 +ARG GROUP_ID=1001 + +# Set environment variable to avoid interactive dialogues during build +ENV DEBIAN_FRONTEND=noninteractive + +# Install necessary APT packages including Python and pip +RUN apt-get update && apt-get install -y \ + lsb-release \ + python3 \ + python3-venv \ + python3-pip \ + zip \ + git \ + curl \ + wget \ + sudo \ + cron \ + freerdp2-x11 \ + pkg-config \ + libcairo2-dev \ + libdbus-1-dev \ + distro-info \ + libgirepository1.0-dev \ + && wget -q "https://packages.microsoft.com/config/ubuntu/$(lsb_release -rs)/packages-microsoft-prod.deb" \ + && dpkg -i packages-microsoft-prod.deb \ + && apt-get update \ + && apt-get install -y powershell \ + && rm -rf /var/lib/apt/lists/* \ + && curl -sL https://aka.ms/InstallAzureCLIDeb | bash \ + && wget https://dl.google.com/linux/direct/google-chrome-stable_current_amd64.deb \ + && apt install -y ./google-chrome-stable_current_amd64.deb \ + && rm -rf google-chrome-stable_current_amd64.deb \ + && sudo apt-get install -f \ + && apt-get clean + +# Install Ansible +RUN python3 -m pip install --upgrade pip \ + && python3 -m pip install ansible + +# Create a user and group 'admin.ackbar' with GID 1001 +RUN groupadd -g $GROUP_ID admin.ackbar \ + && useradd -m -u $USER_ID -g admin.ackbar --badnames admin.ackbar \ + && usermod -aG sudo admin.ackbar + +# Allow 'admin.ackbar' user to run sudo commands without a password +RUN echo "admin.ackbar ALL=(ALL) NOPASSWD: ALL" >> /etc/sudoers + +# Define the base directory as an environment variable +ENV BASE_DIR=/home/admin.ackbar/LME + +# Set work directory +WORKDIR $BASE_DIR + +# Change to non-root privilege +# USER admin.ackbar + +# Set timezone (optional) +ENV TZ=America/New_York + +# Keep the container running (This can be replaced by your application's main process) +CMD ["tail", "-f", "/dev/null"] \ No newline at end of file diff --git a/testing/v2/development/docker-compose.yml b/testing/v2/development/docker-compose.yml new file mode 100644 index 00000000..5daf5757 --- /dev/null +++ b/testing/v2/development/docker-compose.yml @@ -0,0 +1,26 @@ +# Docker Compose file for setting up development environment for LME project. +# +# This file defines two services: +# 1. ubuntu: +# - Builds an Ubuntu container with the specified USER_ID and GROUP_ID arguments. +# - Mounts the parent directory to /lme in the container, allowing access to the LME project. +# - Sets the container name to "v2_ubuntu". +# - Sets the user to the specified HOST_UID and HOST_GID. +# - Runs the command "sleep infinity" to keep the container running indefinitely. +# + +version: '3.8' + +services: + ubuntu: + build: + context: . 
+      args:
+        USER_ID: "${HOST_UID:-1001}"
+        GROUP_ID: "${HOST_GID:-1001}"
+    container_name: v2_ubuntu
+    user: "${HOST_UID:-1001}:${HOST_GID:-1001}"
+    volumes:
+      - ../../../../LME/:/lme
+    command: sleep infinity
\ No newline at end of file
diff --git a/testing/v2/installers/README.md b/testing/v2/installers/README.md new file mode 100644 index 00000000..2a13e8dd --- /dev/null +++ b/testing/v2/installers/README.md @@ -0,0 +1,15 @@
+Each of the installer directories has its own README.
+
+You'll need to follow the steps in [Azure Authentication](/testing/v2/installers/azure/build_azure_linux_network.md#authentication) and
+[Python Setup](/testing/v2/installers/azure/build_azure_linux_network.md#setup) prior to running the steps below.
+
+Quick Start
+
+```bash
+./azure/build_azure_linux_network.py -g your-group-name -s 0.0.0.0 -vs Standard_D8_v4 -l westus -ast 00:00
+./minimega/install.sh lme-user $(cat your-group-name.ip.txt) your-group-name.password.txt
+./ubuntu_qcow_maker/install.sh lme-user $(cat your-group-name.ip.txt) your-group-name.password.txt
+./install_v2/install.sh lme-user $(cat your-group-name.ip.txt) your-group-name.password.txt branch
+```
+
+For example, to build from a specific Ubuntu image:
+
+```bash
+./azure/build_azure_linux_network.py -g lme-cbaxley-m1 -s 0.0.0.0 -vs Standard_D8_v4 -l westus -ast 00:00 -pub Canonical -io 0001-com-ubuntu-server-noble-daily -is 24_04-daily-lts-gen2
+```
diff --git a/testing/v2/installers/azure/build_azure_linux_network.md b/testing/v2/installers/azure/build_azure_linux_network.md new file mode 100644 index 00000000..af8f84ab --- /dev/null +++ b/testing/v2/installers/azure/build_azure_linux_network.md @@ -0,0 +1,136 @@
+- [Authentication](#authentication)
+- [Setup and Run the Script](#setup-and-run-the-script)
+  - [Prerequisites](#prerequisites)
+  - [Setup](#setup)
+  - [Running the Script](#running-the-script)
+  - [Allowed arguments](#allowed-arguments)
+  - [Cleanup](#cleanup)
+
+# Authentication
+
+When running the script outside of an Azure environment, you may be prompted to log in interactively if you haven't authenticated previously. The script uses the `DefaultAzureCredential` or `ClientSecretCredential` from the `azure-identity` library, which follows a specific authentication flow:
+
+1. If the `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` environment variables are set, the script will use them to authenticate using the `ClientSecretCredential`. This is typically used for non-interactive authentication, such as in automated scripts or CI/CD pipelines.
+
+2. If the environment variables are not set, the script falls back to using the `DefaultAzureCredential`. The `DefaultAzureCredential` tries to authenticate using the following methods, in order:
+   - Environment variables: If the `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` environment variables are set, it will use them for authentication.
+   - Managed identity: If the script is running on an Azure VM or Azure Functions with a managed identity enabled, it will use the managed identity for authentication.
+   - Azure CLI: If you have authenticated previously using the Azure CLI (`az login`), it will use the cached credentials from the CLI.
+   - Interactive browser authentication: If none of the above methods succeed, it will open a browser window and prompt you to log in interactively.
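+
+As a rough sketch, the selection between the two credential types looks like the following (illustrative only; `pick_credential` is not the script's actual function name, and this assumes only the `azure-identity` package):
+
+```python
+import os
+
+from azure.identity import ClientSecretCredential, DefaultAzureCredential
+
+def pick_credential():
+    client_id = os.getenv("AZURE_CLIENT_ID")
+    tenant_id = os.getenv("AZURE_TENANT_ID")
+    client_secret = os.getenv("AZURE_CLIENT_SECRET")
+    if client_id and tenant_id and client_secret:
+        # All three variables are set: authenticate as a service principal.
+        return ClientSecretCredential(tenant_id, client_id, client_secret)
+    # Otherwise fall back to the DefaultAzureCredential chain described above
+    # (environment -> managed identity -> Azure CLI -> interactive browser).
+    return DefaultAzureCredential()
+```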
+
+## Avoiding Interactive Login
+
+If you run the script outside of an Azure environment and you haven't authenticated previously using the Azure CLI or set the necessary environment variables, the script will prompt you to log in interactively through a browser window.
+
+To avoid interactive login, you can do one of the following:
+
+1. Set the `AZURE_CLIENT_ID`, `AZURE_TENANT_ID`, and `AZURE_CLIENT_SECRET` environment variables with the appropriate values for your Azure service principal. This allows the script to authenticate using the client secret.
+
+2. Authenticate using the Azure CLI by running `az login` before running the script. This will cache your credentials, and the script will use them for authentication.
+
+If you prefer not to be prompted for interactive login, make sure to set the necessary environment variables or authenticate using the Azure CLI beforehand.
+
+## Environment Variables
+
+The following environment variables can be set to provide authentication credentials:
+
+- `AZURE_CLIENT_ID`: The client ID of your Azure service principal.
+- `AZURE_TENANT_ID`: The tenant ID of your Azure subscription.
+- `AZURE_CLIENT_SECRET`: The client secret associated with your Azure service principal.
+- `AZURE_SUBSCRIPTION_ID`: The subscription ID you want to use for creating the resources.
+
+If these environment variables are set, the script will use them for authentication. Otherwise, it will attempt to use the default Azure credential and retrieve the default subscription ID.
+
+# Setup and Run the Script
+
+## Prerequisites
+
+- Python 3.x installed on your system
+
+## Setup
+
+1. Clone the repository or download the script files to your local machine.
+
+2. Open a terminal or command prompt and navigate to the directory where the script files are located.
+
+3. Create a new virtual environment by running the following command:
+
+   ```bash
+   python -m venv venv
+   ```
+
+   This will create a new virtual environment named `venv` in the current directory.
+
+4. Activate the virtual environment:
+
+   - For Windows:
+     ```
+     venv\Scripts\activate
+     ```
+
+   - For macOS and Linux:
+     ```
+     source venv/bin/activate
+     ```
+
+   You should see `(venv)` prefixed to your terminal prompt, indicating that the virtual environment is active.
+
+5. Install the required packages by running the following command:
+
+   ```
+   pip install -r build_azure_linux_network_requirements.txt
+   ```
+
+   This will install all the necessary packages listed in the `build_azure_linux_network_requirements.txt` file.
+
+## Running the Script
+
+To run the script, use the following command:
+
+```bash
+python build_azure_linux_network.py -g <resource-group> -s 10.1.1.10/32 -ast 21:00
+```
+
+Replace `<resource-group>` with the desired resource group name, and set `-s` to the comma-separated list of CIDR prefixes or IP ranges for allowed sources (the example uses `10.1.1.10/32`).
+
+Make sure you have the necessary authentication credentials set up before running the script.
+
+## Allowed arguments
+| **Parameter**          | **Alias** | **Description**                                                                            | **Required** | **Default** |
+|------------------------|-----------|--------------------------------------------------------------------------------------------------|--------------|---------------------------------|
+| --resource-group       | -g        | Resource group name                                                                        | Yes          |             |
+| --allowed-sources      | -s        | Comma-separated list of CIDR prefixes or IP ranges (XX.XX.XX.XX/YY,XX.XX.XX.XX/YY,etc...) | Yes          |             |
+| --location             | -l        | Location where the cluster will be built.
| No | westus | +| --no-prompt | -y | Run the script with no prompt (useful for automated runs) | No | False | +| --subscription-id | -sid | Azure subscription ID. If not provided, the default subscription ID will be used. | No | | +| --vnet-name | -vn | Virtual network name | No | VNet1 | +| --vnet-prefix | -vp | Virtual network prefix | No | 10.1.0.0/16 | +| --subnet-name | -sn | Subnet name | No | SNet1 | +| --subnet-prefix | -sp | Subnet prefix | No | 10.1.0.0/24 | +| --ls-ip | -ip | IP address for the VM | No | 10.1.0.5 | +| --vm-admin | -u | Admin username for the VM | No | lme-user | +| --machine-name | -m | Name of the VM | No | ubuntu | +| --ports | -p | Ports to open | No | [22] | +| --priorities | -pr | Priorities for the ports | No | [1001] | +| --protocols | -pt | Protocols for the ports | No | ['Tcp'] | +| --vm-size | -vs | Size of the virtual machine | No | Standard_E2d_v4 | +| --image-publisher | -pub | Publisher of the VM image | No | Canonical | +| --image-offer | -io | Offer of the VM image | No | 0001-com-ubuntu-server-jammy | +| --image-sku | -is | SKU of the VM image | No | 22_04-lts-gen2 | +| --image-version | -iv | Version of the VM image | No | latest | +| --os-disk-size-gb | -os | Size of the OS disk in GB | No | 128 | +| --auto-shutdown-time | -ast | Auto-Shutdown time in UTC (HH:MM, e.g. 22:30, 00:00, 19:00). Convert timezone as necessary. | No | | +| --auto-shutdown-email | -ase | Auto-shutdown notification email | No | | + + + +## Cleanup + +When you're done using the script, you can deactivate the virtual environment by running the following command: + +``` +deactivate +``` + +This will deactivate the virtual environment and return you to your normal terminal prompt. diff --git a/testing/v2/installers/azure/build_azure_linux_network.py b/testing/v2/installers/azure/build_azure_linux_network.py new file mode 100755 index 00000000..559397ef --- /dev/null +++ b/testing/v2/installers/azure/build_azure_linux_network.py @@ -0,0 +1,624 @@ +#!/usr/bin/env python3 +import argparse +import os +import string +import random +from azure.identity import DefaultAzureCredential +from azure.mgmt.compute import ComputeManagementClient +from azure.mgmt.devtestlabs import DevTestLabsClient +from azure.mgmt.devtestlabs.models import Schedule +from azure.mgmt.resource import ResourceManagementClient +from azure.mgmt.network import NetworkManagementClient +from azure.mgmt.resource.subscriptions import SubscriptionClient +from datetime import datetime +from pathlib import Path + + +def generate_password(length=12): + uppercase_letters = string.ascii_uppercase + lowercase_letters = string.ascii_lowercase + digits = string.digits + special_chars = string.punctuation + + # Generate the password + password = [] + password.append(random.choice(uppercase_letters)) + password.append(random.choice(lowercase_letters)) + password.append(random.choice(digits)) + password.append(random.choice(special_chars)) + + # Generate the remaining characters + remaining_length = length - 4 + remaining_chars = uppercase_letters + lowercase_letters + digits \ + + special_chars + password.extend(random.choices(remaining_chars, k=remaining_length)) + + # Shuffle the password characters randomly + random.shuffle(password) + + return "".join(password) + + +def get_default_subscription_id(credential=None): + if credential is None: + credential = DefaultAzureCredential() + + """Get the default subscription ID from Azure environment""" + subscription_client = SubscriptionClient(credential) + subscription_list = 
+def get_default_subscription_id(credential=None):
+    """Get the default subscription ID from the Azure environment."""
+    if credential is None:
+        credential = DefaultAzureCredential()
+
+    subscription_client = SubscriptionClient(credential)
+    subscription_list = list(subscription_client.subscriptions.list())
+    if not subscription_list:
+        raise Exception("No Azure subscriptions found")
+
+    # Use the first subscription in the list
+    return subscription_list[0].subscription_id
+
+
+def create_clients(subscription_id):
+    credential = DefaultAzureCredential()
+    if subscription_id is None:
+        subscription_id = get_default_subscription_id(credential)
+    resource_client = ResourceManagementClient(credential, subscription_id)
+    network_client = NetworkManagementClient(credential, subscription_id)
+    compute_client = ComputeManagementClient(credential, subscription_id)
+    devtestlabs_client = DevTestLabsClient(credential, subscription_id)
+    return (resource_client, network_client, compute_client,
+            devtestlabs_client, subscription_id)
+
+
+def check_ports_protocols_and_priorities(ports, priorities, protocols):
+    if len(ports) != len(priorities):
+        print("Priorities and Ports length should be equal!")
+        exit(1)
+    if len(ports) != len(protocols):
+        print("Protocols and Ports length should be equal!")
+        exit(1)
+
+
+def set_network_rules(
+    network_client,
+    resource_group,
+    allowed_sources_list,
+    nsg_name,
+    ports,
+    priorities,
+    protocols,
+):
+    check_ports_protocols_and_priorities(ports, priorities, protocols)
+
+    # Accept either a comma-separated string or a list of CIDR prefixes
+    if isinstance(allowed_sources_list, str):
+        allowed_sources_list = allowed_sources_list.split(",")
+
+    for i in range(len(ports)):
+        port = ports[i]
+        priority = priorities[i]
+        protocol = protocols[i]
+        print(f"\nCreating Network Port {port} rule...")
+
+        nsg_rule_params = {
+            "protocol": protocol,
+            "destination_address_prefix": "*",
+            "access": "Allow",
+            "direction": "Inbound",
+            "source_port_range": "*",
+            "destination_port_range": str(port),
+            "priority": priority,
+            "name": f"Network_Port_Rule_{port}",
+        }
+        # A single source must be passed as source_address_prefix; multiple
+        # sources require the plural source_address_prefixes field.
+        if len(allowed_sources_list) == 1:
+            nsg_rule_params["source_address_prefix"] = allowed_sources_list[0]
+        else:
+            nsg_rule_params["source_address_prefixes"] = allowed_sources_list
+
+        nsg_rule_poller = network_client.security_rules.begin_create_or_update(
+            resource_group_name=resource_group,
+            network_security_group_name=nsg_name,
+            security_rule_name=nsg_rule_params["name"],
+            security_rule_parameters=nsg_rule_params,
+        )
+        nsg_rule = nsg_rule_poller.result()
+        print(f"Network rule '{nsg_rule.name}' created successfully.")
+
+
+def create_public_ip(network_client, resource_group, location, machine_name):
+    print(f"\nCreating public IP address for {machine_name}")
+    unique_dns_name = f"{machine_name}-{random.randint(1000, 9999)}"
+    public_ip_params = {
+        "location": location,
+        "public_ip_allocation_method": "Static",
+        "dns_settings": {
+            "domain_name_label": unique_dns_name
+        },
+    }
+    public_ip_poller = (
+        network_client.public_ip_addresses
+        .begin_create_or_update(
+            resource_group.name,
+            f"{machine_name}-public-ip",
+            public_ip_params
+        )
+    )
+    public_ip = public_ip_poller.result()
+    print(
+        f"Public IP address '{public_ip.name}' with "
+        f"ip {public_ip.ip_address} created successfully."
+ ) + return public_ip + + +def create_network_interface( + network_client, resource_group, location, machine_name, + subnet_id, private_ip_address, public_ip, nsg_id + ): + print(f"\nCreating network interface for {machine_name}...") + nic_params = { + "location": location, + "ip_configurations": [ + { + "name": f"{machine_name}-ipconfig", + "subnet": {"id": subnet_id}, + "private_ip_address": private_ip_address, + "private_ip_allocation_method": "Static", + "public_ip_address": { + "id": public_ip.id + } + } + ], + "network_security_group": { + "id": nsg_id + } + } + nic_poller = network_client.network_interfaces.begin_create_or_update( + resource_group.name, f"{machine_name}-nic", nic_params + ) + nic = nic_poller.result() + print(f"Network interface '{nic.name}' created successfully with associated NSG.") + return nic + + +def set_auto_shutdown( + devtestlabs_client, subscription_id, resource_group_name, location, + vm_name, auto_shutdown_time, auto_shutdown_email + ): + print( + f"\nCreating Auto-Shutdown Rule for {vm_name} " + f"at time {auto_shutdown_time}...") + schedule_name = f"shutdown-computevm-{vm_name}" + + schedule_params = Schedule( + status="Enabled", + task_type="ComputeVmShutdownTask", + daily_recurrence={"time": auto_shutdown_time}, + time_zone_id="UTC", + notification_settings={ + "status": "Enabled" if auto_shutdown_email else "Disabled", + "time_in_minutes": 30, + "webhook_url": None, + "email_recipient": auto_shutdown_email, + }, + target_resource_id=( + f"/subscriptions/{subscription_id}/resourceGroups/" + f"{resource_group_name}/providers/Microsoft.Compute/" + f"virtualMachines/{vm_name}" + ), + location=location, + ) + + devtestlabs_client.global_schedules.create_or_update( + resource_group_name, schedule_name, schedule_params + ) + print(f"Auto-Shutdown Rule for {vm_name} created successfully.") + + +def save_to_parent_directory(filename, content): + script_dir = Path(__file__).resolve().parent + parent_dir = script_dir.parent + file_path = parent_dir / filename + with open(file_path, "w") as file: + file.write(content) + print(f"File saved: {file_path}") + + +# All arguments are keyword arguments +def main( + *, + resource_group: str, + location: str, + allowed_sources: str, + no_prompt: bool, + subscription_id: str = None, + vnet_name: str, + vnet_prefix: str, + subnet_name: str, + subnet_prefix: str, + ls_ip: str, + vm_admin: str, + machine_name: str, + ports: list[int], + priorities: list[int], + protocols: list[str], + vm_size: str, + image_publisher: str, + image_offer: str, + image_sku: str, + image_version: str, + os_disk_size_gb: int, + auto_shutdown_time: str = None, + auto_shutdown_email: str = None, +): + ( + resource_client, + network_client, + compute_client, + devtestlabs_client, + subscription_id + ) = create_clients(subscription_id) + + # Variables used for Azure tags + current_user = os.getenv("USER", "unknown") + today = datetime.now().strftime("%Y-%m-%d") + project = "LME" + + # Validation of Globals + allowed_sources_list = allowed_sources.split(",") + if len(allowed_sources_list) < 1: + print( + "**ERROR**: Variable AllowedSources must " + "be set (set with -AllowedSources or -s)" + ) + exit(1) + + # Confirmation + print("Supplied configuration:\n") + + print(f"Location: {location}") + print(f"Resource group: {resource_group}") + print(f"Allowed sources (IP's): {allowed_sources_list}") + + if not no_prompt: + proceed = input("\nProceed? (Y/n) ") + while proceed.lower() not in ["y", "n"]: + proceed = input("\nProceed? 
(Y/n) ") + + if proceed.lower() == "n": + print("Setup canceled") + exit() + + # Setup resource group + print("\nCreating resource group...") + resource_group_params = { + "location": location, + "tags": { + "user": current_user, + "created_on": today, + "project": project, + }, + } + resource_group = resource_client.resource_groups.create_or_update( + resource_group, resource_group_params + ) + print(f"Resource group '{resource_group.name}' created successfully.") + + # Setup network + print("\nCreating virtual network...") + vnet_params = { + "location": location, + "address_space": {"address_prefixes": [vnet_prefix]}, + "subnets": [{"name": subnet_name, "address_prefix": subnet_prefix}], + "tags": { + "user": current_user, + "created_on": today, + "project": project, + }, + } + vnet_poller = network_client.virtual_networks.begin_create_or_update( + resource_group_name=resource_group.name, + virtual_network_name=vnet_name, + parameters=vnet_params, + ) + vnet = vnet_poller.result() + print(f"Virtual network '{vnet.name}' created successfully.") + + print("\nCreating network security group...") + nsg_params = { + "location": location, + "tags": { + "user": current_user, + "created_on": today, + "project": project, + }, + } + nsg_poller = network_client.network_security_groups.begin_create_or_update( + resource_group_name=resource_group.name, + network_security_group_name="NSG1", + parameters=nsg_params, + ) + nsg = nsg_poller.result() + print(f"Network security group '{nsg.name}' created successfully.") + + set_network_rules( + network_client, + resource_group.name, + allowed_sources, + nsg.name, + ports, + priorities, + protocols, + ) + + + # Create the VM + vm_password = generate_password() + + print( + f"\nWriting {vm_admin} password to {resource_group.name}.password.txt" + ) + save_to_parent_directory( + f"{resource_group.name}.password.txt", vm_password + ) + + subnet_id = ( + f"/subscriptions/{subscription_id}/" + f"resourceGroups/{resource_group.name}/" + f"providers/Microsoft.Network/" + f"virtualNetworks/{vnet_name}/" + f"subnets/{subnet_name}" + ) + + public_ip = create_public_ip( + network_client, resource_group, location, machine_name + ) + + print(f"\nWriting public_ip to {resource_group.name}.ip.txt") + save_to_parent_directory( + f"{resource_group.name}.ip.txt", + public_ip.ip_address + ) + + nic = create_network_interface( + network_client, + resource_group, + location, + machine_name, + subnet_id, + ls_ip, + public_ip, + nsg.id + ) + + print(f"\nCreating {machine_name}...") + ls1_params = { + "location": location, + "hardware_profile": {"vm_size": vm_size}, + "additional_capabilities": { + "nested_virtualization_enabled": True + }, + "storage_profile": { + "image_reference": { + "publisher": image_publisher, + "offer": image_offer, + "sku": image_sku, + "version": image_version, + }, + "os_disk": { + "create_option": "FromImage", + "disk_size_gb": os_disk_size_gb, + }, + }, + "os_profile": { + "computer_name": f"{machine_name}", + "admin_username": vm_admin, + "admin_password": vm_password, + }, + "network_profile": { + "network_interfaces": [ + { + "id": nic.id, + } + ], + }, + "tags": { + "user": current_user, + "created_on": today, + "project": project, + }, + } + ls1_poller = compute_client.virtual_machines.begin_create_or_update( + resource_group_name=resource_group.name, + vm_name=machine_name, + parameters=ls1_params, + ) + ls1 = ls1_poller.result() + print(f"Virtual machine '{ls1.name}' created successfully.") + + # Configure Auto-Shutdown + if auto_shutdown_time: 
+ set_auto_shutdown( + devtestlabs_client, + subscription_id, + resource_group.name, + location, + machine_name, + auto_shutdown_time, + auto_shutdown_email + ) + + print("\nVM login info:") + print(f"ResourceGroup: {resource_group.name}") + print(f"PublicIP: {public_ip.ip_address}") + print(f"Username: {vm_admin}") + print(f"Password: {vm_password}") + print("SAVE THE ABOVE INFO\n") + + print("Done.") + + +if __name__ == "__main__": + parser = argparse.ArgumentParser(description="Setup Testbed for LME") + parser.add_argument( + "-l", + "--location", + default="westus", + help="Location where the cluster will be built. Default westus", + ) + parser.add_argument( + "-g", "--resource-group", required=True, help="Resource group name" + ) + parser.add_argument( + "-s", + "--allowed-sources", + required=True, + help="XX.XX.XX.XX/YY,XX.XX.XX.XX/YY,etc... Comma-separated " + "list of CIDR prefixes or IP ranges", + ) + parser.add_argument( + "-y", + "--no-prompt", + action="store_true", + help="Run the script with no prompt (useful for automated runs)", + ) + parser.add_argument( + "-sid", + "--subscription-id", + help="Azure subscription ID. If not provided, " + "the default subscription ID will be used.", + ) + parser.add_argument( + "-vn", + "--vnet-name", + default="VNet1", + help="Virtual network name. Default: VNet1", + ) + parser.add_argument( + "-vp", + "--vnet-prefix", + default="10.1.0.0/16", + help="Virtual network prefix. Default: 10.1.0.0/16", + ) + parser.add_argument( + "-sn", "--subnet-name", + default="SNet1", + help="Subnet name. Default: SNet1" + ) + parser.add_argument( + "-sp", + "--subnet-prefix", + default="10.1.0.0/24", + help="Subnet prefix. Default: 10.1.0.0/24", + ) + parser.add_argument( + "-ip", + "--ls-ip", + default="10.1.0.5", + help="IP address for the VM. Default: 10.1.0.5", + ) + parser.add_argument( + "-u", + "--vm-admin", + default="lme-user", + help="Admin username for the VM. Default: lme-user", + ) + parser.add_argument( + "-m", "--machine-name", + default="ubuntu", + help="Name of the VM. Default: ubuntu" + ) + parser.add_argument( + "-p", + "--ports", + type=int, + nargs="+", + default=[22, 443], + help="Ports to open. Default: [22, 443]", + ) + parser.add_argument( + "-pr", + "--priorities", + type=int, + nargs="+", + default=[1001, 1002], + help="Priorities for the ports. Default: [1001, 1002]", + ) + parser.add_argument( + "-pt", + "--protocols", + nargs="+", + default=["Tcp", "Tcp"], + help="Protocols for the ports. Default: ['Tcp']", + ) + parser.add_argument( + "-vs", + "--vm-size", + default="Standard_E2d_v4", + help="Size of the virtual machine. Default: Standard_E2d_v4", + # Standard_D8_v4 for testing minimega and a linux install of LME + # Standard_D16d_v4 is the smallest VM size that we can get away + # with for minimega to include all the machines + ) + parser.add_argument( + "-pub", + "--image-publisher", + default="Canonical", + help="Publisher of the VM image. Default: Canonical", + ) + parser.add_argument( + "-io", + "--image-offer", + default="0001-com-ubuntu-server-jammy", + help="Offer of the VM image. Default: 0001-com-ubuntu-server-jammy", + ) + parser.add_argument( + "-is", + "--image-sku", + default="22_04-lts-gen2", + help="SKU of the VM image. Default: 22_04-lts-gen2", + ) + # ubuntu-24_04-lts + parser.add_argument( + "-iv", + "--image-version", + default="latest", + help="Version of the VM image. 
Default: latest", + ) + parser.add_argument( + "-os", + "--os-disk-size-gb", + type=int, + default=128, + help="Size of the OS disk in GB. Default: 128", + ) + parser.add_argument( + "-ast", + "--auto-shutdown-time", + help="Auto-Shutdown time in UTC (HH:MM, e.g. 22:30, 00:00, 19:00). " + "Convert timezone as necessary.", + ) + parser.add_argument( + "-ase", + "--auto-shutdown-email", + help="Auto-shutdown notification email", + ) + + args = parser.parse_args() + check_ports_protocals_and_priorities( + args.ports, args.priorities, args.protocols + ) + + main( + resource_group=args.resource_group, + location=args.location, + allowed_sources=args.allowed_sources, + no_prompt=args.no_prompt, + subscription_id=args.subscription_id, + vnet_name=args.vnet_name, + vnet_prefix=args.vnet_prefix, + subnet_name=args.subnet_name, + subnet_prefix=args.subnet_prefix, + ls_ip=args.ls_ip, + vm_admin=args.vm_admin, + machine_name=args.machine_name, + ports=args.ports, + priorities=args.priorities, + protocols=args.protocols, + vm_size=args.vm_size, + image_publisher=args.image_publisher, + image_offer=args.image_offer, + image_sku=args.image_sku, + image_version=args.image_version, + os_disk_size_gb=args.os_disk_size_gb, + auto_shutdown_time=args.auto_shutdown_time, + auto_shutdown_email=args.auto_shutdown_email, + ) diff --git a/testing/v2/installers/azure/build_azure_linux_network_requirements.txt b/testing/v2/installers/azure/build_azure_linux_network_requirements.txt new file mode 100644 index 00000000..466ceba1 --- /dev/null +++ b/testing/v2/installers/azure/build_azure_linux_network_requirements.txt @@ -0,0 +1,6 @@ +azure-identity>=1.7.0 +azure-mgmt-resource>=21.0.0 +azure-mgmt-network>=20.0.0 +azure-mgmt-compute>=27.0.0 +azure-mgmt-subscription>=3.0.0 +azure-mgmt-devtestlabs==9.0.0 \ No newline at end of file diff --git a/testing/v2/installers/install_v2/install.sh b/testing/v2/installers/install_v2/install.sh new file mode 100755 index 00000000..5921e957 --- /dev/null +++ b/testing/v2/installers/install_v2/install.sh @@ -0,0 +1,42 @@ +#!/usr/bin/env bash + +set -e + +# Check if the required arguments are provided +if [ $# -lt 3 ]; then + echo "Usage: $0 " + exit 1 +fi + +# Set the remote server details from the command-line arguments +user=$1 +hostname=$2 +password_file=$3 +branch=$4 + +# Store the original working directory +ORIGINAL_DIR="$(pwd)" + +# Get the directory of the script +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +# Change to the parent directory of the script +cd "$SCRIPT_DIR/.." 
+
+# Copy the SSH key to the remote machine
+./lib/copy_ssh_key.sh $user $hostname $password_file
+
+echo "Installing ansible"
+ssh -o StrictHostKeyChecking=no $user@$hostname 'sudo apt-get update && sudo apt-get -y install ansible'
+
+
+# Need to set up so we can check out a particular branch or pull down a release
+echo "Checking out code"
+if [ -n "$branch" ]; then
+    ssh -o StrictHostKeyChecking=no $user@$hostname "cd ~ && rm -rf LME && git clone https://github.com/cisagov/LME.git && cd LME && git checkout -t origin/${branch}"
+else
+    ssh -o StrictHostKeyChecking=no $user@$hostname "cd ~ && rm -rf LME && git clone https://github.com/cisagov/LME.git"
+fi
+echo "Code cloned to ~/LME on $hostname"
+
+echo "Running ansible installer"
+ssh -o StrictHostKeyChecking=no $user@$hostname "cd ~/LME && cp config/example.env config/lme-environment.env && ansible-playbook scripts/install_lme_local.yml"
+
+# Change back to the original directory
+cd "$ORIGINAL_DIR"
diff --git a/testing/v2/installers/install_v2/install_in_minimega.sh b/testing/v2/installers/install_v2/install_in_minimega.sh
new file mode 100755
index 00000000..c46d5c37
--- /dev/null
+++ b/testing/v2/installers/install_v2/install_in_minimega.sh
@@ -0,0 +1,69 @@
+#!/bin/bash
+
+# Initialize variables
+VM_NAME=""
+VM_USER=""
+MAX_ATTEMPTS=30
+SLEEP_INTERVAL=10
+
+# Function to print usage
+usage() {
+    echo "Usage: $0 -n <vm_name> -u <vm_user>"
+    echo "  -n  Specify the VM name"
+    echo "  -u  Specify the VM user"
+    exit 1
+}
+
+# Parse command-line arguments
+while getopts "n:u:" opt; do
+    case $opt in
+        n) VM_NAME="$OPTARG" ;;
+        u) VM_USER="$OPTARG" ;;
+        *) usage ;;
+    esac
+done
+
+# Check if required arguments are provided
+if [[ -z "$VM_NAME" || -z "$VM_USER" ]]; then
+    echo "Error: Both VM name and VM user must be provided."
+    usage
+fi
+
+get_ip() {
+    /opt/minimega/bin/minimega -e .json true .filter name="$VM_NAME" vm info | jq -r '.[].Data[].Networks[].IP4'
+}
+
+echo "Waiting for IP assignment for VM: $VM_NAME (User: $VM_USER)"
+
+IP=""
+for ((i=1; i<=MAX_ATTEMPTS; i++)); do
+    IP=$(get_ip)
+
+    if [[ -z "$IP" || "$IP" == "null" ]]; then
+        echo "Attempt $i: No IP assigned yet. Waiting $SLEEP_INTERVAL seconds..."
+
+        if [[ $i -eq $MAX_ATTEMPTS ]]; then
+            echo "Timeout: Failed to get IP for $VM_NAME after $MAX_ATTEMPTS attempts."
+            exit 1
+        fi
+
+        sleep $SLEEP_INTERVAL
+    else
+        echo "The IP of $VM_NAME is $IP"
+        break
+    fi
+done
+
+echo "VM Name: $VM_NAME"
+echo "VM User: $VM_USER"
+echo "VM IP: $IP"
+
+ssh -o StrictHostKeyChecking=no $VM_USER@$IP 'sudo apt-get update && sudo apt-get -y install ansible'
+
+echo "Ansible installed successfully on $VM_NAME"
+
+ssh -o StrictHostKeyChecking=no $VM_USER@$IP 'cd ~ && git clone https://github.com/cisagov/LME.git'
+
+# Run the ansible installer here once it is merged to LME
+
+
diff --git a/testing/v2/installers/lib/copy_ssh_key.sh b/testing/v2/installers/lib/copy_ssh_key.sh
new file mode 100755
index 00000000..f1f7a36e
--- /dev/null
+++ b/testing/v2/installers/lib/copy_ssh_key.sh
@@ -0,0 +1,31 @@
+#!/usr/bin/env bash
+
+# Check if the required arguments are provided
+if [ $# -lt 3 ]; then
+    echo "Usage: $0 <user> <hostname> <password_file>"
+    exit 1
+fi
+
+# Check if sshpass is installed
+if ! command -v sshpass &> /dev/null; then
+    echo "Error: sshpass is not installed. Please install sshpass and try again."
+    exit 1
+fi
+
+# Set the remote server details from the command-line arguments
+user=$1
+hostname=$2
+password_file=$3
+
+# Set the SSH key path
+ssh_key_path="$HOME/.ssh/id_rsa"
+
+# Generate an SSH key non-interactively if it doesn't exist
+if [ ! -f "$ssh_key_path" ]; then
-f "$ssh_key_path" ]; then + ssh-keygen -t rsa -N "" -f "$ssh_key_path" <</dev/null 2>&1 +fi +echo password_file $password_file ssh_key_path $ssh_key_path +ls $password_file +ls $ssh_key_path +# Use sshpass with the password file to copy the SSH key to the remote server +sshpass -f "$password_file" ssh-copy-id -o StrictHostKeyChecking=no -i "$ssh_key_path.pub" $user@$hostname diff --git a/testing/v2/installers/minimega/README.md b/testing/v2/installers/minimega/README.md new file mode 100644 index 00000000..372abc55 --- /dev/null +++ b/testing/v2/installers/minimega/README.md @@ -0,0 +1,67 @@ +# MinimegaSetup Scripts + +This repository contains a collection of scripts to automate the setup and installation of Minimega, a powerful tool for orchestrating and managing large-scale virtual machine experiments. + +## Scripts Overview + +1. `copy_ssh_key.sh`: Copies an SSH key to a remote server. +1. `create_bridge.sh`: Creates a network bridge for Minimega. +1. `install.sh`: Main installation script for setting up Minimega on a remote server. +1. `install_local.sh`: Installs Minimega on the local machine. +1. `set_gopath.sh`: Sets up the GOPATH for Go programming. +1. `update_packages.sh`: Updates and installs necessary packages. +1. `fix_dnsmasq.sh`: Stops and disables the dnsmasq service. + +## Usage + +### Remote Installation + +To install Minimega on a remote server, use the `install.sh` script: + +```bash +./install.sh +``` + +This script will: +- Copy the SSH key to the remote server +- Copy the Minimega directory to the remote server +- Update packages and reboot the server +- Set up DNS, GOPATH, and install Minimega +- Configure and start Minimega and Miniweb services +- Create a network bridge + +### Local Installation +Note: I don't have a machine to test this on but it follows the same pattern as the remote script. + +To install Minimega on your local machine, use the `install_local.sh` script: + + +```bash +sudo ./install_local.sh +``` + +This script performs similar operations as the remote installation but on the local machine. + +## Individual Scripts + +- `copy_ssh_key.sh`: Copies an SSH key to a remote server. Usage: `./copy_ssh_key.sh ` +- `create_bridge.sh`: Creates a network bridge named `mega_bridge`. +- `set_gopath.sh`: Sets up the GOPATH for a specified user. Usage: `sudo ./set_gopath.sh ` +- `update_packages.sh`: Updates the system and installs necessary packages. Run with sudo. +- `fix_dnsmasq.sh`: Stops and disables the dnsmasq service. Run with sudo. + +## Requirements + +- These scripts are designed to run on a Debian-based Linux system. +- sudo privileges are required for many operations. +- For remote installation, SSH access to the target server is necessary. + +## Notes + +- The `install.sh` script will reboot the remote server during the installation process. +- Make sure to review and understand each script before running, especially when using sudo privileges. +- The `password_file` used in `copy_ssh_key.sh` and `install.sh` should contain the SSH password for the remote server. + +## Disclaimer + +These scripts make significant changes to system configurations. Always test in a safe environment before using in production. 
\ No newline at end of file
diff --git a/testing/v2/installers/minimega/check_dpkg_lock.sh b/testing/v2/installers/minimega/check_dpkg_lock.sh
new file mode 100755
index 00000000..af43a8af
--- /dev/null
+++ b/testing/v2/installers/minimega/check_dpkg_lock.sh
@@ -0,0 +1,31 @@
+#!/bin/bash
+
+# Function to check if the lock file exists and is held by a process
+check_lock() {
+    if [ -f /var/lib/dpkg/lock-frontend ]; then
+        pid=$(fuser /var/lib/dpkg/lock-frontend 2>/dev/null)
+        if [ -n "$pid" ]; then
+            echo "Lock is held by process $pid: $(ps -o comm= -p $pid)"
+            return 0
+        fi
+    fi
+    return 1
+}
+
+echo "Waiting for dpkg lock to be released..."
+
+# Loop until the lock is released
+while check_lock; do
+    echo "Still waiting... Will check again in 10 seconds."
+    sleep 10
+done
+
+echo "Lock has been released. You can now run your apt commands."
+
+# Run the command passed as arguments to this script
+if [ $# -gt 0 ]; then
+    echo "Executing command: $@"
+    "$@"
+else
+    echo "No command specified. Exiting."
+fi
\ No newline at end of file
diff --git a/testing/v2/installers/minimega/copy_ssh_key.sh b/testing/v2/installers/minimega/copy_ssh_key.sh
new file mode 100755
index 00000000..f1f7a36e
--- /dev/null
+++ b/testing/v2/installers/minimega/copy_ssh_key.sh
@@ -0,0 +1,31 @@
+#!/usr/bin/env bash
+
+# Check if the required arguments are provided
+if [ $# -lt 3 ]; then
+    echo "Usage: $0 <user> <hostname> <password_file>"
+    exit 1
+fi
+
+# Check if sshpass is installed
+if ! command -v sshpass &> /dev/null; then
+    echo "Error: sshpass is not installed. Please install sshpass and try again."
+    exit 1
+fi
+
+# Set the remote server details from the command-line arguments
+user=$1
+hostname=$2
+password_file=$3
+
+# Set the SSH key path
+ssh_key_path="$HOME/.ssh/id_rsa"
+
+# Generate an SSH key non-interactively if it doesn't exist
+if [ ! -f "$ssh_key_path" ]; then
-f "$ssh_key_path" ]; then + ssh-keygen -t rsa -N "" -f "$ssh_key_path" <</dev/null 2>&1 +fi +echo password_file $password_file ssh_key_path $ssh_key_path +ls $password_file +ls $ssh_key_path +# Use sshpass with the password file to copy the SSH key to the remote server +sshpass -f "$password_file" ssh-copy-id -o StrictHostKeyChecking=no -i "$ssh_key_path.pub" $user@$hostname diff --git a/testing/v2/installers/minimega/create_bridge.sh b/testing/v2/installers/minimega/create_bridge.sh new file mode 100755 index 00000000..b0a36331 --- /dev/null +++ b/testing/v2/installers/minimega/create_bridge.sh @@ -0,0 +1,4 @@ +#!/bin/sh +set -e +sudo ovs-vsctl add-br mega_bridge +sudo ovs-vsctl set bridge mega_bridge stp_enable=false \ No newline at end of file diff --git a/testing/v2/installers/minimega/fix_dnsmasq.sh b/testing/v2/installers/minimega/fix_dnsmasq.sh new file mode 100755 index 00000000..bcdd2485 --- /dev/null +++ b/testing/v2/installers/minimega/fix_dnsmasq.sh @@ -0,0 +1,3 @@ +#!/usr/bin/env bash +systemctl stop dnsmasq +systemctl disable dnsmasq diff --git a/testing/v2/installers/minimega/install.sh b/testing/v2/installers/minimega/install.sh new file mode 100755 index 00000000..5ac33d63 --- /dev/null +++ b/testing/v2/installers/minimega/install.sh @@ -0,0 +1,77 @@ +#!/usr/bin/env bash + +set -e + +# Check if the required arguments are provided +if [ $# -lt 3 ]; then + echo "Usage: $0 " + exit 1 +fi + +# Set the remote server details from the command-line arguments +user=$1 +hostname=$2 +password_file=$3 + +# Store the original working directory +ORIGINAL_DIR="$(pwd)" + +# Get the directory of the script +SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)" + +# Change to the parent directory of the script +cd "$SCRIPT_DIR/.." + +# Copy the SSH key to the remote machine +./minimega/copy_ssh_key.sh $user $hostname $password_file + +# Copy the minimega directory to the remote machine +scp -r ./minimega $user@$hostname:/home/$user + +# Run the update_packages.sh script on the remote machine this reboots the machine +ssh $user@$hostname "cd /home/$user/minimega && sudo ./update_packages.sh" + +# Reboot the server to apply the changes +ssh $user@$hostname "sudo shutdown -r now" || true + +echo "Server is rebooting..." + +# Loop until the server is reachable via SSH +echo "Waiting for the server to come back..." +while ! ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no $user@$hostname "exit" >/dev/null 2>&1; do + sleep 5 +done +echo "Server is back online." + +# Additional check: Verify that necessary services are running +echo "Verifying necessary services are running..." +while ! ssh -o ConnectTimeout=5 -o StrictHostKeyChecking=no $user@$hostname "ls" >/dev/null 2>&1; do + sleep 5 +done + +echo "Necessary services are running." 
+
+# Fix the DNS settings
+ssh $user@$hostname "cd /home/$user/minimega && sudo ./fix_dnsmasq.sh"
+
+# Set the GOPATH
+ssh $user@$hostname "cd /home/$user/minimega && sudo ./set_gopath.sh '$user'"
+
+# Install minimega
+ssh $user@$hostname "wget -q https://github.com/sandia-minimega/minimega/releases/download/2.9/minimega-2.9.deb && sudo apt install -y ./minimega-2.9.deb"
+
+# Set up the minimega service and start it
+ssh $user@$hostname "cd /home/$user/minimega && sudo cp minimega.service /etc/systemd/system/ && sudo systemctl daemon-reload && sudo systemctl enable minimega && sudo systemctl start minimega"
+
+# Set up the miniweb service and start it
+ssh $user@$hostname "cd /home/$user/minimega && sudo cp miniweb.service /etc/systemd/system/ && sudo systemctl daemon-reload && sudo systemctl enable miniweb && sudo systemctl start miniweb"
+
+# Set the path for minimega
+ssh $user@$hostname "echo 'export PATH=\$PATH:/opt/minimega/bin/' | sudo tee -a /root/.bashrc"
+ssh $user@$hostname "echo 'export PATH=\$PATH:/opt/minimega/bin/' >> /home/$user/.bashrc"

+# Create the bridge
+ssh $user@$hostname "cd /home/$user/minimega && sudo ./create_bridge.sh"
+
+# Change back to the original directory
+cd "$ORIGINAL_DIR"
diff --git a/testing/v2/installers/minimega/install_local.sh b/testing/v2/installers/minimega/install_local.sh
new file mode 100755
index 00000000..d4a57f58
--- /dev/null
+++ b/testing/v2/installers/minimega/install_local.sh
@@ -0,0 +1,29 @@
+#!/usr/bin/env bash
+user=$(whoami)
+set -e
+
+sudo ./update_packages.sh
+
+# Fix the DNS settings
+sudo ./fix_dnsmasq.sh
+
+# Set the GOPATH
+sudo ./set_gopath.sh $user
+
+# Install minimega
+wget -O /tmp/minimega-2.9.deb https://github.com/sandia-minimega/minimega/releases/download/2.9/minimega-2.9.deb
+
+sudo apt install -y /tmp/minimega-2.9.deb
+
+echo 'export PATH=$PATH:/opt/minimega/bin/' | sudo tee -a /root/.bashrc
+
+# Set up the service and start minimega and miniweb services
+sudo cp minimega.service miniweb.service /etc/systemd/system/ && sudo systemctl daemon-reload
+
+sudo systemctl enable minimega && sudo systemctl start minimega
+
+sudo systemctl enable miniweb && sudo systemctl start miniweb
+
+sudo ./create_bridge.sh
+
+sudo ./fix_dnsmasq.sh
\ No newline at end of file
diff --git a/testing/v2/installers/minimega/minimega.service b/testing/v2/installers/minimega/minimega.service
new file mode 100644
index 00000000..5c7b27b6
--- /dev/null
+++ b/testing/v2/installers/minimega/minimega.service
@@ -0,0 +1,11 @@
+[Unit]
+Description=minimega
+After=network.target
+
+[Service]
+ExecStart=/opt/minimega/bin/minimega -nostdin
+Restart=always
+WorkingDirectory=/opt/minimega
+
+[Install]
+WantedBy=multi-user.target
\ No newline at end of file
diff --git a/testing/v2/installers/minimega/miniweb.service b/testing/v2/installers/minimega/miniweb.service
new file mode 100644
index 00000000..621e90b8
--- /dev/null
+++ b/testing/v2/installers/minimega/miniweb.service
@@ -0,0 +1,11 @@
+[Unit]
+Description=miniweb
+After=network.target
+
+[Service]
+ExecStart=/opt/minimega/bin/miniweb -level debug -logfile /var/log/miniweb.log -root /opt/minimega/web/web/
+Restart=always
+WorkingDirectory=/opt/minimega
+
+[Install]
+WantedBy=multi-user.target
diff --git a/testing/v2/installers/minimega/set_gopath.sh b/testing/v2/installers/minimega/set_gopath.sh
new file mode 100755
index 00000000..0a843aa8
--- /dev/null
+++ b/testing/v2/installers/minimega/set_gopath.sh
@@ -0,0 +1,11 @@
+#!/usr/bin/env bash
+
+user=$1
+
+echo "export GOPATH=/home/$user/work" >> /home/$user/.bashrc
+echo "export GOROOT=/usr/lib/go" >> /home/$user/.bashrc
+echo 'export PATH=$PATH:/usr/lib/go/bin' >> /home/$user/.bashrc
+
+echo 'export GOPATH=$HOME/work' >> ~/.bashrc
+echo "export GOROOT=/usr/lib/go" >> ~/.bashrc
+echo 'export PATH=$PATH:/usr/lib/go/bin' >> ~/.bashrc
\ No newline at end of file
diff --git a/testing/v2/installers/minimega/update_packages.sh b/testing/v2/installers/minimega/update_packages.sh
new file mode 100755
index 00000000..7f8c9c4b
--- /dev/null
+++ b/testing/v2/installers/minimega/update_packages.sh
@@ -0,0 +1,48 @@
+#!/usr/bin/env bash
+
+if [[ $EUID -ne 0 ]]; then
+    echo "This script must be run with sudo or as root."
+    exit 1
+fi
+
+export DEBIAN_FRONTEND=noninteractive
+apt-get update
+
+# Get Ubuntu version
+ubuntu_version=$(lsb_release -rs)
+major_version=$(echo $ubuntu_version | cut -d. -f1)
+
+# Common packages for both versions
+common_packages=(
+    libpcap-dev
+    libreadline-dev
+    qemu-kvm
+    openvswitch-switch
+    dnsmasq
+    bird
+    build-essential
+    tmux
+    curl
+    wget
+    nano
+    git
+    unzip
+    golang
+    jq
+    qemu-utils
+    libguestfs-tools
+)
+
+# Check Ubuntu version and install appropriate packages
+if [ "$major_version" -lt 24 ]; then
+    echo "Ubuntu version is below 24. Installing packages for Ubuntu $ubuntu_version"
+    ./check_dpkg_lock.sh apt-get install -y "${common_packages[@]}" qemu
+else
+    echo "Ubuntu version is 24 or above. Installing packages for Ubuntu $ubuntu_version"
+    ./check_dpkg_lock.sh apt-get install -y "${common_packages[@]}" \
+        qemu-system \
+        qemu-user \
+        qemu-user-static \
+        qemu-utils \
+        qemu-block-extra
+fi
\ No newline at end of file
diff --git a/testing/v2/installers/ubuntu_qcow_maker/README.md b/testing/v2/installers/ubuntu_qcow_maker/README.md
new file mode 100644
index 00000000..1a3e8806
--- /dev/null
+++ b/testing/v2/installers/ubuntu_qcow_maker/README.md
@@ -0,0 +1,94 @@
+# Ubuntu QCOW Maker
+
+This project contains a set of scripts to create and manage Ubuntu QCOW2 images and virtual machines using Minimega. The main purpose is to simplify the process of setting up and running Ubuntu VMs on a remote machine.
+
+## Quick Start
+
+To set up everything on a remote machine, use the `install.sh` script:
+
+```bash
+./install.sh <user> <hostname> <password_file>
+```
+
+Replace `<user>`, `<hostname>`, and `<password_file>` with appropriate values for your remote machine.
+
+## Script Descriptions
+
+1. `install.sh`: Main installation script that sets up the environment on a remote machine.
+2. `create_ubuntu_qcow.sh`: Creates an Ubuntu QCOW2 image with cloud-init configuration.
+3. `create_vm_from_qcow.sh`: Creates a VM from the QCOW2 image with customizable options.
+4. `create_tap.sh`: Creates a TAP interface for networking with customizable options.
+5. `iptables.sh`: Sets up iptables rules for network connectivity with configurable interfaces.
+6. `clear_cloud_config.sh`: Cleans up cloud-init artifacts from the image, with options for mount path and image location.
+7. `get_ip_of_machine.sh`: Retrieves the IP address of a VM with a configurable number of attempts (see the example after this list).
+8. `wait_for_login.sh`: Waits for the VM to become accessible via SSH, with customizable timeout and interval.
+9. `remove_test_files.sh`: Removes temporary files created during the process.
+10. `setup_dnsmasq.sh`: Sets up dnsmasq for DHCP and DNS services with customizable IP ranges.
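+
+For instance, once a VM is running, its address can be fetched by name (an illustrative invocation; `ubuntu-runner` is the default name used by `create_vm_from_qcow.sh`):
+
+```bash
+./get_ip_of_machine.sh ubuntu-runner
+```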
+
+## Prerequisites
+
+- Minimega installed on the remote machine
+- SSH access to the remote machine
+- Sufficient permissions to run scripts with sudo
+- `cloud-image-utils` package (installed by the script if not present)
+- `jq` command-line JSON processor (used in some scripts)
+
+## Usage
+
+1. Clone this repository to your local machine.
+2. Ensure that the scripts have execute permissions:
+   ```bash
+   chmod +x *.sh
+   ```
+3. Run the `install.sh` script with appropriate parameters:
+   ```bash
+   ./install.sh <user> <hostname> <password_file>
+   ```
+
+This will set up the environment on the remote machine, create the QCOW2 image, and launch a VM.
+
+## Customization
+
+You can customize the setup by editing the following scripts or by passing their command-line options (a combined example appears at the end of this README):
+
+- `create_ubuntu_qcow.sh`: Adjust VM specifications (memory, CPUs) or cloud-init configuration.
+- `create_vm_from_qcow.sh`: Modify VM settings for the final VM. Use `-h` or `--help` to see available options.
+- `create_tap.sh`: Customize the TAP interface name and IP address using the `-t`/`--tap` and `-i`/`--ip` options.
+- `iptables.sh`: Customize network settings and firewall rules by specifying the WAN and INTERNAL interfaces as arguments.
+- `clear_cloud_config.sh`: Customize the mount path and disk image location using the `-m`/`--mount-path` and `-i`/`--image` options.
+- `setup_dnsmasq.sh`: Customize IP ranges for DHCP using the `-s`/`--start-ip`, `-r`/`--range-start`, and `-e`/`--range-end` options.
+
+## Troubleshooting
+
+If you encounter issues:
+
+1. If you encounter network issues, check the output of `iptables.sh` for connectivity test results, and verify the network settings and firewall rules.
+2. Use `get_ip_of_machine.sh` to retrieve the IP address of a VM if needed.
+3. The `wait_for_login.sh` script can be used to verify when a VM is ready for SSH access. It includes a configurable number of attempts and sleep interval.
+4. If you're having DHCP or DNS issues, check the dnsmasq configuration set by `setup_dnsmasq.sh`.
+5. Check the Minimega logs for any errors.
+6. Ensure all prerequisites are installed and up-to-date.
+7. Use the `--help` option with scripts that support it for usage information.
+
+## Cleanup
+
+To remove temporary files created during the process, run:
+
+```bash
+./remove_test_files.sh
+```
+
+## Note
+
+This project assumes you have Minimega installed and properly configured on the remote machine. Make sure you have the necessary permissions and that Minimega is running before using these scripts.
+
+## Security Considerations
+
+- The scripts use SSH key-based authentication for increased security.
+- Ensure that the `password_file` used with `install.sh` is stored securely and deleted after use.
+- Review and adjust the iptables rules in `iptables.sh` to match your security requirements.
+- When using `setup_dnsmasq.sh`, ensure that the IP ranges are appropriate for your network and don't conflict with existing DHCP servers.
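+
+As a combined example, a customized run on the remote host might create the network pieces and a larger VM like this (illustrative values; the defaults for each option are documented in the scripts themselves):
+
+```bash
+sudo ./create_tap.sh -t 100 -i 10.0.0.1/24
+sudo ./setup_dnsmasq.sh -s 10.0.0.1 -r 10.0.0.2 -e 10.0.0.254
+sudo ./iptables.sh eth0 mega_tap0
+sudo ./create_vm_from_qcow.sh -n ubuntu-runner-2 -c 4 -m 4096
+```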
diff --git a/testing/v2/installers/ubuntu_qcow_maker/clear_cloud_config.sh b/testing/v2/installers/ubuntu_qcow_maker/clear_cloud_config.sh new file mode 100755 index 00000000..1d6eea89 --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/clear_cloud_config.sh @@ -0,0 +1,77 @@ +#!/bin/bash +set -e + +# Default values +MOUNT_PATH="/mnt/disk_image" +DISK_IMAGE="/home/lme-user/ubuntu_qcow_maker/jammy-server-cloudimg-amd64.img" + +# Function to print usage +print_usage() { + echo "Usage: $0 [OPTIONS]" + echo "Options:" + echo " -m, --mount-path PATH Specify the mount path (default: $MOUNT_PATH)" + echo " -i, --image PATH Specify the path to the disk image (default: $DISK_IMAGE)" + echo " -h, --help Show this help message" +} + +# Parse command-line options +while [[ $# -gt 0 ]]; do + case $1 in + -m|--mount-path) + MOUNT_PATH="$2" + shift 2 + ;; + -i|--image) + DISK_IMAGE="$2" + shift 2 + ;; + -h|--help) + print_usage + exit 0 + ;; + *) + echo "Unknown option: $1" + print_usage + exit 1 + ;; + esac +done + +echo "Using mount path: $MOUNT_PATH" +echo "Using disk image: $DISK_IMAGE" + +sudo mkdir -p $MOUNT_PATH + +# Mount the image +sudo guestmount -a "$DISK_IMAGE" -m /dev/sda1 $MOUNT_PATH + +# Remove cloud-init artifacts +sudo rm -rf $MOUNT_PATH/var/lib/cloud/* + +# Remove the file that indicates cloud-init has already run +sudo rm -f $MOUNT_PATH/etc/cloud/cloud-init.disabled + +# Set up a default name server +sudo sed -i 's/#DNS=/DNS=8.8.8.8/g' $MOUNT_PATH/etc/systemd/resolved.conf + +# Truncate the machine-id file +sudo truncate -s 0 $MOUNT_PATH/etc/machine-id + +# Remove the file that stores the instance ID +sudo rm -f $MOUNT_PATH/var/lib/dbus/machine-id + +# Modify the netplan configuration created by cloud-init +NETPLAN_FILE=$MOUNT_PATH/etc/netplan/50-cloud-init.yaml +NEW_CONTENT=$(cat << EOF +network: + ethernets: + ens1: + dhcp4: true + dhcp6: true + version: 2 +EOF +) +echo "$NEW_CONTENT" | sudo tee "$NETPLAN_FILE" > /dev/null + +# Unmount the image +sudo umount $MOUNT_PATH diff --git a/testing/v2/installers/ubuntu_qcow_maker/create_tap.sh b/testing/v2/installers/ubuntu_qcow_maker/create_tap.sh new file mode 100755 index 00000000..ce4e977c --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/create_tap.sh @@ -0,0 +1,26 @@ +#!/usr/bin/env bash + +# Default values +TAP_NAME="100" +IP_ADDRESS="10.0.0.1/24" + +# Parse command line arguments +while [[ $# -gt 0 ]]; do + case $1 in + -t|--tap) + TAP_NAME="$2" + shift 2 + ;; + -i|--ip) + IP_ADDRESS="$2" + shift 2 + ;; + *) + echo "Unknown argument: $1" + exit 1 + ;; + esac +done + +# Execute the minimega command with the provided or default arguments +sudo /opt/minimega/bin/minimega -e tap create "$TAP_NAME" ip "$IP_ADDRESS" diff --git a/testing/v2/installers/ubuntu_qcow_maker/create_ubuntu_qcow.sh b/testing/v2/installers/ubuntu_qcow_maker/create_ubuntu_qcow.sh new file mode 100755 index 00000000..aed08bf1 --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/create_ubuntu_qcow.sh @@ -0,0 +1,152 @@ +#!/bin/bash + +set -e + +# Check if the script is run as root +if [ "$(id -u)" -ne 0 ]; then + echo "This script must be run as root" + exit 1 +fi + +# Set variables +export VM_NAME="ubuntu-builder" +#export VM_NAME="ubuntu-runner" +export IMG_URL="https://cloud-images.ubuntu.com/jammy/current/jammy-server-cloudimg-amd64.img" +export IMG_NAME="jammy-server-cloudimg-amd64.img" +export MEMORY="2048" # Memory size in MB, adjust as needed +export CPUS="2" # Number of CPUs, adjust as needed +export QMP_TIMEOUT="30s" # QMP timeout in 
seconds, adjust as needed + +get_vm_ip() { + #minimega -e .json true .filter name="$VM_NAME" vm info | jq -r '.[].Data[].Networks[].IP4' + /opt/minimega/bin/minimega -e .json true vm info | jq -r ".[] | select(.Data[].Name == \"$VM_NAME\") | .Data[].Networks[].IP4" +} + +# Path for the SSH keys +SSH_KEY_PATH="$HOME/.ssh/id_rsa" +# Check if SSH key already exists +if [ ! -f "$SSH_KEY_PATH" ]; then + echo "SSH key not found, generating a new one..." + ssh-keygen -t rsa -b 2048 -f "$SSH_KEY_PATH" -N "" -C "ubuntu-vm" +fi + +# Download the image if it doesn't exist +if [ ! -f "$IMG_NAME" ]; then + echo "Downloading image, this may take a while..." + wget -q $IMG_URL -O $IMG_NAME + echo "Image downloaded" +fi + +# Resize the downloaded image +./resize_qcow.sh + +# Install cloud-init package if not already installed +if ! command -v cloud-localds &> /dev/null; then + echo "cloud-localds tool not found, installing cloud-image-utils..." + sudo apt-get update + sudo apt-get install -y cloud-image-utils +fi + +# Create user-data file for cloud-init +cat > user-data < /dev/null; then + # Start minimega in the background if not running + /opt/minimega/bin/minimega & + # Give minimega a moment to start up + sleep 2 +fi + +# Create the MM file with the VM configuration +MM_FILE_PATH="$(pwd)/$VM_NAME.mm" +cat > "$MM_FILE_PATH" <&2 + exit 1 +fi + +# Set default values +VM_NAME="ubuntu-runner" +IMG_NAME="jammy-server-cloudimg-amd64.img" +MEMORY="2048" +CPUS="2" +QMP_TIMEOUT="30s" + +# Parse command line arguments +while [[ $# -gt 0 ]]; do + case $1 in + -n|--name) + VM_NAME="$2" + shift 2 + ;; + -i|--image) + IMG_NAME="$2" + shift 2 + ;; + -m|--memory) + MEMORY="$2" + shift 2 + ;; + -c|--cpus) + CPUS="$2" + shift 2 + ;; + -t|--timeout) + QMP_TIMEOUT="$2" + shift 2 + ;; + -h|--help) + print_usage + exit 0 + ;; + *) + echo "Unknown option: $1" >&2 + print_usage + exit 1 + ;; + esac +done + +# Export variables +export VM_NAME +export IMG_NAME +export MEMORY +export CPUS +export QMP_TIMEOUT + +# Path for the SSH keys +SSH_KEY_PATH="$HOME/.ssh/id_rsa" +# Check if SSH key already exists +if [ ! -f "$SSH_KEY_PATH" ]; then + echo "SSH key not found, generating a new one..." + ssh-keygen -t rsa -b 2048 -f "$SSH_KEY_PATH" -N "" -C "ubuntu-vm" +fi + +# Create the MM file with the VM configuration +MM_FILE_PATH="$(pwd)/$VM_NAME.mm" +cat > "$MM_FILE_PATH" <&2 + exit 1 +fi + +# Create, configure, and launch the VM using the MM file +/opt/minimega/bin/minimega -e "read $MM_FILE_PATH" + +echo "VM $VM_NAME has been created and started." diff --git a/testing/v2/installers/ubuntu_qcow_maker/get_ip_of_machine.sh b/testing/v2/installers/ubuntu_qcow_maker/get_ip_of_machine.sh new file mode 100755 index 00000000..2716b38c --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/get_ip_of_machine.sh @@ -0,0 +1,25 @@ +#!/bin/bash + +VM_NAME="$1" +MAX_ATTEMPTS=30 +SLEEP_INTERVAL=10 + +get_ip() { + /opt/minimega/bin/minimega -e .json true .filter name="$VM_NAME" vm info | jq -r '.[].Data[].Networks[].IP4' +} + +echo "Waiting for IP assignment for VM: $VM_NAME" + +for ((i=1; i<=MAX_ATTEMPTS; i++)); do + IP=$(get_ip) + + if [[ -n "$IP" && "$IP" != "null" ]]; then + echo "The IP of $VM_NAME is $IP" + exit 0 + fi + + echo "Attempt $i: No IP assigned yet. Waiting $SLEEP_INTERVAL seconds..." + sleep $SLEEP_INTERVAL +done + +echo "Timeout: Failed to get IP for $VM_NAME after $MAX_ATTEMPTS attempts." 
diff --git a/testing/v2/installers/ubuntu_qcow_maker/install.sh b/testing/v2/installers/ubuntu_qcow_maker/install.sh
new file mode 100755
index 00000000..3243949b
--- /dev/null
+++ b/testing/v2/installers/ubuntu_qcow_maker/install.sh
@@ -0,0 +1,62 @@
+#!/usr/bin/env bash
+set -e
+
+# Function to print usage
+print_usage() {
+    echo "Usage: $0 <user> <hostname> <password_file> [num_cpus] [memory_mb]"
+    echo "Required parameters:"
+    echo "  <user>: The username for the remote server"
+    echo "  <hostname>: The hostname or IP address of the remote server"
+    echo "  <password_file>: The file containing the password for the remote server"
+    echo "Optional parameters:"
+    echo "  [num_cpus]: Number of CPUs for the VM (default: 2)"
+    echo "  [memory_mb]: Amount of memory in MB for the VM (default: 2048)"
+}
+
+# Check if all required arguments are provided
+if [ $# -lt 3 ]; then
+    print_usage
+    exit 1
+fi
+
+# Set the remote server details from the command-line arguments
+user=$1
+hostname=$2
+password_file=$3
+
+# Set default values for CPU and memory
+num_cpus=${4:-2}
+memory_mb=${5:-2048}
+
+# Store the original working directory
+ORIGINAL_DIR="$(pwd)"
+
+# Get the directory of the script
+SCRIPT_DIR="$(cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
+
+# Change to the parent directory of the script
+cd "$SCRIPT_DIR/.."
+
+# Copy the SSH key to the remote machine
+./lib/copy_ssh_key.sh $user $hostname $password_file
+
+# Copy the qcow maker directory to the remote machine
+scp -r ./ubuntu_qcow_maker $user@$hostname:/home/$user
+
+# Create the Ubuntu QCOW2 image on the remote machine
+ssh $user@$hostname "cd /home/$user/ubuntu_qcow_maker && sudo ./create_ubuntu_qcow.sh"
+
+# Create a tap interface on the remote machine
+ssh $user@$hostname "cd /home/$user/ubuntu_qcow_maker && sudo ./create_tap.sh"
+
+# Setup dnsmasq on the remote machine
+ssh $user@$hostname "cd /home/$user/ubuntu_qcow_maker && sudo ./setup_dnsmasq.sh"
+
+# Set up the iptables rules on the remote machine
+ssh $user@$hostname "cd /home/$user/ubuntu_qcow_maker && sudo ./iptables.sh"
+
+# Create the VM on the remote machine with the specified CPU and memory
+ssh $user@$hostname "cd /home/$user/ubuntu_qcow_maker && sudo ./create_vm_from_qcow.sh -c $num_cpus -m $memory_mb"
+
+# Change back to the original directory
+cd "$ORIGINAL_DIR"
diff --git a/testing/v2/installers/ubuntu_qcow_maker/iptables.sh b/testing/v2/installers/ubuntu_qcow_maker/iptables.sh
new file mode 100755
index 00000000..23b01af6
--- /dev/null
+++ b/testing/v2/installers/ubuntu_qcow_maker/iptables.sh
@@ -0,0 +1,47 @@
+#!/bin/bash
+
+# Default values
+WAN=${1:-eth0}
+INTERNAL=${2:-mega_tap0}
+
+echo "Using WAN interface: $WAN"
+echo "Using INTERNAL interface: $INTERNAL"
+
+# Enable IP forwarding
+sysctl -w net.ipv4.ip_forward=1
+
+# Flush existing rules
+iptables -F
+iptables -t nat -F
+
+# Set up NAT
+iptables -t nat -A POSTROUTING -o $WAN -j MASQUERADE
+
+# Allow all forwarding from internal network to WAN (both TCP and UDP)
+iptables -A FORWARD -i $INTERNAL -o $WAN -j ACCEPT
+
+# Allow established and related incoming connections
+iptables -A FORWARD -i $WAN -o $INTERNAL -m state --state RELATED,ESTABLISHED -j ACCEPT
+
+echo "Firewall rules have been updated."
+
+# Check VM internet connectivity
+VM_IP=$(ip addr show $INTERNAL | grep -oP '(?<=inet\s)\d+(\.\d+){3}')
+
+if [ -z "$VM_IP" ]; then
+    echo "Could not determine VM IP address. Please check manually."
+else
+    echo "Checking internet connectivity from VM ($VM_IP)..."
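+    # NOTE: VM_IP here is the host-side address of the tap interface, so the
+    # tests below exercise NAT and forwarding from that interface rather than
+    # from inside a guest VM.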
+ if ping -c 3 -I $VM_IP 8.8.8.8 > /dev/null 2>&1; then + echo "Internet connectivity test successful." + else + echo "Internet connectivity test failed. Please check your configuration." + fi + + echo "Testing DNS resolution..." + if nslookup -timeout=5 google.com > /dev/null 2>&1; then + echo "DNS resolution test successful." + else + echo "DNS resolution test failed. Please check your DNS configuration." + fi +fi \ No newline at end of file diff --git a/testing/v2/installers/ubuntu_qcow_maker/launch_multiple_vms.sh b/testing/v2/installers/ubuntu_qcow_maker/launch_multiple_vms.sh new file mode 100755 index 00000000..58f77f55 --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/launch_multiple_vms.sh @@ -0,0 +1,23 @@ +#!/bin/bash + +# Check if an argument is provided, otherwise use default value of 1 +NUM_VMS=${1:-1} + +# Validate that NUM_VMS is a positive integer +if ! [[ "$NUM_VMS" =~ ^[1-9][0-9]*$ ]]; then + echo "Error: Please provide a positive integer for the number of VMs." + echo "Usage: $0 [number_of_vms]" + exit 1 +fi + +echo "Creating $NUM_VMS VM(s)..." + +for i in $(seq 1 $NUM_VMS) +do + VM_NAME="ubuntu-runner-$i" + echo "Creating VM: $VM_NAME" + sudo ./create_vm_from_qcow.sh -n $VM_NAME + sleep 10 # Wait a bit between VM creations +done + +echo "All $NUM_VMS VM(s) created. Use 'minimega vm info' to see their status and IP addresses." \ No newline at end of file diff --git a/testing/v2/installers/ubuntu_qcow_maker/remove_test_files.sh b/testing/v2/installers/ubuntu_qcow_maker/remove_test_files.sh new file mode 100755 index 00000000..caa1970b --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/remove_test_files.sh @@ -0,0 +1,6 @@ +#!/usr/bin/env bash +# Use this one for testing +#rm -rf jammy-server-cloudimg-amd64.img seed.qcow2 ubuntu-builder.mm user-data + +# We want to save the jammy image for future use +rm -rf seed.qcow2 ubuntu-builder.mm user-data diff --git a/testing/v2/installers/ubuntu_qcow_maker/resize_fs.sh b/testing/v2/installers/ubuntu_qcow_maker/resize_fs.sh new file mode 100755 index 00000000..4936523b --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/resize_fs.sh @@ -0,0 +1,7 @@ +#!/bin/bash +sudo parted /dev/sda ---pretend-input-tty <&2; usage ;; + : ) echo "Invalid option: $OPTARG requires an argument" 1>&2; usage ;; + esac +done + +# Set variables to default values if not provided +IMAGE_PATH=${IMAGE_PATH:-$DEFAULT_IMAGE_PATH} +DESIRED_SIZE=${DESIRED_SIZE:-$DEFAULT_SIZE} + +# Check if the image file exists +if [ ! -f "$IMAGE_PATH" ]; then + echo "Error: Image file $IMAGE_PATH does not exist." + exit 1 +fi + +# Get the current size of the image in bytes +CURRENT_SIZE=$(qemu-img info "$IMAGE_PATH" --output=json | jq -r '.["virtual-size"]') +DESIRED_SIZE_BYTES=$(to_bytes $DESIRED_SIZE) + +if [ $CURRENT_SIZE -eq $DESIRED_SIZE_BYTES ]; then + echo "Disk image is already $DESIRED_SIZE. No resize needed." +elif [ $CURRENT_SIZE -gt $DESIRED_SIZE_BYTES ]; then + echo "Error: Current size ($CURRENT_SIZE bytes) is larger than desired size ($DESIRED_SIZE_BYTES bytes). Shrinking the image is not supported." + exit 1 +else + echo "Resizing disk image to $DESIRED_SIZE" + qemu-img resize "$IMAGE_PATH" "$DESIRED_SIZE" + if [ $? 
-eq 0 ]; then + echo "Disk image successfully resized to $DESIRED_SIZE" + else + echo "Error: Failed to resize disk image" + exit 1 + fi +fi + +echo "Current disk image size:" +qemu-img info "$IMAGE_PATH" | grep 'virtual size' \ No newline at end of file diff --git a/testing/v2/installers/ubuntu_qcow_maker/setup_dnsmasq.sh b/testing/v2/installers/ubuntu_qcow_maker/setup_dnsmasq.sh new file mode 100755 index 00000000..4688422d --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/setup_dnsmasq.sh @@ -0,0 +1,48 @@ +#!/bin/bash + +# Default values +START_IP="10.0.0.1" +RANGE_START="10.0.0.2" +RANGE_END="10.0.0.254" + +# Function to print usage +print_usage() { + echo "Usage: $0 [OPTIONS]" + echo "Options:" + echo " -s, --start-ip IP Set the start IP (default: $START_IP)" + echo " -r, --range-start IP Set the range start IP (default: $RANGE_START)" + echo " -e, --range-end IP Set the range end IP (default: $RANGE_END)" + echo " -h, --help Display this help message" +} + +# Parse command line arguments +while [[ $# -gt 0 ]]; do + case $1 in + -s|--start-ip) + START_IP="$2" + shift 2 + ;; + -r|--range-start) + RANGE_START="$2" + shift 2 + ;; + -e|--range-end) + RANGE_END="$2" + shift 2 + ;; + -h|--help) + print_usage + exit 0 + ;; + *) + echo "Unknown option: $1" + print_usage + exit 1 + ;; + esac +done + +# Set up dnsmasq for all VMs +/opt/minimega/bin/minimega -e "dnsmasq start $START_IP $RANGE_START $RANGE_END" + +echo "dnsmasq has been set up for the IP range $RANGE_START to $RANGE_END" \ No newline at end of file diff --git a/testing/v2/installers/ubuntu_qcow_maker/ubuntu-runner.mm b/testing/v2/installers/ubuntu_qcow_maker/ubuntu-runner.mm new file mode 100644 index 00000000..869dd40a --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/ubuntu-runner.mm @@ -0,0 +1,9 @@ +clear vm config +shell sleep 10 +vm config memory 2048 +vm config vcpus 2 +vm config disk /home/cbaxley/src/LME/testing/v2/installers/ubuntu_qcow_maker/jammy-server-cloudimg-amd64.img +vm config snapshot true +vm config net 100 +vm launch kvm ubuntu-runner +vm start ubuntu-runner diff --git a/testing/v2/installers/ubuntu_qcow_maker/wait_for_login.sh b/testing/v2/installers/ubuntu_qcow_maker/wait_for_login.sh new file mode 100755 index 00000000..c6d3867b --- /dev/null +++ b/testing/v2/installers/ubuntu_qcow_maker/wait_for_login.sh @@ -0,0 +1,60 @@ +#!/bin/bash + +# VM name +VM_NAME="ubuntu-builder" # Replace with your actual VM name + +# SSH user +SSH_USER="vmuser" # Replace with the appropriate username + +# Path to SSH key (if using key-based authentication) +SSH_KEY_PATH="$HOME/.ssh/id_rsa" # Adjust this path as needed + +# Maximum number of attempts to get IP and SSH +MAX_ATTEMPTS=30 +SLEEP_INTERVAL=10 + +get_vm_ip() { + #minimega -e .json true .filter name="$VM_NAME" vm info | jq -r '.[].Data[].Networks[].IP4' + /opt/minimega/bin/minimega -e .json true vm info | jq -r ".[] | select(.Data[].Name == \"$VM_NAME\") | .Data[].Networks[].IP4" +} + +wait_for_ssh() { + local ip=$1 + for i in $(seq 1 $MAX_ATTEMPTS); do + if ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 -i "$SSH_KEY_PATH" "${SSH_USER}@${ip}" exit 2>/dev/null; then + echo "SSH connection established." + return 0 + fi + echo "Attempt $i: Waiting for SSH to become available..." + sleep $SLEEP_INTERVAL + done + echo "Timed out waiting for SSH connection." + return 1 +} + +# Main loop +for attempt in $(seq 1 $MAX_ATTEMPTS); do + echo "Attempt $attempt: Getting VM IP..." 
+ IP=$(get_vm_ip) + echo $IP + + if [[ -n "$IP" && "$IP" != "null" ]]; then + echo "Got IP: $IP. Waiting for SSH..." + if wait_for_ssh "$IP"; then + echo "Successfully connected to VM at $IP." + echo "Sleeping to wait for config to finish" + sleep 60 + ssh -o StrictHostKeyChecking=no -i "$SSH_KEY_PATH" "${SSH_USER}@${IP}" "echo 'Builder VM is ready'" + exit 0 + else + echo "Failed to establish SSH connection." + exit 1 + fi + fi + + echo "No IP found. Waiting before next attempt..." + sleep $SLEEP_INTERVAL +done + +echo "Failed to get VM IP after $MAX_ATTEMPTS attempts." +exit 1