
add quickstart, fix ADMIN_USER -> ADMIN_LOGIN #61

Merged
merged 1 commit into from
Jun 28, 2024
Conversation

@butonic (Member) commented May 17, 2024

I added a quickstart with examples to get people up and running with a seed run, a simple test, and the 6m and 1h ramping configs.
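For orientation, a run of one of the bundled configs might be invoked roughly like this. This is a sketch, not the quickstart itself: apart from ADMIN_LOGIN (which this PR renames from ADMIN_USER) the variable names and values below are assumptions, so check the quickstart for the authoritative list.

```shell
# Sketch of a quickstart-style run; names besides ADMIN_LOGIN are assumptions.
export BASE_URL=https://localhost:9200   # hypothetical: the instance under test
export ADMIN_LOGIN=admin                 # renamed from ADMIN_USER in this PR
export ADMIN_PASSWORD=admin              # hypothetical
k6 run packages/k6-tests/artifacts/koko-platform-000-mixed-ramping-k6.js
```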

cc @fschade @dragotin

The 6m example produced this on my dev machine:

          /\      |‾‾| /‾‾/   /‾‾/   
     /\  /  \     |  |/  /   /  /    
    /  \/    \    |     (   /   ‾‾\  
   /          \   |  |\  \ |  (‾)  | 
  / __________ \  |__| \__\ \_____/ .io

  execution: local
     script: packages/k6-tests/artifacts/koko-platform-000-mixed-ramping-k6.js
     output: -

  scenarios: (100.00%) 8 scenarios, 75 max VUs, 6m30s max duration (incl. graceful stop):
           * add_remove_tag_100: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: add_remove_tag_100, gracefulStop: 30s)
           * create_remove_group_share_090: Up to 5 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: create_remove_group_share_090, gracefulStop: 30s)
           * create_space_080: Up to 1 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: create_space_080, gracefulStop: 30s)
           * create_upload_rename_delete_folder_and_file_040: Up to 10 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: create_upload_rename_delete_folder_and_file_040, gracefulStop: 30s)
           * download_050: Up to 10 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: download_050, gracefulStop: 30s)
           * navigate_file_tree_020: Up to 20 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: navigate_file_tree_020, gracefulStop: 30s)
           * sync_client_110: Up to 20 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: sync_client_110, gracefulStop: 30s)
           * user_group_search_070: Up to 4 looping VUs for 6m0s over 3 stages (gracefulRampDown: 30s, exec: user_group_search_070, gracefulStop: 30s)


     ✓ authn -> logonResponse - status
     ✓ authn -> authorizeResponse - status
     ✓ authn -> accessTokenResponse - status
     ✓ client -> role.getMyDrives - status
     ✓ client -> resource.getResourceProperties - status
     ✓ client -> resource.createResource - status
     ✓ client -> resource.downloadResource - status
     ✓ client -> resource.uploadResource - status
     ✓ client -> resource.moveResource - status
     ✓ client -> resource.deleteResource - status
     ✓ client -> tag.getTags - status -- (SKIPPED)
     ✓ client -> tag.createTag - status -- (SKIPPED)
     ✗ client -> share.createShare - status
      ↳  0% — ✓ 0 / ✗ 23
     ✓ client -> tag.addTagToResource - status
     ✓ client -> tag.removeTagToResource - status
     ✗ client -> share.deleteShare - status
      ↳  0% — ✓ 0 / ✗ 23
     ✓ client -> search.searchForSharees - status
     ✓ client -> application.createDrive - status
     ✓ client -> drive.deactivateDrive - status
     ✓ client -> drive.deleteDrive - status

     checks.........................: 98.76% ✓ 3683     ✗ 46  
     data_received..................: 1.6 GB 4.2 MB/s
     data_sent......................: 832 MB 2.1 MB/s
     http_req_blocked...............: avg=98.94µs  min=200ns   med=278ns    max=10.13ms  p(90)=542ns    p(95)=632ns   
     http_req_connecting............: avg=4.45µs   min=0s      med=0s       max=3.19ms   p(90)=0s       p(95)=0s      
     http_req_duration..............: avg=64.56ms  min=28.57ms med=45.77ms  max=955.8ms  p(90)=80.35ms  p(95)=154.89ms
       { expected_response:true }...: avg=64.78ms  min=28.57ms med=45.76ms  max=955.8ms  p(90)=80.66ms  p(95)=155.71ms
     http_req_failed................: 1.24%  ✓ 46       ✗ 3635
     http_req_receiving.............: avg=2.39ms   min=29µs    med=82.24µs  max=571.39ms p(90)=133.04µs p(95)=248.82µs
     http_req_sending...............: avg=821.88µs min=36.84µs med=100.11µs max=360.79ms p(90)=148.68µs p(95)=250.47µs
     http_req_tls_handshaking.......: avg=90.8µs   min=0s      med=0s       max=9.88ms   p(90)=0s       p(95)=0s      
     http_req_waiting...............: avg=61.35ms  min=28.45ms med=45.45ms  max=953.84ms p(90)=76.34ms  p(95)=128.96ms
     http_reqs......................: 3681   9.438425/s
     iteration_duration.............: avg=6.78s    min=2.03s   med=2.04s    max=1m6s     p(90)=30.05s   p(95)=30.21s  
     iterations.....................: 3020   7.74356/s
     vus............................: 3      min=0      max=75
     vus_max........................: 75     min=16     max=75


running (6m30.0s), 00/75 VUs, 3020 complete and 12 interrupted iterations
add_remove_tag_100             ✓ [======================================] 1/5 VUs    6m0s
create_remove_group_share_090  ✓ [======================================] 1/5 VUs    6m0s
create_space_080               ✓ [======================================] 0/1 VUs    6m0s
create_upload_rename_delete... ✓ [======================================] 00/10 VUs  6m0s
download_050                   ✓ [======================================] 00/10 VUs  6m0s
navigate_file_tree_020         ✓ [======================================] 00/20 VUs  6m0s
sync_client_110                ✓ [======================================] 00/20 VUs  6m0s
user_group_search_070          ✓ [======================================] 0/4 VUs    6m0s

The sharing tests are currently broken because the space needs to be manually shared with the users. IIRC @fschade has a working branch that uses a group to make the setup easier. So, not part of this PR.

Signed-off-by: Jörn Friedrich Dreyer <jfd@butonic.de>
@fschade (Collaborator) commented May 17, 2024

Night shift? Nice, thanks! I cannot read it right now; I'll be back at my computer tomorrow.

In the meantime, the group share feature is part of https://github.com/owncloud/cdperf/tree/next; it also contains some more goodies.

I hope to find some time soon to give it the final polish.

Thanks for taking care 🤗

@butonic butonic self-assigned this Jun 26, 2024
@butonic butonic requested a review from fschade June 26, 2024 07:45
@fschade (Collaborator) left a comment


Thanks, pal. There is another PR open which changes a few things again; I will update the README later if anything related lands there.

@fschade fschade merged commit 1d80add into main Jun 28, 2024