
CI pool generation using snapshotPool is not respecting maxAllocation value #1416

Open
tabishalmas-sage opened this issue Oct 4, 2023 · 4 comments
Labels
analysis To be decided on how to solution/fix

Comments

@tabishalmas-sage

Describe the bug
When using a snapshot pool to generate the CI pool, new scratch orgs keep being created in the CI pool without respecting the maxAllocation value of the CI pool.
For example, we have 5 available scratch orgs in the CI pool and maxAllocation is also 5, so when we run the CI generation workflow again it should not generate any new orgs. But when we add the snapshotPool tag to the config, it keeps generating 5 new orgs each time, and the pool grows to 10, 15, 20, ... on every run. If we remove the snapshotPool tag from the config, this behaviour does not occur.

To Reproduce
Steps to reproduce the behavior:

  • Create a releaseConfigFile for the snapshot pool with "maxAllocation": 20 and "tag": "snapshot"
  • Create a releaseConfigFile for the CI pool with "maxAllocation": 5 and "snapshotPool": "snapshot" (a sketch of both configs follows this list)
  • Run the GitHub workflow to generate the snapshot pool
  • Run another GitHub Actions workflow to generate the CI pool, and run it multiple times
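
For reference, here is a minimal sketch of the two pool configs described above. The keys maxAllocation, tag and snapshotPool come from this issue; the file names and the expiry and configFilePath values are assumptions and may differ in your setup.

Snapshot pool config (e.g. snapshot-pool-config.json):

{
  "tag": "snapshot",
  "maxAllocation": 20,
  "expiry": 10,
  "configFilePath": "config/project-scratch-def.json"
}

CI pool config (e.g. ci-pool-config.json):

{
  "tag": "ci",
  "maxAllocation": 5,
  "expiry": 2,
  "configFilePath": "config/project-scratch-def.json",
  "snapshotPool": "snapshot"
}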

Expected behavior
After running the CI pool generation job repeatedly, the CI pool should not grow beyond maxAllocation.
For example, if you run the CI job 3 times, the number of scratch orgs should remain 5 as specified by maxAllocation in the CI pool config, not 15.

Platform Details (please complete the following information):

  • OS: macOS
  • sfpowerscripts CLI Version: @dxatscale/sfpowerscripts/23.4.2 darwin-arm64 node-v20.5.1
  • Salesforce CLI (sfdx cli) Version: @salesforce/cli/2.7.11 darwin-arm64 node-v20.5.1
  • CI Platform: GitHub

Additional context
Run sfp pool:list after running the CI job multiple times, with a known tag and devhub alias, for example:

sfp pool:list --tag ci --targetdevhubusername HubOrg

sfpowerscripts will print out a list of scratch orgs in the pool exceeding the maxAllocation capacity, similar to:

======== Scratch org Details ========
Used Scratch Orgs in the pool: 0
Unused Scratch Orgs in the Pool : 15 

Scratch Orgs being provisioned in the Pool : 5

@github-actions bot added the analysis (To be decided on how to solution/fix) label on Oct 4, 2023
@azlam-abdulsalam
Contributor

Thanks @tabishalmas-sage .. Will look into this

@azlam-abdulsalam
Contributor

I think this is working as intended. The second run is not creating a pool; it is consuming from the snapshot pool. In the initial stage you created 20, which is indeed the capacity. On subsequent runs (3 attempts) you are requesting 5 on each run, because the maxAllocation in the second stage is basically saying consume only 5 from the snapshot pool per run.
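
For illustration, this is how the numbers would add up under that reading (the depletion of the snapshot pool shown here is an assumption for the example, not something confirmed in this thread):

CI run 1: 5 orgs taken from snapshot pool → ci pool = 5,  snapshot pool = 15
CI run 2: 5 orgs taken from snapshot pool → ci pool = 10, snapshot pool = 10
CI run 3: 5 orgs taken from snapshot pool → ci pool = 15, snapshot pool = 5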

Does that make sense? @alanjaouen @tabishalmas-sage

@cjbradshaw
Contributor

Hi @azlam-abdulsalam In the example the ci pool has a max capacity of 5, but when using the snapshot pool it goes well beyond that: each run of replenishing the ci pool seems to add an additional 5. Is that expected behaviour?
I know @dieffrei has an article about snapshots (https://medium.com/@dieffrei/dx-scale-scratch-org-pools-1dbecfcda8f8) which has helped us understand how these might work. Is there any other documentation that describes this?
We'd assumed the snapshot pool is just a feeder pool for our "ci" and "dev" pools, and that those scratch org pool configs would still work as before, just with their orgs based off the snapshot, i.e. the maxAllocation of the "ci" pool would never be exceeded (5 in this example). But that is not what we're seeing.

@azlam-abdulsalam
Contributor

@cjbradshaw I see how this can be problematic; we need to revise the design.
