
Ensure maintained functionality for new releases #3815

Open
bastik-1001 opened this issue Apr 13, 2024 · 4 comments
Labels: Feature request (New feature or idea)

Comments

@bastik-1001
Contributor

Is your feature request related to a problem or use case?

Changes to code can break things. That should not happen, but it does anyway, and rather than relying on everyone being careful, it is probably a good idea to test the code's functions and the program's interface in some automated way.

Describe the solution you'd like

Ideally, there would be some automated way to test whether a function still behaves as intended after changes have been made. I can't tell what the best approach is, but testing whether something is broken, ideally before a new version gets released, is very useful. Hopefully there are people who know how to do this well, or at least in some way. So I'm creating this ticket as a reminder to do something about it, and for someone to reach out with insight or even put it into action.

Describe alternatives you've considered

No response

@bastik-1001 added the "Feature request" label Apr 13, 2024
@love-code-yeyixiao
Contributor

love-code-yeyixiao commented Apr 14, 2024

This is almost impossible to do unless you use a generative AI model with debugging capabilities and a virtual machine that gives the AI control and observation. That's because it is difficult to judge what a function does without running it, and especially difficult to make an automated tool understand the purpose of a change.
The approach I'm suggesting would also take a lot of time (although certainly less time than manual testing, at the expense of computational resources).

@DavidXanatos
Member

Yeah, what @love-code-yeyixiao said.
That said, there was one special case where unit testing did work: symlink handling. But that is a very isolated and easy-to-test mechanism. Testing it with the batch files provided by @offhub and others made it easy to pinpoint the issues.
For the rest of Sandboxie, I don't think it's really feasible. What we could maybe do is keep a list of programs and check whether they still run OK after a larger change; for example, before I upload a release I always check that it can run MsEdge successfully.
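The "list of programs to check" idea above could be sketched as a small launcher-agnostic smoke-test harness. The sketch below is a hypothetical illustration, not anything in the Sandboxie codebase: it just runs each command with a timeout and records whether it exited cleanly. In an actual Sandboxie setup one would presumably prepend the sandbox launcher to each command (e.g. `Start.exe /box:Tests ...`, with `Tests` being a made-up box name).

```python
import subprocess

def smoke_test(commands, timeout=30):
    """Run each command and record whether it exited cleanly.

    `commands` is a list of argument lists. The harness itself knows nothing
    about sandboxing; to smoke-test under a sandbox, prepend the launcher
    (e.g. ["Start.exe", "/box:Tests", ...] -- hypothetical box name) to each
    entry. Returns {tuple(cmd): True/False}.
    """
    results = {}
    for cmd in commands:
        try:
            proc = subprocess.run(cmd, capture_output=True, timeout=timeout)
            # Treat a zero exit code within the timeout as "still works".
            results[tuple(cmd)] = (proc.returncode == 0)
        except (OSError, subprocess.TimeoutExpired):
            # Failed to launch, or hung past the timeout.
            results[tuple(cmd)] = False
    return results
```

A maintainer could keep the command list in a small config file and run this before tagging a release; a single `False` flags a program worth investigating manually.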

@bastik-1001
Contributor Author

Thank you for the input. I still intend to keep this issue open. Maybe it won't be for unit tests, but for other ways to ensure functionality and avoid things breaking.

@bastik-1001 changed the title from "Create/Write and perform automated unittests" to "Ensure maintained functionality for new releases" Apr 27, 2024
@bastik-1001
Contributor Author

I updated the title as it no longer reflected the state of this issue. It's about means of validating that a new version did not break things that previously worked, however that might be accomplished. It's not supposed to put more load on your shoulders, although more testing does seem to imply that.

If you consider the things that are done to be enough, this issue does not serve a purpose and can be closed.
