Optimization premature convergence #42

Open
LHY19990707 opened this issue Jul 25, 2024 · 5 comments

@LHY19990707

Hi,

First of all, thank you for your invaluable contributions to the field of topology optimization.

I am trying to use the Bi-directional Evolutionary Structural Optimization (BESO) method for the topological optimization of steel plates based on stiffness. However, I am facing some confusion in the process:

When the number of failed elements becomes too large, the decaying mechanism is triggered, causing the mass-addition and mass-removal coefficients to converge toward the same value. This in turn leaves the sensitivity index relatively unchanged, leading to early termination of the optimization while the volume constraint is still not satisfied. I am puzzled by this behavior.

Additionally, I would like to understand the difference between stiffness-based and failure index-based optimization. Are the convergence criteria the same for these two approaches? Furthermore, I am struggling to comprehend the decaying mechanism in detail.

I would greatly appreciate your guidance and insights in resolving these issues. Your expertise in this field would be invaluable in helping me progress with my research.

Thank you in advance for your assistance.

Best regards,
Huiyang

@fandaL
Collaborator

fandaL commented Jul 25, 2024

The decaying mechanism is designed to fix the mass at its current value when failing elements are present (failure index > 1). Several parameters control this behavior. The available parameters, with default values and comments, are listed in
https://github.com/calculix/beso/blob/master/beso_conf.py

You can try to define/change parameters in your beso_conf.py file:

FI_violated_tolerance = 100 will start decaying when 100 elements exceed the allowable stress (or the failure index in domain_FI). Elements overstressed from the beginning are not counted (e.g., stress concentrations due to the initial boundary conditions and loads). If you set a large number, the code will continue removing elements until mass_goal_ratio is achieved.

decay_coefficient = -0.2 controls the decaying, i.e. how quickly mass_addition_ratio and mass_removal_ratio decrease and thus practically stop changes to the design. With decay_coefficient = 0 there should be no decaying at all (elements will keep swapping from solid to void and back). Decaying acts as artificial damping of the changes made in each iteration. It is governed by the exponential function
exp(k * i)
where k is decay_coefficient and i is the number of iterations since decaying started.
One explanation is also given in the original paper https://www.engmech.cz/improc/2017/0590.pdf
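As a rough numeric illustration of that formula (a sketch, not the actual beso code; the ratio values are assumed defaults and may differ between code versions):

```python
import math

# Hypothetical illustration of the decaying described above: the mass
# change ratios are scaled by exp(k * i), where k is decay_coefficient
# and i counts iterations since decaying started.
decay_coefficient = -0.2     # default value mentioned above
mass_addition_ratio = 0.015  # assumed default, for illustration only
mass_removal_ratio = 0.03    # assumed default, for illustration only

for i in (0, 5, 10, 20):
    factor = math.exp(decay_coefficient * i)
    print(i,
          round(mass_addition_ratio * factor, 5),
          round(mass_removal_ratio * factor, 5))
```

With k = -0.2 the ratios fall to about 37 % of their initial value after 5 iterations and to about 2 % after 20, which is why the design effectively stops changing.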

optimization_base = "stiffness" uses strain energy density as the measure for sensitivities, which is common in topology optimization. "failure_index" is a more experimental setting that uses the failure index divided by the element density; it can be useful when specific failure modes need to be included, but it is more difficult to set up the optimization so that it produces meaningful results.
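Putting the three parameters together, a minimal set of overrides in beso_conf.py might look like the following (parameter names as discussed above; the exact defaults and comments may differ between code versions):

```python
# Excerpt of settings one might place in beso_conf.py (a sketch based on
# the parameters discussed above, not a complete configuration file).
FI_violated_tolerance = 100      # start decaying after 100 overstressed elements
decay_coefficient = -0.2         # set to 0 to disable decaying entirely
optimization_base = "stiffness"  # or "failure_index" (more experimental)
```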

@LHY19990707
Author


Thanks for your patient reply! It seems that my doubts have been largely addressed. I still have a few remaining questions for you:

  1. My understanding of the decaying strategy is this: when the stress in the current iteration exceeds the allowable value, the volume is no longer reduced, and optimization continues at the current volume to bring the stress back down to the allowable limit. I am not sure whether I have understood this strategy correctly.

  2. I have noticed that in topology optimization problems with stress constraints, the element sensitivity usually involves the partial derivative of the stress with respect to the design variables. However, in your paper and code, the failure index divided by the element density is used instead. What are the benefits of this treatment, and how does the failure index differ from the partial derivative as a measure of sensitivity?

  3. In your code, the convergence criterion is that the failure index or strain energy of the elements no longer changes significantly, while the volume constraint is not strictly satisfied. I do not quite understand the logic behind this approach.

Thank you in advance for taking the time to address my remaining questions. I appreciate your patience and the valuable insights you have provided.

Best regards,
Huiyang

@fandaL
Collaborator

fandaL commented Jul 26, 2024

  1. Yes, the mass is fixed when there are overstressed elements. When the stress drops, the next iteration continues removing mass. The decaying tries to stabilize the solution by making the changes smaller.

  2. To calculate partial derivatives one needs the global matrices of the FE task, either for analytical derivatives or for the adjoint method. The finite difference method is not usable due to the high number of variables. By contrast, using the failure index with the decaying strategy is a pure heuristic that does not require access to the solver code, so you can use any static analysis model as input. The cost is that it typically converges worse and the solution ends up further from the true optimum.

  3. The BESO method sequentially removes volume from the initial full model, so if the stress constraint is violated at some iteration, the stress will typically increase even more as the volume continues to decrease toward the volume constraint. The code therefore tries to stabilize the solution at a volume close to the first occurrence of the stress limit (again, a rough heuristic). The situation is quite different with the SIMP method, which uses continuous densities: the volume constraint can be satisfied in any iteration while the algorithm redistributes material to decrease the stress. If SIMP has difficulty satisfying the stress constraint, it may produce a result with many shadow elements (not clearly solid or void), but the volume constraint can still be satisfied.
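The behaviour described in points 1 and 3 can be sketched as a simple schedule (a hypothetical simplification, not the actual beso implementation; decay_schedule and its arguments are placeholder names):

```python
import math

def decay_schedule(overstressed_counts, tolerance=100, k=-0.2,
                   base_removal=0.03):
    """Sketch of the heuristic discussed above: once the number of
    overstressed elements exceeds the tolerance, decaying starts and the
    per-iteration mass removal ratio shrinks as exp(k * i), freezing the
    design near the volume where the stress limit was first hit."""
    removal_ratios = []
    decay_start = None
    for it, n_over in enumerate(overstressed_counts):
        if decay_start is None and n_over > tolerance:
            decay_start = it  # decaying begins at this iteration
        if decay_start is None:
            removal_ratios.append(base_removal)          # full removal rate
        else:
            i = it - decay_start
            removal_ratios.append(base_removal * math.exp(k * i))
    return removal_ratios

# Example: the stress limit is first exceeded at iteration 3, after
# which the removal ratio decays exponentially toward zero.
ratios = decay_schedule([0, 10, 50, 150, 200, 180, 150])
```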

@pouryaheidari96

Need Assistance with Installation and Setup of beso

Hi,

I am trying to install and set up the beso project, but I have encountered some issues. I would like to ask for your help and guidance as the developer of this project.

The problem I am facing is as follows:

When I attempt to run beso_main.py, I receive the following error:

FileNotFoundError: [Errno 2] No such file or directory: '.\\Plane_Mesh.inp'

This error indicates that the input file Plane_Mesh.inp was not found, even though I created this file in the beso directory.

Here are the steps I have taken:

  1. Cloned the beso repository from GitHub.
  2. Created the Plane_Mesh.inp file in the beso directory.
  3. Ran beso_main.py using the command python beso_main.py.

I would appreciate it if you could provide guidance on how to resolve this issue or ensure that the installation and setup process is done correctly.

Thank you very much for your help.

Best regards,

@fandaL
Collaborator

fandaL commented Jul 27, 2024

The examples are not updated. The easiest way is to follow example 4 using the FreeCAD GUI.
But you can of course set up the analysis manually. Try defining the path to CalculiX, the path to your working folder (containing the inp file), and the file name, e.g.

path_calculix = '/tmp/.mount_FreeCA6HdF6X/usr/bin/ccx'
path = '/tmp/myworkingfolder'
file_name = 'Plane_Mesh.inp'

Since some version of the code, the Python files themselves can be located in any other directory.
