We use a clustered environment for zero-downtime deployments. We deployed wicked version 1.0.0-rc.11 into one of our clusters while another cluster was still running 1.0.0-rc.9 (both wicked-portals are connected to the same Postgres instance). Both wicked-portals then started reporting errors like:
warn: [+4687ms] portal-api:principal Detected an updated config hash in the database, exiting: WscUPt8OcjvJ3JtbD3UFUw0ycN8= !== LyOXCup8ZqhHzm3WQE/tSs5KL6o=
error: [+ 1ms] portal-api:principal Force Quit API
Even after fixing this one, rc.9 will not be able to run happily alongside rc.12+. In the next release, the rc.12+ API container will not allow rc.9 to take it down and will make sure it stays "on top of the league", so to speak. What will have changed is that the rc.12+ API will also accept calls from wicked clients prior to rc.12, so there should be no big impact.
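A minimal sketch of what such a startup guard could look like (all names here are hypothetical, not the actual wicked implementation): instead of force-quitting whenever the config hash in the database differs, the API only steps down if the differing hash was written by an equal-or-newer version.

```javascript
// Hypothetical sketch of a version-aware config-hash guard.
// Assumed shape of the database entry: { configHash, writtenByVersion }.

function compareVersions(a, b) {
  // Compare dotted numeric version strings, e.g. "1.0.12" vs "1.0.9".
  const pa = a.split('.').map(Number);
  const pb = b.split('.').map(Number);
  for (let i = 0; i < Math.max(pa.length, pb.length); i++) {
    const d = (pa[i] || 0) - (pb[i] || 0);
    if (d !== 0) return d;
  }
  return 0;
}

function shouldExit(ownHash, ownVersion, dbEntry) {
  if (dbEntry.configHash === ownHash) {
    return false; // same config, keep running
  }
  // Only yield if the differing hash came from an equal-or-newer API;
  // an older instance (e.g. rc.9) can no longer take down a newer one.
  return compareVersions(dbEntry.writtenByVersion, ownVersion) >= 0;
}
```

With a guard like this, the rc.12+ instance would ignore a mismatching hash written by rc.9 rather than exiting, while an rc.9 instance seeing a hash written by rc.12+ would still step down.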
See also #190, which contains more error scenarios and the previous fixing attempt.