Add service metadata monitoring #57
Conversation
…llection into ssong-service-metadata
Hmm, the build is broken even though …

Okay, the issue is that in … to … it also fails, since the events plugin also tries to start a watch thread. I'm going to remove the dummy test for now; we can consider how to fix this later.
```ruby
case type
when 'ADDED'
  get_pods_for_service(endpoint).each {|pod| @pods_to_services[pod] << service unless @pods_to_services[pod].include? service}
when 'MODIFIED'
```
Not sure if I'm following the `MODIFIED` case; can you give some explanation?
Yeah, the `MODIFIED` case was the hardest part. There are two cases to cover:
- The service was modified such that there are more pods than there once were (increasing the number of replicas). This is the easier case: we simply add the service to `@pods_to_services[pod]` unless it already exists.
- The service was modified such that there are now fewer pods than there once were (downscaling the number of replicas). This is the harder case: since you don't know which pods were removed, you have to actually search the map for mentions of the service and delete it from any pod that isn't one of the pods we were told about in the `MODIFIED` event. Then, if there are no services left for that pod in the map, we can remove that key from the map.

I offhandedly asked Chris about it at some point, since the "looking through the entire map for mentions of the service" part really irked me... but he seemed to think it wasn't a big enough issue to be worth "over-optimizing prematurely", since this case likely will not happen often. I'm open to any suggestions 😅
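For illustration, here is a minimal, self-contained sketch of the `MODIFIED` handling described above. The class and method names (`PodServiceMap`, `handle_modified`) and the plain-Hash state are assumptions made for this example, not the plugin's real API:

```ruby
# Hypothetical sketch of the MODIFIED handling; names are illustrative only.
class PodServiceMap
  attr_reader :pods_to_services

  def initialize
    # pod name => array of services that select that pod
    @pods_to_services = Hash.new { |h, k| h[k] = [] }
  end

  # desired_pods is the pod list reported in the MODIFIED event.
  def handle_modified(service, desired_pods)
    # Upscale case: add the service to every desired pod unless already present.
    desired_pods.each { |pod| @pods_to_services[pod] |= [service] }

    # Downscale case: we don't know which pods were removed, so scan the
    # whole map and drop the service from pods the event no longer lists.
    @pods_to_services.each do |pod, services|
      services.delete(service) unless desired_pods.include?(pod)
    end

    # Remove pods that no longer map to any service.
    @pods_to_services.delete_if { |_pod, services| services.empty? }
  end
end
```

The full-map scan in the downscale branch is exactly the trade-off discussed above: linear in the number of pods, but only paid on the (presumably rare) downscale path.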
Thanks for the detailed explanation! I'm fine with the current approach.
```ruby
desired_pods.each {|pod| @pods_to_services[pod] |= [service]}
@pods_to_services.each do |pod, services|
  if services.include? service
    services.delete service unless desired_pods.include? pod
```
Is deleting `service` from `services` equivalent to deleting it from `@pods_to_services[pod]`?
Yes, I believe `services` is a reference to the array stored in the value, so mutating it is visible through the hash.
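That matches Ruby's semantics: `Hash#[]` returns the stored object itself, not a copy. A quick illustration (the hash contents here are made up for the example):

```ruby
# Hash#[] returns the stored Array object itself, not a copy.
h = { 'pod-1' => ['svc-a', 'svc-b'] }
services = h['pod-1']     # reference to the same Array object
services.delete('svc-a')  # mutates the array inside the hash
h['pod-1']                # => ['svc-b']
```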
* swap out linux binary
* update role:
This reverts commit 795f3b2.
Decided this was a really stupidly large PR (mostly UTs and JSON, I promise). Really, the only big addition is `service_monitor.rb`. I used much of the watch thread logic from the Events plugin, but we don't need to persist state on restart since we don't care about data duplication.
Next step is to actually read from this hash and attach it as metadata to the record. Decided to put that on hold until we make the changes in SUMO-112136, which will make this part easier.

If anything doesn't make sense, or it looks inefficient or messy, please let me know and I will fix it 🙂 For an idea of what the input and output look like, feel free to look at the test JSONs and UTs.