make podTopologyHints protected by lock #95111
Conversation
Hi @choury. Thanks for your PR. I'm waiting for a kubernetes member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Once the patch is verified, the new status will be reflected by the ok-to-test label. I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
/test pull-kubernetes-node-kubelet-serial-topology-manager
/retest
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/retest
/test pull-kubernetes-e2e-gce-ubuntu-containerd
/remove-lifecycle stale
-	"k8s.io/api/core/v1"
+	v1 "k8s.io/api/core/v1"
nit: this change is unnecessary.
I understand, but that still doesn't mean it should be part of this PR.
If I had been the reviewer on those PRs I would have asked them to remove that change.
func (s *scope) Lock() {
	s.mutex.Lock()
}

func (s *scope) Unlock() {
	s.mutex.Unlock()
}
Maybe what I mentioned before wasn't clear. I wasn't opposed to having a getter/setter around s.podTopologyHints that applies a lock before accessing it. I just didn't want that to be part of the outward facing Scope interface. It's fine to put it on the underlying concrete scope type though.
As such, I think a better abstraction here would be:
func (s *scope) getTopologyHints(podUID string, containerName string) TopologyHint {
	s.mutex.Lock()
	defer s.mutex.Unlock()
	return s.podTopologyHints[podUID][containerName]
}

func (s *scope) setTopologyHints(podUID string, containerName string, th TopologyHint) {
	s.mutex.Lock()
	defer s.mutex.Unlock()
	if s.podTopologyHints[podUID] == nil {
		s.podTopologyHints[podUID] = make(map[string]TopologyHint)
	}
	s.podTopologyHints[podUID][containerName] = th
}
func (s *scope) GetAffinity(podUID string, containerName string) TopologyHint {
	s.mutex.Lock()
	defer s.mutex.Unlock()
	return s.podTopologyHints[podUID][containerName]
}
Following my advice above, this would then become:
func (s *scope) GetAffinity(podUID string, containerName string) TopologyHint {
	return s.getTopologyHints(podUID, containerName)
}
s.Lock()
if (s.podTopologyHints)[string(pod.UID)] == nil {
	(s.podTopologyHints)[string(pod.UID)] = make(map[string]TopologyHint)
}
klog.Infof("[topologymanager] Topology Affinity for (pod: %v container: %v): %v", format.Pod(pod), container.Name, bestHint)
(s.podTopologyHints)[string(pod.UID)][container.Name] = bestHint
s.Unlock()
And this would become:
klog.Infof("[topologymanager] Topology Affinity for (pod: %v container: %v): %v", format.Pod(pod), container.Name, bestHint)
s.setTopologyHints(string(pod.UID), container.Name, bestHint)
klog.Infof("[topologymanager] Topology Affinity for (pod: %v container: %v): %v", format.Pod(pod), container.Name, bestHint)
s.Lock()
if (s.podTopologyHints)[string(pod.UID)] == nil {
	(s.podTopologyHints)[string(pod.UID)] = make(map[string]TopologyHint)
}
(s.podTopologyHints)[string(pod.UID)][container.Name] = bestHint
s.Unlock()
And this would become:
klog.Infof("[topologymanager] Topology Affinity for (pod: %v container: %v): %v", format.Pod(pod), container.Name, bestHint)
s.setTopologyHints(string(pod.UID), container.Name, bestHint)
Hi @choury. I'm happy to approve/merge this once the requested changes are made. Thanks.
Without the lock, kubelet crashed with a "concurrent map read and map write" panic.
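To illustrate the failure mode this PR fixes: Go's runtime detects unsynchronized map access across goroutines and aborts with exactly that "concurrent map read and map write" panic. The sketch below is not kubelet code; hintStore, set, and get are hypothetical stand-ins for the mutex-guarded accessors around podTopologyHints that this PR introduces.

```go
// Minimal sketch of the pattern this PR applies: a map shared across
// goroutines is only ever touched through accessors that hold a mutex,
// so the runtime's concurrent-map-access check can never fire.
package main

import (
	"fmt"
	"sync"
)

// hintStore is a hypothetical stand-in for the scope type; hints maps
// podUID -> hint, like podTopologyHints (simplified to strings here).
type hintStore struct {
	mu    sync.Mutex
	hints map[string]string
}

func (s *hintStore) set(podUID, hint string) {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.hints[podUID] = hint
}

func (s *hintStore) get(podUID string) string {
	s.mu.Lock()
	defer s.mu.Unlock()
	return s.hints[podUID]
}

func main() {
	s := &hintStore{hints: make(map[string]string)}
	var wg sync.WaitGroup
	// Concurrent writers and readers: without the mutex in set/get,
	// running this is liable to panic with
	// "fatal error: concurrent map read and map write".
	for i := 0; i < 100; i++ {
		wg.Add(2)
		go func(i int) {
			defer wg.Done()
			s.set(fmt.Sprintf("pod-%d", i), "numa-0")
		}(i)
		go func(i int) {
			defer wg.Done()
			_ = s.get(fmt.Sprintf("pod-%d", i))
		}(i)
	}
	wg.Wait()
	fmt.Println(s.get("pod-0")) // prints "numa-0"
}
```

Running a variant with the Lock/Unlock calls removed under `go run -race` reports the same data race the kubelet hit.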
/test pull-kubernetes-e2e-gce-ubuntu-containerd
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED
This pull-request has been approved by: choury, klueska
The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/hold cancel
This should probably be cherry-picked back to 1.20 as well. Happy to lgtm/approve it once it's created.
@klueska I will cherry-pick it soon after it's merged.
Cherry pick of kubernetes#95111 on release-1.19
Cherry pick of kubernetes#95111 on release-1.18
[release-1.19] cherry-pick of #95111: make podTopologyHints protected by lock
…-upstream-release-1.20 [release-1.20] Automated cherry pick of #95111: make podTopologyHints protected by lock
[release-1.18] cherry-pick of #95111: make podTopologyHints protected by lock
What type of PR is this?
/kind bug
What this PR does / why we need it:
Without a lock around podTopologyHints, kubelet crashed with a "concurrent map read and map write" panic.
Which issue(s) this PR fixes:
Fixes #
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: