Remove GET job and retries for status updates #105214
Conversation
Doing a GET right before retrying has 2 problems:
- It can mask conflicts
- It adds an additional delay

As for retries, we are better off going through the sync backoff. In the case of a conflict, we know that there was a Job update that would trigger another sync, so there is no need to do a rate-limited requeue.
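A minimal Go sketch of the behavior this PR describes, assuming the standard client-go workqueue; the helper name handleSyncError and the queue wiring are illustrative, not the merged code verbatim:

```go
package jobcontroller

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
	"k8s.io/client-go/util/workqueue"
)

// handleSyncError logs every sync error, but only puts non-conflict errors
// back on the rate-limited queue. A conflict means another client updated
// the Job, and that update will itself trigger a fresh sync through the
// informer, so requeueing here would only add a redundant, backed-off retry.
func handleSyncError(queue workqueue.RateLimitingInterface, key string, err error) {
	if err == nil {
		// A clean sync resets the rate limiter's backoff history for this key.
		queue.Forget(key)
		return
	}
	utilruntime.HandleError(fmt.Errorf("syncing job: %w", err))
	if !apierrors.IsConflict(err) {
		queue.AddRateLimited(key)
	}
}
```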
/test pull-kubernetes-integration (to spot any flakiness)
/assign @soltysh
/triage accepted
/priority important-soon
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED. This pull-request has been approved by: alculquicondor, soltysh. The full list of commands accepted by this bot can be found here. The pull request process is described here.
utilruntime.HandleError(fmt.Errorf("Error syncing job: %v", err)) | ||
jm.queue.AddRateLimited(key) | ||
utilruntime.HandleError(fmt.Errorf("syncing job: %w", err)) | ||
if !apierrors.IsConflict(err) { |
we're seeing an issue on clusters after they were upgraded from v1.22.x to v1.23.8: if CreatePods fails here because of resourcequota conflict errors, this change skips requeueing the Job, and the Job remains stuck without any status for hours (edit: forever, since it is never re-synced). This wasn't an issue before v1.23.x.
job_controller anonymized logs:

```
kube-controller-manager[26872]: I0707 10:41:11.927375 26872 job_controller.go:498] enqueueing job nnnn/jjjj
kube-controller-manager[26872]: I0707 10:41:11.965003 26872 job_controller.go:1444] Failed creation, decrementing expectations for job "nnnn"/"jjjj"
kube-controller-manager[26872]: I0707 10:41:11.965085 26872 event.go:294] "Event occurred" object="nnnn/jjjj" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Operation cannot be fulfilled on resourcequotas \"nnnn-quota\": the object has been modified; please apply your changes to the latest version and try again"
```
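To illustrate why this sequence gets stuck, here is a hypothetical reconstruction (not code from the controller) of the kind of error the quota admission plugin returns, per the log above. apierrors.IsConflict only inspects the status reason, so a 409 on the ResourceQuota during pod creation is indistinguishable from a 409 on the Job status update that the requeue-skip was designed for:

```go
package main

import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/runtime/schema"
)

func main() {
	// The 409 is for the ResourceQuota object, not for the Job itself.
	quotaErr := apierrors.NewConflict(
		schema.GroupResource{Resource: "resourcequotas"},
		"nnnn-quota",
		fmt.Errorf("the object has been modified; please apply your changes to the latest version and try again"),
	)
	// The controller cannot tell "my Job status update conflicted" (another
	// sync is coming) apart from "pod creation conflicted on an unrelated
	// object" (no further sync is coming); the requeue is skipped either way.
	fmt.Println(apierrors.IsConflict(quotaErr)) // true
}
```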
Why is it never resynced? Isn't the Job controller set up to requeue everything every X minutes?
Edit: In general, every workload controller should resync (go through everything again) at X-minute intervals controlled by MinResyncPeriod, so unless you get insanely unlucky with conflicting writes, the update should eventually succeed: the object in the cache gets updated within seconds, and the Job gets resynced.
So we want to know what "forever" means in the "remain stuck without any status forever" that you observed.
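For context, this is the mechanism being referred to; a minimal client-go sketch of informer resync, not the Job controller's actual wiring (newJobInformer and enqueue are illustrative names):

```go
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/cache"
)

// newJobInformer shows how a shared informer factory created with a resync
// period re-delivers every cached object to UpdateFunc at roughly that
// interval, even when nothing changed, which is what eventually re-enqueues
// a Job the controller had given up on.
func newJobInformer(client kubernetes.Interface, enqueue func(obj interface{})) cache.SharedIndexInformer {
	// 12h mirrors kube-controller-manager's default --min-resync-period.
	factory := informers.NewSharedInformerFactory(client, 12*time.Hour)
	inf := factory.Batch().V1().Jobs().Informer()
	inf.AddEventHandler(cache.ResourceEventHandlerFuncs{
		AddFunc: enqueue,
		UpdateFunc: func(oldObj, newObj interface{}) {
			// On a periodic resync, oldObj and newObj are the same cached
			// object; the handler still fires and the Job is enqueued again.
			enqueue(newObj)
		},
	})
	return inf
}
```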
#111026 fixes the "forgets until next resync" issue, but doesn't handle any of the implications of "forever".
@alculquicondor sees a default 12h resync, which is not intended (in the fundamental kube arch sense), so I'll spelunk to see why it's 12h and how we got to this point. Most controllers should probably resync more frequently so that failures like this don't last 12 hours: if we resynced more frequently, this would have resolved in 15 minutes, which would still have been noticeable to a human watching it, but would have degraded more gracefully.
Also, this points to an opportunity: failure injection in tests would have caught this much sooner, and would have been a chance to strengthen our posture.
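For reference, kube-controller-manager derives each controller's informer resync from --min-resync-period roughly like this (paraphrased; the helper name is approximate): the configured minimum is jittered by a random factor in [1, 2), so a 12h minimum means an actual resync somewhere between 12h and 24h:

```go
package main

import (
	"math/rand"
	"time"
)

// resyncPeriod returns a function that produces a per-controller resync
// interval: the configured minimum multiplied by a random factor in [1, 2),
// so controllers don't all walk their caches at the same instant.
func resyncPeriod(minResyncPeriod time.Duration) func() time.Duration {
	return func() time.Duration {
		factor := rand.Float64() + 1 // uniform in [1, 2)
		return time.Duration(float64(minResyncPeriod.Nanoseconds()) * factor)
	}
}
```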
So we want to know what is "forever" in "remain stuck without any status forever" in what you observed.
yes, I meant that the Job resource remained stuck without a controller sync for more than 2 hours after the first sync attempt was forgotten; after that it was cleaned up by the application, since these are scheduled/periodic jobs. We run with kube-controller-manager's default --min-resync-period=12h0m0s, so we didn't get to observe whether such stuck jobs are eventually enqueued again for sync by the informer cache resync.
What type of PR is this?
/kind bug
/kind flake
What this PR does / why we need it:
Remove GET job and retries for status updates
Doing a GET right before retrying has 2 problems:
- It can mask conflicts
- It adds an additional delay

As for retries, we are better off going through the sync backoff. Also, in the case of conflict, we know that there was a Job update that would trigger another sync, so there is no need to do a rate-limited requeue.
/sig apps
/area workload-api/job
Which issue(s) this PR fixes:
Fixes #105199
Special notes for your reviewer:
Does this PR introduce a user-facing change?