
Remove GET job and retries for status updates #105214

Merged
merged 1 commit into kubernetes:master on Sep 27, 2021

Conversation

@alculquicondor (Member) commented on Sep 23, 2021:

What type of PR is this?

/kind bug
/kind flake

What this PR does / why we need it:

Remove GET job and retries for status updates

Also, in the case of conflict, we know that there was a Job update that would trigger another sync, so there is no need to do a rate limited requeue.

Doing a GET right before retrying has two problems:

  • It can mask conflicts
  • It adds an extra delay

As for retries, we are better off going through the sync backoff, as the sketch below illustrates.
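A minimal sketch of the resulting worker logic (illustrative; `Controller`, `syncHandler`, and `queue` are assumed names matching the standard workqueue pattern, not the exact PR diff):

```go
import (
	"fmt"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	utilruntime "k8s.io/apimachinery/pkg/util/runtime"
)

// processJob sketches the worker after this change: on a conflict the
// controller relies on the Job update flowing back through the informer
// to trigger a fresh sync, instead of GET-ing and retrying in place.
func (jm *Controller) processJob(key string) {
	if err := jm.syncHandler(key); err != nil {
		utilruntime.HandleError(fmt.Errorf("syncing job: %w", err))
		if !apierrors.IsConflict(err) {
			// Non-conflict errors retry through the rate-limited backoff.
			jm.queue.AddRateLimited(key)
		}
		// Conflict: another writer updated the Job; the resulting watch
		// event re-enqueues the key, so no explicit requeue is needed.
		return
	}
	jm.queue.Forget(key) // success: reset this key's backoff
}
```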

/sig apps
/area workload-api/job

Which issue(s) this PR fixes:

Fixes #105199

Special notes for your reviewer:

Does this PR introduce a user-facing change?

Fix job controller syncs: in case of conflicts, ensure that the sync happens with the most up-to-date information. Improves reliability of JobTrackingWithFinalizers.

@k8s-ci-robot k8s-ci-robot added release-note Denotes a PR that will be considered when it comes time to generate release notes. kind/bug Categorizes issue or PR as related to a bug. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. kind/flake Categorizes issue or PR as related to a flaky test. sig/apps Categorizes an issue or PR as relevant to SIG Apps. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Sep 23, 2021
@alculquicondor (Member, Author) commented:

/test pull-kubernetes-integration (to spot any flakiness)
/test pull-kubernetes-e2e-gce-ubuntu-containerd

@alculquicondor (Member, Author) commented:

/assign @soltysh

@soltysh (Contributor) left a review comment:

/triage accepted
/priority important-soon
/lgtm
/approve

@k8s-ci-robot k8s-ci-robot added triage/accepted Indicates an issue or PR is ready to be actively worked on. priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. lgtm "Looks good to me", indicates that a PR is ready to be merged. and removed needs-triage Indicates an issue or PR lacks a `triage/foo` label and requires one. needs-priority Indicates a PR lacks a `priority/foo` label and requires one. labels Sep 27, 2021
@k8s-ci-robot commented:

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: alculquicondor, soltysh

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Sep 27, 2021
@k8s-ci-robot k8s-ci-robot merged commit aec9acd into kubernetes:master Sep 27, 2021
@k8s-ci-robot k8s-ci-robot added this to the v1.23 milestone Sep 27, 2021
Review thread on the following diff hunk:

-	utilruntime.HandleError(fmt.Errorf("Error syncing job: %v", err))
-	jm.queue.AddRateLimited(key)
+	utilruntime.HandleError(fmt.Errorf("syncing job: %w", err))
+	if !apierrors.IsConflict(err) {
+		jm.queue.AddRateLimited(key)
+	}
@ravilr (Contributor) commented on Jul 7, 2022:

@alculquicondor @soltysh

We're seeing an issue on clusters after upgrading from v1.22.x to v1.23.8: if CreatePods here fails to create pods due to ResourceQuota conflict errors, this change skips requeueing the Job, and the Job remains stuck without any status for hours (edit: forever, since it is never re-synced). This wasn't an issue before v1.23.x.

job_controller anonymized logs:

kube-controller-manager[26872]: I0707 10:41:11.927375   26872 job_controller.go:498] enqueueing job nnnn/jjjj
kube-controller-manager[26872]: I0707 10:41:11.965003   26872 job_controller.go:1444] Failed creation, decrementing expectations for job "nnnn"/"jjjj"
kube-controller-manager[26872]: I0707 10:41:11.965085   26872 event.go:294] "Event occurred" object="nnnn/jjjj" kind="Job" apiVersion="batch/v1" type="Warning" reason="FailedCreate" message="Error creating: Operation cannot be fulfilled on resourcequotas \"nnnn-quota\": the object has been modified; please apply your changes to the latest version and try again"
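The root cause, sketched below (condensed and illustrative, not the exact controller code): apierrors.IsConflict matches any 409 Conflict, including one returned while creating Pods against a contended ResourceQuota, not only a conflict writing the Job itself. A quota conflict produces no Job watch event, so nothing re-enqueues the key:

```go
// err here may be a 409 from pod creation racing on a ResourceQuota
// rather than from a conflicting write to the Job object.
if !apierrors.IsConflict(err) {
	jm.queue.AddRateLimited(key)
}
// A quota conflict updates no Job object, so there is no watch event and
// no new sync: the key is dropped until the next full informer resync.
```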

A contributor commented:

Why is it never resynced? Isn't the Job controller set up to requeue everything every X minutes?

Edit: In general, every workload controller should resync (go through everything again) at intervals controlled by MinResyncPeriod. So unless you get insanely unlucky with conflicting writes, the update should eventually succeed: the object in the cache should be updated within seconds, and the Job should be resynced.

So we want to know what "forever" means in "remain stuck without any status forever" in what you observed.
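For reference, the resync mentioned above comes from the shared informer machinery; a minimal sketch using real client-go APIs (the function name is illustrative):

```go
package main

import (
	"time"

	"k8s.io/client-go/informers"
	"k8s.io/client-go/kubernetes"
)

// newJobInformer shows where periodic resyncs come from: objects already in
// the cache are re-delivered to event handlers roughly every resyncPeriod,
// so a dropped key is eventually processed again.
func newJobInformer(client kubernetes.Interface, resyncPeriod time.Duration) {
	factory := informers.NewSharedInformerFactory(client, resyncPeriod)
	_ = factory.Batch().V1().Jobs().Informer()
}
```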

A contributor commented:

#111026 fixes the "forgets until next resync", but doesn't handle any of the implications of "forever".
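One plausible shape for such a fix (illustrative only; not necessarily what #111026 actually does) is to keep requeueing on conflict, just without the rate-limiter penalty:

```go
if apierrors.IsConflict(err) {
	// Retry promptly; by the next attempt the informer cache should
	// already hold the newer Job that won the conflicting write.
	jm.queue.Add(key)
} else {
	jm.queue.AddRateLimited(key)
}
```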

A contributor commented:

@alculquicondor sees a default 12h resync, which is not intended (in the fundamental kube-arch sense), so I'll spelunk to see why it's 12h and how we got to this point. Most controllers should probably resync more frequently to avoid failures like this lasting 12h; if we resynced more frequently, this would have resolved in 15m, which would still have been noticeable to a human watching it but would have degraded more gracefully.

Also, this points to an opportunity: failure injection in tests would have caught this much sooner and strengthened our posture.

@ravilr (Contributor) replied:

> So we want to know what "forever" means in "remain stuck without any status forever" in what you observed.

Yes, I meant the Job resource remained stuck without a controller sync for more than 2 hours after the first sync attempt and forget, at which point it was cleaned up by the application, as these are scheduled/periodic jobs. We run with kube-controller-manager's default --min-resync-period=12h0m0s, so we didn't get to observe whether such stuck jobs were eventually re-enqueued by the informer cache resync.
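For context on the 12h figure: kube-controller-manager applies a random upward jitter to --min-resync-period when computing the effective informer resync interval, so the 12h default can stretch toward 24h. A sketch of that computation (approximate; the function name is illustrative):

```go
package main

import (
	"math/rand"
	"time"
)

// resyncPeriod approximates how the controller manager derives the informer
// resync interval: a factor in [1, 2) is applied to --min-resync-period, so
// a stuck Job key might not be revisited for up to ~2x the configured value.
func resyncPeriod(min time.Duration) time.Duration {
	factor := rand.Float64() + 1
	return time.Duration(float64(min.Nanoseconds()) * factor)
}
```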

Successfully merging this pull request may close these issues:

Job controller: Updates might override stale data