Update gce-pd volume topology label to GA #98700
Conversation
@Jiawei0227: This issue is currently awaiting triage. If a SIG or subproject determines this is a relevant issue, they will accept it by applying the `triage/accepted` label and provide further guidance. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/retest
@@ -1243,7 +1243,7 @@ func InitGcePdDriver() storageframework.TestDriver {
 	},
 	SupportedFsType:      supportedTypes,
 	SupportedMountOption: sets.NewString("debug", "nouid32"),
-	TopologyKeys:         []string{v1.LabelFailureDomainBetaZone},
+	TopologyKeys:         []string{v1.LabelTopologyZone},
Hmmm, let's double check, but I think we have upgrade tests that run the N-1 e2e version against an N version cluster. We may need to make some changes to the older e2e to handle both labels.
So there are a couple of variations of tests, but IIRC one variation will run the N-1 e2e.test on an N-1 cluster, upgrade the cluster, and then run the N-1 e2e.test on the version N cluster.
I ran the e2e binary on the new cluster with my change and the topology tests pass.
./e2e.test --ginkgo.focus=".*In-tree Volumes.*gcepd.*topology.*" -provider gce -gce-project=jiaweiwang-gke-multi-cloud-dev -gce-zone=us-central1-b
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (delayed binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":8,"completed":1,"skipped":1030,"failed":0}
{"msg":"PASSED [sig-storage] In-tree Volumes [Driver: gcepd] [Testpattern: Dynamic PV (immediate binding)] topology should provision a volume and schedule a pod with AllowedTopologies","total":8,"completed":2,"skipped":2859,"failed":0}
Can you be more specific about which tests you have concerns with?
Did you run e2e.test from one version before, i.e. release-1.20?
Yes, I built the e2e.test from the release-1.20 branch.
Did you also run the regional PD tests? (They require a regional cluster.) I see some parts of that test validate the labels on the PV object, so I would expect a 1.20 test validating beta labels to fail on a 1.21 cluster that uses GA labels.
It seems the regional PD test is checking PV labels, so we need to fix the e2e test on the release-1.20 branch to not break the version skew test.
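A skew-tolerant check along the lines discussed above could accept either label key. The label string values below match the upstream constants, but `zoneFromPVLabels` itself is a hypothetical sketch of such a check, not the actual fix that went into the release branch:

```go
package main

import "fmt"

// Label keys as defined upstream: the deprecated beta key and its GA replacement.
const (
	labelFailureDomainBetaZone = "failure-domain.beta.kubernetes.io/zone"
	labelTopologyZone          = "topology.kubernetes.io/zone"
)

// zoneFromPVLabels returns the zone recorded in a PV's labels, preferring the
// GA key and falling back to the beta key, so the same validation passes on
// both a pre-upgrade cluster (beta label) and a post-upgrade one (GA label).
func zoneFromPVLabels(labels map[string]string) (string, bool) {
	if z, ok := labels[labelTopologyZone]; ok {
		return z, true
	}
	if z, ok := labels[labelFailureDomainBetaZone]; ok {
		return z, true
	}
	return "", false
}

func main() {
	// A PV provisioned by a newer cluster carries only the GA label.
	z, ok := zoneFromPVLabels(map[string]string{labelTopologyZone: "us-central1-b"})
	fmt.Println(z, ok)
}
```

A test written this way keeps passing through the upgrade regardless of which label generation the cluster applied.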
#98733 PR out for review and cherry-pick approval.
/assign @mattcary
Oops, forgot one: cc @divyenpatel @xing-yang as FYI.
/retest
/lgtm
Can you also update the release notes to say that newly provisioned PVs will no longer have the beta label, so any outside tools need to be updated (and add an ACTION REQUIRED)?
Done
@@ -515,7 +515,10 @@ func (g *Cloud) GetLabelsForVolume(ctx context.Context, pv *v1.PersistentVolume)

 	// If the zone is already labeled, honor the hint
 	name := pv.Spec.GCEPersistentDisk.PDName
-	zone := pv.Labels[v1.LabelFailureDomainBetaZone]
+	zone := pv.Labels[v1.LabelTopologyZone]
FYI, we are no longer accepting enhancements to legacy-cloud-providers; any such enhancements must be done out of tree. As this is clearly just a bug fix, I am going to let it through.
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: cheftako, Jiawei0227, msau42. The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment.
/retest
Hi @Jiawei0227, sorry to trouble you, but I'm hitting an error due to missing labels on a PV, and I'm not sure if it is related to this PR. In version 1.18, neither failure-domain.beta.kubernetes.io/zone nor topology.kubernetes.io/zone is correctly set on the PV. I guess it is because the in-tree plugin was deprecated and the label-setting logic went away at the same time, but I don't have any concrete evidence for that, since I'm not an expert in this area. Could you please share more insight on this? Thanks.
What type of PR is this?
/kind feature
What this PR does / why we need it:
This is a follow-up PR for #97823
This PR updates the gce-pd volume topology label to use the GA version. Previously we were using the beta label (FailureDomain) for in-tree volumes, but the beta label has been deprecated for a while and is scheduled to be removed soon, so we should update this to use the GA label instead.
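For outside tooling that still carries the beta key on its own objects, the move to the GA label amounts to rewriting one map key. The following is a minimal sketch under that assumption; `migrateZoneLabel` is a hypothetical helper, not code from this PR:

```go
package main

import "fmt"

// Label keys as defined upstream: the deprecated beta key and its GA replacement.
const (
	labelFailureDomainBetaZone = "failure-domain.beta.kubernetes.io/zone"
	labelTopologyZone          = "topology.kubernetes.io/zone"
)

// migrateZoneLabel returns a copy of labels with the deprecated beta zone key
// rewritten to the GA key. If both keys are present, the existing GA value wins.
func migrateZoneLabel(labels map[string]string) map[string]string {
	out := make(map[string]string, len(labels))
	for k, v := range labels {
		if k == labelFailureDomainBetaZone {
			continue // migrated below so an explicit GA value is never clobbered
		}
		out[k] = v
	}
	if _, ok := out[labelTopologyZone]; !ok {
		if z, ok := labels[labelFailureDomainBetaZone]; ok {
			out[labelTopologyZone] = z
		}
	}
	return out
}

func main() {
	fmt.Println(migrateZoneLabel(map[string]string{labelFailureDomainBetaZone: "us-central1-b"}))
}
```

Dropping the beta key outright matches the direction of this PR; tools that must keep supporting older clusters would instead write both keys during the transition.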
Which issue(s) this PR fixes:
Fixes #92237 by upgrading the label to the GA version.
Special notes for your reviewer:
Does this PR introduce a user-facing change?:
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.:
/sig storage
/cc @msau42