kubeadm: skip validating pod subnet against node-cidr-mask when allocate-node-cidrs is set to be false #98984
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: SataQiu. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing `/approve` in a comment.
/cc @aojea
Force-pushed from f1abc03 to f8c3dda.
```diff
@@ -501,8 +501,10 @@ func ValidateNetworking(c *kubeadm.ClusterConfiguration, fldPath *field.Path) fi
 	}
 	if len(c.Networking.PodSubnet) != 0 {
 		allErrs = append(allErrs, ValidateIPNetFromString(c.Networking.PodSubnet, constants.MinimumAddressesInPodSubnet, isDualStack, field.NewPath("podSubnet"))...)
 		// Pod subnet was already validated, we need to validate now against the node-mask
 		allErrs = append(allErrs, ValidatePodSubnetNodeMask(c.Networking.PodSubnet, c, field.NewPath("podSubnet"))...)
 		if c.ControllerManager.ExtraArgs["allocate-node-cidrs"] != "false" {
```
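The guard in the diff can be sketched in isolation. This is a hedged, simplified stand-in, not the kubeadm code: `clusterConfig` and `shouldValidateNodeMask` are hypothetical names, and only the controller-manager extra args are modeled.

```go
package main

import "fmt"

// clusterConfig is a hypothetical, pared-down stand-in for kubeadm's
// ClusterConfiguration: only the controller-manager extra args matter here.
type clusterConfig struct {
	ControllerManagerExtraArgs map[string]string
}

// shouldValidateNodeMask mirrors the guard above: the pod subnet is checked
// against the node CIDR mask unless the user explicitly set
// allocate-node-cidrs=false. An unset key falls through to validation,
// because kubeadm defaults the flag to "true".
func shouldValidateNodeMask(c clusterConfig) bool {
	return c.ControllerManagerExtraArgs["allocate-node-cidrs"] != "false"
}

func main() {
	fmt.Println(shouldValidateNodeMask(clusterConfig{})) // unset -> true
	fmt.Println(shouldValidateNodeMask(clusterConfig{
		ControllerManagerExtraArgs: map[string]string{"allocate-node-cidrs": "false"},
	})) // explicit false -> false (skip)
}
```

Note that the lookup on a nil map is safe in Go and returns the empty string, so an absent flag behaves the same as any value other than "false".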
we need unit tests in cmd/kubeadm/app/phases/controlplane/manifests_test.go
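The requested tests would be table-driven, as is conventional in that package. Below is a hedged sketch of the shape such a test could take; `getDefaultArgs` is a hypothetical, simplified stand-in for the controller-manager defaulting in manifests.go, not the real function.

```go
package main

import "fmt"

// getDefaultArgs is a hypothetical, pared-down stand-in for the KCM
// defaulting in manifests.go: when a pod subnet is configured, kubeadm
// adds allocate-node-cidrs=true (and cluster-cidr) to the default args.
func getDefaultArgs(podSubnet string) map[string]string {
	args := map[string]string{}
	if podSubnet != "" {
		args["allocate-node-cidrs"] = "true"
		args["cluster-cidr"] = podSubnet
	}
	return args
}

func main() {
	// Table-driven style, as a manifests_test.go case would be written.
	cases := []struct {
		name      string
		podSubnet string
		wantFlag  string
	}{
		{"no pod subnet", "", ""},
		{"pod subnet set", "10.244.0.0/16", "true"},
	}
	for _, tc := range cases {
		got := getDefaultArgs(tc.podSubnet)["allocate-node-cidrs"]
		fmt.Printf("%s: allocate-node-cidrs=%q (want %q)\n", tc.name, got, tc.wantFlag)
	}
}
```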
Hi @aojea !
I'm sorry for the delay.
Could you please have a look?
If `allocate-node-cidrs` is not explicitly configured, it will be defaulted to `true` by kubeadm. So we should validate the Pod subnet against the node-mask when `c.ControllerManager.ExtraArgs["allocate-node-cidrs"] != "false"`.
I think it is correct, but let's double check with @neolit123 ^^^
the KCM flag `allocate-node-cidrs` is `false` by default, so it makes sense to validate in kubeadm only if the user has set the flag to `true`.
If the `PodSubnet` is set and `ExtraArgs["allocate-node-cidrs"]` is not set, kubeadm will add the `allocate-node-cidrs=true` flag for KCM automatically. In this case, can we skip this validation? Or should we not add the `allocate-node-cidrs` flag for KCM automatically?
kubernetes/cmd/kubeadm/app/phases/controlplane/manifests.go, lines 307 to 308 in 43e8405:

```go
if cfg.Networking.PodSubnet != "" {
	defaultArguments["allocate-node-cidrs"] = "true"
```
the KCM docs say that `cluster-cidr` requires `allocate-node-cidrs`, so we should be setting it to `true` in the default args. Adding the flag by default seems correct to me; this makes it easier for the user and does not require them to pass flags when they wish to set a pod subnet.
extraArgs are merged over default args, so if the user was explicit about `allocate-node-cidrs=false`, we should respect it and skip validation in ValidateNetworking.
So the diff here seems valid to me.
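The merge semantics described above can be sketched as follows. This is a minimal illustration, assuming a plain map merge; `mergeArgs` is a hypothetical name, not the kubeadm helper.

```go
package main

import "fmt"

// mergeArgs layers user-supplied extraArgs over kubeadm's default args,
// so an explicit allocate-node-cidrs=false from the user wins over the
// defaulted "true". mergeArgs is a hypothetical stand-in for kubeadm's
// argument-merging helper.
func mergeArgs(defaults, extra map[string]string) map[string]string {
	merged := map[string]string{}
	for k, v := range defaults {
		merged[k] = v
	}
	for k, v := range extra {
		merged[k] = v // user intent overrides the default
	}
	return merged
}

func main() {
	defaults := map[string]string{
		"allocate-node-cidrs": "true",
		"cluster-cidr":        "10.244.0.0/16",
	}
	extra := map[string]string{"allocate-node-cidrs": "false"}
	fmt.Println(mergeArgs(defaults, extra)["allocate-node-cidrs"]) // false
}
```

With this ordering, the validation guard then sees the user's explicit "false" and skips the node-mask check, which is exactly the behavior the PR implements.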
right
/test pull-kubernetes-e2e-kind
/triage accepted
Force-pushed from f8c3dda to 15c43e2.
Force-pushed from 15c43e2 to 88a404c.
this lgtm, defer to @neolit123 to double check this logic #98984 (comment)
```diff
@@ -423,7 +423,7 @@ func ValidateServiceSubnetSize(subnetStr string, fldPath *field.Path) field.Erro
 }

 // ValidatePodSubnetNodeMask validates that the relation between podSubnet and node-masks is correct
-func ValidatePodSubnetNodeMask(subnetStr string, c *kubeadm.ClusterConfiguration, fldPath *field.Path) field.ErrorList {
+func ValidatePodSubnetNodeMask(subnetStr string, c *kubeadm.ClusterConfiguration, isDualStack bool, fldPath *field.Path) field.ErrorList {
```
i think we should leave the signature of ValidatePodSubnetNodeMask the way it is.
"dual stack" is a feature gate, and functions like getClusterNodeMask can continue extracting the value from the ClusterConfiguration:
isDualStack := features.Enabled(c.FeatureGates, features.IPv6DualStack)
`isIPv6` is not a feature gate; it is a condition that matters to getClusterNodeMask.
with this i also mean that we don't have to do the validation_test.go changes?
Emm, just found `isDualStack` is already calculated in the parent function, and a similar function `ValidateIPNetFromString` already does it like this. 😅
kubernetes/cmd/kubeadm/app/apis/kubeadm/validation/validation.go, lines 495 to 506 in 43e8405:

```go
isDualStack := features.Enabled(c.FeatureGates, features.IPv6DualStack)
if len(c.Networking.ServiceSubnet) != 0 {
	allErrs = append(allErrs, ValidateIPNetFromString(c.Networking.ServiceSubnet, constants.MinimumAddressesInServiceSubnet, isDualStack, field.NewPath("serviceSubnet"))...)
	// Service subnet was already validated, we need to validate now the subnet size
	allErrs = append(allErrs, ValidateServiceSubnetSize(c.Networking.ServiceSubnet, field.NewPath("serviceSubnet"))...)
}
if len(c.Networking.PodSubnet) != 0 {
	allErrs = append(allErrs, ValidateIPNetFromString(c.Networking.PodSubnet, constants.MinimumAddressesInPodSubnet, isDualStack, field.NewPath("podSubnet"))...)
	// Pod subnet was already validated, we need to validate now against the node-mask
	allErrs = append(allErrs, ValidatePodSubnetNodeMask(c.Networking.PodSubnet, c, field.NewPath("podSubnet"))...)
}
```
ok, this seems like a separate refactor that can be done in the future.
Force-pushed from 88a404c to 2ba178c.
/lgtm
/retest
What type of PR is this?
/kind bug
What this PR does / why we need it:
kubeadm: skip validating pod subnet against node-cidr-mask when allocate-node-cidrs is set to be false
Which issue(s) this PR fixes:
Ref kubernetes/kubeadm#2391
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: