[OCPNODE-4505]: Automation creation of OCP-57401 #31142
asahay19 wants to merge 1 commit into openshift:main
Conversation
Pipeline controller notification: For optional jobs, comment. This repository is configured in automatic mode.
Walkthrough: Adds a Ginkgo serial disruptive-longrunning e2e test that creates ImageDigestMirrorSet and ImageTagMirrorSet, waits for MCP rollouts on worker and master, and reads /etc/containers/registries.conf on a worker node to verify the rendered mirror entries.
Changes: ImageMirrorSet E2E Test & Node Helpers
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~22 minutes
🚥 Pre-merge checks: ✅ 11 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@test/extended/node/node_e2e/node.go`:
- Around line 331-332: In verifyRegistryBlocked, replace the fixed "j >= i-5" backward line-limit with a scan that continues until it hits the previous registry boundary or the start of the file: starting from j := i-1, loop while j >= 0 and !strings.Contains(lines[j], "[[registry]]") (stopping before the prior "[[registry]]"), and check for strings.Contains(lines[j], "blocked = true"); do the same for the symmetric forward scan (the other occurrence around lines 340-341) so both searches stop at the neighboring "[[registry]]" block marker instead of a hard ±5 line window. (A sketch of this scan follows these comments.)
- Around line 182-183: The test uses fixed cluster-scoped mirror-set names
"digest-mirror" and "tag-mirror" which can collide between runs; change the code
that constructs those mirror-set resource Names to append a unique suffix (e.g.,
test run id, timestamp, or random string) so each invocation creates
uniquely-named mirror-sets, and propagate that generated name to all places that
reference "digest-mirror" and "tag-mirror" (the mirror-set creation structs and
any subsequent lookups/assertions) so tests no longer fail with AlreadyExists
from leftover artifacts.
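For reference, a minimal runnable sketch of the boundary-bounded scan the first comment asks for. The function name blockedNearLine and the sample input are illustrative, not the actual code in verifyRegistryBlocked:

```go
package main

import (
	"fmt"
	"strings"
)

// blockedNearLine reports whether the [[registry]] block surrounding line i
// sets "blocked = true". Both scans stop at the neighboring "[[registry]]"
// marker (or a file boundary) instead of a hard +/-5 line window.
func blockedNearLine(lines []string, i int) bool {
	// Backward scan: stop before the prior [[registry]] block.
	for j := i - 1; j >= 0 && !strings.Contains(lines[j], "[[registry]]"); j-- {
		if strings.Contains(lines[j], "blocked = true") {
			return true
		}
	}
	// Symmetric forward scan: stop before the next [[registry]] block.
	for j := i + 1; j < len(lines) && !strings.Contains(lines[j], "[[registry]]"); j++ {
		if strings.Contains(lines[j], "blocked = true") {
			return true
		}
	}
	return false
}

func main() {
	conf := "[[registry]]\nlocation = \"registry.example.com\"\nblocked = true\n\n[[registry]]\nlocation = \"other.example.com\"\n"
	lines := strings.Split(conf, "\n")
	fmt.Println(blockedNearLine(lines, 1)) // true: the first block sets blocked = true
	fmt.Println(blockedNearLine(lines, 5)) // false: the second block never blocks
}
```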
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: 2dc75fc1-8480-42ae-b540-eab4ad3c4e55
📒 Files selected for processing (1)
test/extended/node/node_e2e/node.go
Scheduling required tests:
```go
initialWorkerSpec := imagepolicy.GetMCPCurrentSpecConfigName(oc, "worker")
initialMasterSpec := imagepolicy.GetMCPCurrentSpecConfigName(oc, "master")

createdIDMS, err := configClient.ImageDigestMirrorSets().Create(ctx, idms, metav1.CreateOptions{})
```
CodeRabbit pointed out that this could fail with AlreadyExists. Why not use Apply() rather than Create()? Or you need to check whether it exists before creating.
Yeah, I addressed the CodeRabbit comments. Resource names now include a random suffix via utilrand.String(5) (e.g., digest-mirror-xk4qz), so AlreadyExists collisions from leftover artifacts won't happen.
Using Apply() would work too, but it could silently overwrite a resource from a concurrent test run rather than failing explicitly, which is less safe. With unique names + Create(), we get a clean resource every time and would catch genuine conflicts.
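For readers following along, a minimal sketch of that approach, shown as a fragment with imports listed for context. It assumes utilrand aliases k8s.io/apimachinery/pkg/util/rand, and the mirror spec literal is an illustrative placeholder rather than the PR's actual test data:

```go
import (
	configv1 "github.com/openshift/api/config/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	utilrand "k8s.io/apimachinery/pkg/util/rand"
)

// A unique name per invocation avoids AlreadyExists from leftover artifacts.
idmsName := "digest-mirror-" + utilrand.String(5) // e.g. digest-mirror-xk4qz

idms := &configv1.ImageDigestMirrorSet{
	ObjectMeta: metav1.ObjectMeta{Name: idmsName},
	Spec: configv1.ImageDigestMirrorSetSpec{
		ImageDigestMirrors: []configv1.ImageDigestMirrors{{
			Source:  "registry.example.com/payload",
			Mirrors: []configv1.ImageMirror{"mirror.example.com/payload"},
		}},
	},
}

// Create() fails loudly on a genuine name conflict; Apply() could silently
// overwrite a resource owned by a concurrent run.
createdIDMS, err := configClient.ImageDigestMirrorSets().Create(ctx, idms, metav1.CreateOptions{})
```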
[APPROVALNOTIFIER] This PR is NOT APPROVED
This pull-request has been approved by: asahay19
The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/extended/node/node_utils.go (1)
746-764: ⚡ Quick win
GetFirstReadyWorkerNode is inconsistent with the rest of the file: no context, N+1 subprocess calls. Every other function in this file takes a context.Context and uses the Kubernetes client directly. GetFirstReadyWorkerNode instead:
- Takes no context, so callers cannot propagate cancellation or timeouts.
- Issues one subprocess call to list node names, then one more per node to read its Ready condition; the existing getNodesByLabel + isNodeInReadyState helpers already cover this in a single API round-trip.
Consider implementing it with the existing helpers:
♻️ Proposed refactor
```diff
-func GetFirstReadyWorkerNode(oc *exutil.CLI) string {
-	nodeNames, err := oc.AsAdmin().WithoutNamespace().Run("get").Args(
-		"nodes", "-l", "node-role.kubernetes.io/worker",
-		"-o=jsonpath={.items[*].metadata.name}",
-	).Output()
-	o.Expect(err).NotTo(o.HaveOccurred())
-	workers := strings.Fields(nodeNames)
-	o.Expect(workers).NotTo(o.BeEmpty(), "no worker nodes found")
-
-	for _, w := range workers {
-		status, statusErr := oc.AsAdmin().WithoutNamespace().Run("get").Args(
-			"nodes", w,
-			"-o=jsonpath={.status.conditions[?(@.type=='Ready')].status}",
-		).Output()
-		if statusErr == nil && status == "True" {
-			return w
-		}
-	}
-	return ""
-}
+func GetFirstReadyWorkerNode(ctx context.Context, oc *exutil.CLI) string {
+	workers, err := getNodesByLabel(ctx, oc, "node-role.kubernetes.io/worker")
+	o.Expect(err).NotTo(o.HaveOccurred())
+	o.Expect(workers).NotTo(o.BeEmpty(), "no worker nodes found")
+
+	for _, w := range workers {
+		if isNodeInReadyState(&w) {
+			return w.Name
+		}
+	}
+	o.Expect(false).To(o.BeTrue(), "no Ready worker node found")
+	return "" // unreachable
+}
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the rest with a brief reason, keep changes minimal, and validate. In `@test/extended/node/node_utils.go` around lines 746 - 764, GetFirstReadyWorkerNode should accept a context.Context and use the existing Kubernetes client helpers instead of spawning kubectl per-node: change the signature of GetFirstReadyWorkerNode to take ctx context.Context, call getNodesByLabel(ctx, oc, "node-role.kubernetes.io/worker") to get nodes, then iterate and call isNodeInReadyState(ctx, oc, node) (or the equivalent helper names present) to find the first Ready node and return its name; remove the subprocess Run("get") calls and preserve the same return semantics (empty string if none found).
🤖 Prompt for all review comments with AI agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
Inline comments:
In `@test/extended/node/node_utils.go`:
- Around line 755-764: GetFirstReadyWorkerNode currently returns an empty string
when no worker with Ready=True is found, causing downstream cryptic failures;
change the post-loop behavior in GetFirstReadyWorkerNode (the function that
iterates workers and calls oc.AsAdmin().WithoutNamespace().Run("get").Args(...))
to fail the test immediately with a clear message (e.g. call the test
framework's fail/assert function or t.Fatalf/Failf) indicating "no ready worker
node found" instead of returning "" so callers never receive an invalid node
name.
---
Nitpick comments:
In `@test/extended/node/node_utils.go`:
- Around line 746-764: GetFirstReadyWorkerNode should accept a context.Context
and use the existing Kubernetes client helpers instead of spawning kubectl
per-node: change the signature of GetFirstReadyWorkerNode to take ctx
context.Context, call getNodesByLabel(ctx, oc, "node-role.kubernetes.io/worker")
to get nodes, then iterate and call isNodeInReadyState(ctx, oc, node) (or the
equivalent helper names present) to find the first Ready node and return its
name; remove the subprocess Run("get") calls and preserve the same return
semantics (empty string if none found).
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Central YAML (inherited)
Review profile: CHILL
Plan: Enterprise
Run ID: ef32c950-ef14-4009-bfa5-3caf3802f43c
📒 Files selected for processing (2)
test/extended/node/node_e2e/node.go
test/extended/node/node_utils.go
🚧 Files skipped from review as they are similar to previous changes (1)
- test/extended/node/node_e2e/node.go
```go
	for _, w := range workers {
		status, statusErr := oc.AsAdmin().WithoutNamespace().Run("get").Args(
			"nodes", w,
			"-o=jsonpath={.status.conditions[?(@.type=='Ready')].status}",
		).Output()
		if statusErr == nil && status == "True" {
			return w
		}
	}
	return ""
```
GetFirstReadyWorkerNode silently returns "" when no ready worker is found
Line 753 asserts the worker list is non-empty, but if every worker node is found yet none passes the Ready=True check, the function returns "" without any assertion failure. A caller receiving "" as a node name will fail later with a cryptic "node not found" error rather than a clear test message.
🐛 Proposed fix
```diff
 	for _, w := range workers {
 		status, statusErr := oc.AsAdmin().WithoutNamespace().Run("get").Args(
 			"nodes", w,
 			"-o=jsonpath={.status.conditions[?(@.type=='Ready')].status}",
 		).Output()
 		if statusErr == nil && status == "True" {
 			return w
 		}
 	}
-	return ""
+	o.Expect(false).To(o.BeTrue(), "no Ready worker node found among %v", workers)
+	return "" // unreachable; satisfies compiler
 }
```
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.
In `@test/extended/node/node_utils.go` around lines 755 - 764,
GetFirstReadyWorkerNode currently returns an empty string when no worker with
Ready=True is found, causing downstream cryptic failures; change the post-loop
behavior in GetFirstReadyWorkerNode (the function that iterates workers and
calls oc.AsAdmin().WithoutNamespace().Run("get").Args(...)) to fail the test
immediately with a clear message (e.g. call the test framework's fail/assert
function or t.Fatalf/Failf) indicating "no ready worker node found" instead of
returning "" so callers never receive an invalid node name.
Scheduling required tests:
@asahay19: all tests passed! Full PR test history. Your PR dashboard.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
PR Summary
This test automates manual test case OCP-57401 by verifying end-to-end creation of both ImageDigestMirrorSet (IDMS) and ImageTagMirrorSet (ITMS) resources. It creates an IDMS and ITMS with both AllowContactingSource and NeverContactSource mirror policies, waits for MachineConfigPool rollouts, and then validates that /etc/containers/registries.conf on a worker node contains the correct digest-only and tag-only mirror entries, along with blocked = true for NeverContactSource registries.
This test is already present in openshift-test-private. As part of migrating test cases, I am automating it in origin.
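For context, an illustrative shape of the entries the test checks for, assuming the usual containers-registries.conf v2 rendering of IDMS/ITMS; the registry locations are made-up placeholders, not the PR's actual test data:

```toml
# Rendered from an ImageDigestMirrorSet entry with NeverContactSource:
# the mirror is digest-only and the source registry itself is blocked.
[[registry]]
location = "registry.example.com/payload"
blocked = true

[[registry.mirror]]
location = "mirror.example.com/payload"
pull-from-mirror = "digest-only"

# Rendered from an ImageTagMirrorSet entry with AllowContactingSource:
# tag pulls may use the mirror, and the source stays reachable.
[[registry]]
location = "registry.example.com/tools"

[[registry.mirror]]
location = "mirror.example.com/tools"
pull-from-mirror = "tag-only"
```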
Polarion Link: https://polarion.engineering.redhat.com/polarion/#/project/OSE/workitem?id=OCP-57401
Jira task: https://redhat.atlassian.net/browse/OCPNODE-4505
Output
Tested locally on a 4.22 OpenShift cluster and it passed.
```
./openshift-tests run-test '[sig-node][Suite:openshift/disruptive-longrunning][Disruptive][Serial] ImageTagMirrorSet and ImageDigestMirrorSet [OTP] Create ImageDigestMirrorSet and ImageTagMirrorSet and verify registries.conf [OCP-57401]'
```
PTAL @cpmeadors @BhargaviGudi