
OTA-1550 Automate accept risks for OTA-1543 #2226

Open
JianLi-RH wants to merge 1 commit into openshift:main from JianLi-RH:e2e_accept_risks_OTA-1543

Conversation


@JianLi-RH JianLi-RH commented Mar 12, 2026

This will be a manual case (it introduces a new label, "manual") because it takes too long to finish.
Epic: https://issues.redhat.com/browse/OTA-1543

We can run it with the following steps:

$ make build
$ ./oc-tests-ext run-test "[Jira:\"oc adm upgrade\"] accept risks exposed to conditional updates Accepted Risks for OCP Cluster Updates"

The case needs the upstream https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph and the channel "OCP-88175".

Summary by CodeRabbit

  • Tests

    • Added a TechPreview-gated e2e test suite covering upgrade help output, recommended vs non-recommended version visibility, attempt warnings for non-recommended upgrades, accept-risk operations (add/replace/clear), override removal/restoration, polling/timeouts, and consistency between upgrade output and cluster status.
  • Tools

    • Added test utilities for HTTP resource retrieval, cluster version parsing, JSON-patch updates, and fluent CLI command helpers.

@openshift-ci openshift-ci bot added the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 12, 2026

coderabbitai bot commented Mar 12, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.


Walkthrough

Adds a Ginkgo E2E test suite for oc adm upgrade and oc adm upgrade accept flows plus test utilities for JSON patching, HTTP fetch, and cluster-version parsing used by the new tests.

Changes

Cohort / File(s) — Summary

New Upgrade Test Suite — test/e2e/upgrade.go
Adds a Ginkgo-based end-to-end test covering help output, fetching a faux upstream graph, patching ClusterVersion to point at the graph, parsing recommended vs non-recommended versions, polling upgrade output/status, attempting non-recommended upgrades, and exhaustive --accept-risk add/replace/clear scenarios with error checks.

Utility Enhancements — test/e2e/util.go
Adds JSONPatchOperation and ClusterVersion types, a (*CLI).Patch method for JSON patching, a GetWebResource HTTP GET helper, GetVersion to parse the cluster version, an isTechPreview helper, and a minor CLI command chaining adjustment.

Sequence Diagram(s)

sequenceDiagram
    participant Test
    participant OC as "oc CLI"
    participant API as "ClusterVersion API"
    participant Web as "Upstream Graph (HTTP)"

    Test->>OC: run `oc adm upgrade --help` / `oc adm upgrade accept --help`
    Test->>Web: fetch faux upstream graph (GetWebResource)
    Web-->>Test: return graph JSON
    Test->>OC: patch ClusterVersion to point to faux upstream (CLI.Patch)
    OC->>API: PATCH /apis/config.openshift.io/v1/clusterversions
    API-->>OC: updated ClusterVersion
    Test->>OC: run `oc adm upgrade` (polling for available versions)
    OC->>API: GET /status for ClusterVersion
    API-->>OC: clusterversion status (available/desired versions)
    OC-->>Test: upgrade output (recommended / non-recommended)
    Test->>OC: `oc adm upgrade --accept-risk` (add/replace/clear)
    OC->>API: PATCH .spec.desiredUpdate.acceptRisks
    API-->>OC: updated acceptRisks status
    OC-->>Test: verify acceptRisks reflected in output/status (polling)

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check: Test Structure And Quality — ⚠️ Warning
Explanation: The test exhibits a single-responsibility violation, error-handling issues in cleanup, missing assertion messages, and inconsistency with Ginkgo v2 patterns.
Resolution: Replace defer statements with g.DeferCleanup(), add meaningful failure messages to all Expect() calls, break the monolithic test into focused tests, and align with Ginkgo v2 conventions.
✅ Passed checks (4 passed)
Check name — Status — Explanation
  • Description Check — ✅ Passed — Check skipped; CodeRabbit’s high-level summary is enabled.
  • Docstring Coverage — ✅ Passed — Docstring coverage is 85.71%, which is sufficient; the required threshold is 80.00%.
  • Stable And Deterministic Test Names — ✅ Passed — All test names in upgrade.go are static and deterministic, with no dynamic content such as generated IDs, timestamps, or resource names.
  • Title check — ✅ Passed — The title directly references the ticket numbers (OTA-1550 and OTA-1543) and clearly indicates the main change: automating accept-risks functionality for OTA-1543. This accurately summarizes the primary purpose of the changeset.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Comment @coderabbitai help to get the list of available commands and usage tips.

@openshift-ci openshift-ci bot requested review from ardaguclu and tchap March 12, 2026 09:43

openshift-ci bot commented Mar 12, 2026

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: JianLi-RH
Once this PR has been reviewed and has the lgtm label, please assign atiratree for approval. For more information see the Code Review Process.

The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files.

Approvers can indicate their approval by writing /approve in a comment.
Approvers can cancel approval by writing /approve cancel in a comment.

@JianLi-RH
Author

/uncc @ardaguclu @tchap

@openshift-ci openshift-ci bot removed request for ardaguclu and tchap March 12, 2026 09:48

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 4

🧹 Nitpick comments (4)
test/e2e/upgrade.go (4)

229-241: Remove or properly skip the commented-out upgrade code.

This commented-out block creates maintenance burden. If the upgrade step is intentionally deferred (e.g., due to long runtime), consider using g.Skip() with a reason, or a separate test spec with appropriate labels. If it's obsolete, remove it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 229 - 241, The commented-out upgrade block
inside the test that uses g.By("Upgrade to not recommend version") should be
removed or explicitly skipped; replace the commented code with a clear
g.Skip(...) call (with a concise reason) in the same test/spec or move the
upgrade logic into a separate test spec annotated/labeled for long-running E2E
upgrades (so it’s not hidden in comments). Locate the block around the
g.By("Upgrade to not recommend version") call and either delete the commented
lines or add a g.Skip invocation there, or extract the logic into a new
It/Context with appropriate labels for deferred/long-running upgrade tests.

52-52: Use camelCase for local variables per Go conventions.

Variables like new_channel, recommend_version, notrecommend_version_1 use snake_case. Go convention is camelCase (e.g., newChannel, recommendVersion, notRecommendVersion1).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` at line 52, The variables in test/e2e/upgrade.go use
snake_case; rename them to camelCase to follow Go conventions: change
new_channel to newChannel, recommend_version to recommendVersion,
notrecommend_version_1 to notRecommendVersion1 (and any similar snake_case
variables) and update every declaration and all references/usages (assignments,
function calls, comparisons) to the new names so the code compiles and behavior
is unchanged.

15-27: Remove misplaced GinkgoRecover and empty hooks.

defer g.GinkgoRecover() at the Describe level has no effect—it only runs once when the suite is set up, not during test execution. Additionally, empty BeforeEach/AfterEach blocks add no value.

♻️ Suggested cleanup
 var _ = g.Describe(`[Jira:"oc adm upgrade"] accept`, func() {
-
-	defer g.GinkgoRecover()
-
 	var (
 		oc = NewCLI("oc", KubeConfigPath())
 	)
-
-	g.BeforeEach(func() {
-	})
-
-	g.AfterEach(func() {
-	})

 	g.It("Accepted Risks for OCP Cluster Updates", g.Label("TechPreview", "88175", "Slow", "manual"), func() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 15 - 27, Remove the misplaced defer
g.GinkgoRecover() call and the empty g.BeforeEach / g.AfterEach hooks from the
Describe block: the GinkgoRecover defer at the Describe level (in the Describe
function that declares oc and calls g.GinkgoRecover) should be deleted and any
empty BeforeEach/AfterEach functions removed to clean up the test; keep the
Describe, the oc variable declaration (NewCLI("oc", KubeConfigPath())), and any
real setup/teardown only if needed.

125-137: Migrate from deprecated wait.Poll to wait.PollUntilContextTimeout.

wait.Poll is deprecated in k8s.io/apimachinery and will be removed in a future Kubernetes release. The context-based wait.PollUntilContextTimeout is the recommended replacement, providing better cancellation support. This pattern appears at lines 125, 144, 168, and 207.

♻️ Example migration
+		ctx := context.Background()
-		err = wait.Poll(10*time.Second, 10*time.Minute, func() (bool, error) {
+		err = wait.PollUntilContextTimeout(ctx, 10*time.Second, 10*time.Minute, true, func(ctx context.Context) (bool, error) {
 			out, err = oc.Run("adm").Args("upgrade").Output()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 125 - 137, The test uses the deprecated
wait.Poll; replace calls to wait.Poll with context-aware
wait.PollUntilContextTimeout by creating a context with timeout (matching the
original 10*time.Minute) and passing a cancelable context into
wait.PollUntilContextTimeout, update the polling callback signature to accept
the context and return (bool, error) as before, and call cancel when done;
update the specific usage here (the anonymous function passed to wait.Poll, the
variable names out/err and the surrounding AssertWaitPollNoErr call) so the
behavior and retry interval (10*time.Second) remain the same; repeat the same
migration at the other occurrences noted (lines referencing wait.Poll at the
other locations) to remove the deprecated API usage.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/e2e/upgrade.go`:
- Around line 71-77: The chained type assertions retrieving version strings
(e.g., recommend_version, notrecommend_version_1/2/3) are unsafe and may panic;
update the extraction to check types and presence before casting: after
confirming nodes from result, for each index 1–4 fetch nodes[i], assert it's a
map[string]interface{} (with an ok check), assert the "version" field exists and
is a string (with an ok check), and use o.Expect to fail tests with clear
messages if any check fails; reference the variables recommend_version and
notrecommend_version_1/2/3 and replace the current direct assertions with
guarded checks that provide descriptive errors instead of allowing panics.
- Around line 36-54: GetVersion(oc) currently discards its error so the later
o.Expect(err).NotTo(o.HaveOccurred()) is checking an uninitialized err; change
the call to capture and check the error (use clusterVersion, err :=
GetVersion(oc) and immediately o.Expect(err).NotTo(o.HaveOccurred())), and then
ensure any subsequent o.Expect(err) assertions correspond to the most recent
oc.Run/... call that returned err (or remove the stray o.Expect(err) if it was
meant to assert an earlier, already-checked call); reference symbols:
GetVersion, clusterVersion, err, and the
oc.AsAdmin().WithoutNamespace().Run("get")... calls.

In `@test/e2e/util.go`:
- Around line 1291-1300: GetVersion may panic by indexing version[0..2] when
fullVersion isn't three dot-separated parts; update the function (GetVersion) to
validate the split result before accessing indices: check len(version) and if
fewer than 3 parts either return a descriptive error or populate ClusterVersion
with FullVersion and empty Major/Minor/Patch (or default values) to avoid index
out of range; ensure the returned ClusterVersion always reflects FullVersion and
does not perform unsafe indexing on the version slice.
- Around line 409-427: In GetWebResource, the function currently returns nil,
err when resp.StatusCode != http.StatusOK but err is nil, causing callers to see
(nil, nil); change this to return an explicit error (e.g., fmt.Errorf including
resp.StatusCode and http.StatusText) so non-200 responses are reported; locate
the status-check in GetWebResource and replace the nil error return with a
constructed error (optionally include response body or URL for more context)
while keeping resp.Body.Close and subsequent io.ReadAll logic intact.

---

Nitpick comments:
In `@test/e2e/upgrade.go`:
- Around line 229-241: The commented-out upgrade block inside the test that uses
g.By("Upgrade to not recommend version") should be removed or explicitly
skipped; replace the commented code with a clear g.Skip(...) call (with a
concise reason) in the same test/spec or move the upgrade logic into a separate
test spec annotated/labeled for long-running E2E upgrades (so it’s not hidden in
comments). Locate the block around the g.By("Upgrade to not recommend version")
call and either delete the commented lines or add a g.Skip invocation there, or
extract the logic into a new It/Context with appropriate labels for
deferred/long-running upgrade tests.
- Line 52: The variables in test/e2e/upgrade.go use snake_case; rename them to
camelCase to follow Go conventions: change new_channel to newChannel,
recommend_version to recommendVersion, notrecommend_version_1 to
notRecommendVersion1 (and any similar snake_case variables) and update every
declaration and all references/usages (assignments, function calls, comparisons)
to the new names so the code compiles and behavior is unchanged.
- Around line 15-27: Remove the misplaced defer g.GinkgoRecover() call and the
empty g.BeforeEach / g.AfterEach hooks from the Describe block: the
GinkgoRecover defer at the Describe level (in the Describe function that
declares oc and calls g.GinkgoRecover) should be deleted and any empty
BeforeEach/AfterEach functions removed to clean up the test; keep the Describe,
the oc variable declaration (NewCLI("oc", KubeConfigPath())), and any real
setup/teardown only if needed.
- Around line 125-137: The test uses the deprecated wait.Poll; replace calls to
wait.Poll with context-aware wait.PollUntilContextTimeout by creating a context
with timeout (matching the original 10*time.Minute) and passing a cancelable
context into wait.PollUntilContextTimeout, update the polling callback signature
to accept the context and return (bool, error) as before, and call cancel when
done; update the specific usage here (the anonymous function passed to
wait.Poll, the variable names out/err and the surrounding AssertWaitPollNoErr
call) so the behavior and retry interval (10*time.Second) remain the same;
repeat the same migration at the other occurrences noted (lines referencing
wait.Poll at the other locations) to remove the deprecated API usage.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9dc82d2f-d56e-48de-a469-7d566a8383f4

📥 Commits

Reviewing files that changed from the base of the PR and between b0ae763 and 8025d85.

📒 Files selected for processing (2)
  • test/e2e/upgrade.go
  • test/e2e/util.go

@JianLi-RH JianLi-RH force-pushed the e2e_accept_risks_OTA-1543 branch 2 times, most recently from 57d292b to 1be2183 on March 12, 2026 10:00

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

♻️ Duplicate comments (1)
test/e2e/upgrade.go (1)

36-39: ⚠️ Potential issue | 🟠 Major

Handle the GetVersion error before using clusterVersion.

Line 36 still discards the error, so clusterVersion.FullVersion can be empty and later assertions point at the graph fetch instead of the real failure.

🐛 Proposed fix
-		clusterVersion, _ := GetVersion(oc)
+		clusterVersion, err := GetVersion(oc)
+		o.Expect(err).NotTo(o.HaveOccurred())
 
 		g.By("checking the help info of `oc adm upgrade --help`")
 		out, err := oc.Run("adm").Args("upgrade", "--help").Output()
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 36 - 39, GetVersion's error is being
ignored; change the code that calls GetVersion (the call assigning
clusterVersion, _ := GetVersion(oc)) to capture and check the error (e.g.,
clusterVersion, err := GetVersion(oc) and fail the test or return if err != nil)
before using clusterVersion.FullVersion; ensure any subsequent assertions (and
the oc.Run("adm").Args("upgrade", "--help") block) only run after verifying
GetVersion succeeded.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/e2e/upgrade.go`:
- Around line 63-64: The graph query currently hardcodes "arch=amd64" when
building fullGraph (using variables graph, new_channel,
clusterVersion.FullVersion and then calling GetWebResource), which breaks on
non-amd64 clusters; change the code to determine the cluster architecture at
runtime (e.g., read the cluster's arch from clusterVersion or the cluster API)
and substitute that value into the query string, or if the test does not support
the cluster's arch, explicitly skip the test for unsupported arches instead of
forcing amd64. Ensure fullGraph uses the derived arch value (or the skip) before
calling GetWebResource.
- Around line 117-120: Capture the original .spec.desiredUpdate.acceptRisks
value at test start (before any oc.Run("adm").Args("upgrade"... mutations) and
store it in a variable; then explicitly set a known starting state (e.g., empty
list) before exercising the accept-risks flows (the blocks that call
oc.Run("adm").Args("upgrade","accept","--clear") and the related --replace tests
between the current region and lines 122-205) so the test is hermetic; finally,
in the test cleanup/teardown restore the saved original value back to
.spec.desiredUpdate.acceptRisks to avoid leaking this test's changes to later
tests. Ensure you reference the same oc.Run invocations (oc, out, err) and the
upgrade accept/replace flows when adding the snapshot/normalize/restore steps.
- Around line 69-82: The code currently indexes nodes with fixed positions
(extractVersion(1), extractVersion(2), extractVersion(3), extractVersion(4))
which will panic if fewer than five nodes are returned; before calling
extractVersion, assert the slice length (nodes) is at least 5 using
o.Expect(len(nodes) >= 5). Alternatively, avoid positional indexing by finding
nodes by a stable property (e.g., id/version/role) and call extractVersion on
the found index(s) instead; update recommendVersion/notRecommendVersion*
assignments to use the validated indexes or the discovered indexes from the
property-based lookup.
- Around line 84-94: The code unconditionally issues a JSON Patch remove using
the string output stored in overrides which will fail if /spec/overrides doesn't
exist and will not preserve structured JSON for re-adding; change the logic to
GET the clusterversion as JSON (use
oc.AsAdmin().WithoutNamespace().Run("get").Args("clusterversion","version","-o","json").Output())
and unmarshal to check for the presence of .spec.overrides, if present marshal
that node as JSON (or keep it as an interface{} value) then perform the Patch
remove (using JSONp{"remove","/spec/overrides",nil}) and only install the defer
add patch when you actually removed it, using the previously captured structured
JSON value for the add operation (refer to the overrides variable, the Patch
calls, and the defer block).

In `@test/e2e/util.go`:
- Around line 410-413: GetWebResource currently calls http.Get which uses the
default client with no timeout; create and use an http.Client with a sensible
Timeout (e.g., 5–15s) and call client.Get instead inside GetWebResource so the
request will fail fast if the external endpoint stalls, then continue to handle
resp and err as before. Ensure you replace the http.Get call in GetWebResource
with client.Get using the new client and keep existing response-body
handling/closing logic.

---

Duplicate comments:
In `@test/e2e/upgrade.go`:
- Around line 36-39: GetVersion's error is being ignored; change the code that
calls GetVersion (the call assigning clusterVersion, _ := GetVersion(oc)) to
capture and check the error (e.g., clusterVersion, err := GetVersion(oc) and
fail the test or return if err != nil) before using clusterVersion.FullVersion;
ensure any subsequent assertions (and the oc.Run("adm").Args("upgrade",
"--help") block) only run after verifying GetVersion succeeded.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: ce413192-fdcd-4d3e-9789-496857174b55

📥 Commits

Reviewing files that changed from the base of the PR and between 8025d85 and 57d292b.

📒 Files selected for processing (2)
  • test/e2e/upgrade.go
  • test/e2e/util.go

Comment on lines +63 to +64
fullGraph := fmt.Sprintf("%s?channel=%s&version=%s&arch=amd64", graph, new_channel, clusterVersion.FullVersion)
graphData, err := GetWebResource(fullGraph)

⚠️ Potential issue | 🟠 Major

Don't hardcode arch=amd64 in the graph query.

This spec only skips on TechPreview. On arm64/s390x/ppc64le clusters, querying fauxinnati as amd64 can return the wrong graph or no graph at all. Please derive the arch from the cluster here, or explicitly skip unsupported multi-arch cases.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 63 - 64, The graph query currently
hardcodes "arch=amd64" when building fullGraph (using variables graph,
new_channel, clusterVersion.FullVersion and then calling GetWebResource), which
breaks on non-amd64 clusters; change the code to determine the cluster
architecture at runtime (e.g., read the cluster's arch from clusterVersion or
the cluster API) and substitute that value into the query string, or if the test
does not support the cluster's arch, explicitly skip the test for unsupported
arches instead of forcing amd64. Ensure fullGraph uses the derived arch value
(or the skip) before calling GetWebResource.

Comment on lines +69 to +82
nodes, ok := result["nodes"].([]interface{})
o.Expect(ok).To(o.BeTrue(), "get nodes failed")

extractVersion := func(index int) string {
node, ok := nodes[index].(map[string]interface{})
o.Expect(ok).To(o.BeTrue(), "node %d is not a map", index)
version, ok := node["version"].(string)
o.Expect(ok).To(o.BeTrue(), "node %d missing version string", index)
return version
}
recommendVersion := extractVersion(1)
notRecommendVersion1 := extractVersion(2)
notRecommendVersion2 := extractVersion(3)
notRecommendVersion3 := extractVersion(4)

⚠️ Potential issue | 🟠 Major

Guard the fixed node indexes before calling extractVersion.

extractVersion(4) will panic if the graph ever returns fewer than five nodes. At minimum, assert the length before indexing; ideally, select the target nodes by property instead of position.

🛡️ Minimal fix
 		nodes, ok := result["nodes"].([]interface{})
 		o.Expect(ok).To(o.BeTrue(), "get nodes failed")
+		o.Expect(len(nodes)).To(o.BeNumerically(">=", 5), "expected at least 5 nodes in the graph")
 
 		extractVersion := func(index int) string {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 69 - 82, The code currently indexes nodes
with fixed positions (extractVersion(1), extractVersion(2), extractVersion(3),
extractVersion(4)) which will panic if fewer than five nodes are returned;
before calling extractVersion, assert the slice length (nodes) is at least 5
using o.Expect(len(nodes) >= 5). Alternatively, avoid positional indexing by
finding nodes by a stable property (e.g., id/version/role) and call
extractVersion on the found index(s) instead; update
recommendVersion/notRecommendVersion* assignments to use the validated indexes
or the discovered indexes from the property-based lookup.

Comment on lines +117 to +120
g.By("clear risks when the accept risk list is empty")
out, err = oc.Run("adm").Args("upgrade", "accept", "--clear").Output()
o.Expect(err).NotTo(o.HaveOccurred())
o.Expect(out).To(o.ContainSubstring("info: Accept risks are not changed"))

⚠️ Potential issue | 🟠 Major

Make the accept-risks flow hermetic from the start.

Lines 117-120 assume .spec.desiredUpdate.acceptRisks is already empty, and the final --replace leaves the cluster modified even on success. Snapshot the original value before the first mutation, normalize to a known starting state, and restore it in cleanup so later specs do not inherit this test's upgrade policy.

Also applies to: 122-205

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 117 - 120, Capture the original
.spec.desiredUpdate.acceptRisks value at test start (before any
oc.Run("adm").Args("upgrade"... mutations) and store it in a variable; then
explicitly set a known starting state (e.g., empty list) before exercising the
accept-risks flows (the blocks that call
oc.Run("adm").Args("upgrade","accept","--clear") and the related --replace tests
between the current region and lines 122-205) so the test is hermetic; finally,
in the test cleanup/teardown restore the saved original value back to
.spec.desiredUpdate.acceptRisks to avoid leaking this test's changes to later
tests. Ensure you reference the same oc.Run invocations (oc, out, err) and the
upgrade accept/replace flows when adding the snapshot/normalize/restore steps.


@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (5)
test/e2e/util.go (1)

409-427: ⚠️ Potential issue | 🟠 Major

Add a deadline to this HTTP fetch.

http.Get uses the default client with no timeout. If the external upgrade-graph endpoint stalls, the test will hang indefinitely until the outer test timeout fires.

⏱️ Proposed fix
 func GetWebResource(url string) ([]byte, error) {
 	e2e.Logf("Getting web resource: %s", url)
-	resp, err := http.Get(url)
+	client := &http.Client{Timeout: 30 * time.Second}
+	resp, err := client.Get(url)
 	if err != nil {
 		return nil, err
 	}


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/util.go` around lines 409 - 427, GetWebResource uses http.Get with
no timeout which can hang; change it to perform the request with a deadline by
creating an http.Client with a reasonable Timeout (or use context.WithTimeout +
http.NewRequestWithContext) and use that client to do the request inside
GetWebResource; ensure you still close resp.Body and return bodyBytes/error
unchanged (refer to function GetWebResource and variables resp/bodyBytes) and
pick a sensible timeout (e.g. several seconds or configurable) so the test won't
stall.
test/e2e/upgrade.go (4)

70-83: ⚠️ Potential issue | 🟠 Major

Guard the fixed node indexes before calling extractVersion.

extractVersion(4) will panic if the graph returns fewer than five nodes. Add a length assertion before indexing.

🛡️ Minimal fix
 		nodes, ok := result["nodes"].([]interface{})
 		o.Expect(ok).To(o.BeTrue(), "get nodes failed")
+		o.Expect(len(nodes)).To(o.BeNumerically(">=", 5), "expected at least 5 nodes in the graph, got %d", len(nodes))

 		extractVersion := func(index int) string {


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 70 - 83, The code calls extractVersion with
fixed indexes (1..4) without ensuring nodes has at least five elements, which
can panic; before calling extractVersion, assert the slice length (e.g., check
len(nodes) >= 5) using the existing test helper (o.Expect) or similar guard, and
fail the test with a clear message if there are too few nodes so
extractVersion(1), extractVersion(2), extractVersion(3), and extractVersion(4)
are only invoked when nodes contains the required indexes.

118-120: ⚠️ Potential issue | 🟠 Major

Make the accept-risks flow hermetic.

The test assumes .spec.desiredUpdate.acceptRisks is empty at the start, but doesn't verify or restore this state. If a previous test or cluster state left risks configured, the test may fail spuriously or leave the cluster modified after completion.

Capture the original acceptRisks value at test start and restore it in cleanup (e.g., via defer).


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 118 - 120, Before calling the adm upgrade
accept --clear flow, read and save the current .spec.desiredUpdate.acceptRisks
for the ClusterVersion resource (e.g., by using the existing oc client used in
this test file) and install a defer to restore that saved acceptRisks value at
cleanup; specifically, just prior to the adm command invocation around the
oc.Run("adm").Args("upgrade","accept","--clear") call capture the current
acceptRisks, and in a deferred function issue the corresponding patch/update
(using the oc client) to set .spec.desiredUpdate.acceptRisks back to the
original value so the test is hermetic and leaves cluster state unchanged.

63-64: ⚠️ Potential issue | 🟠 Major

Don't hardcode arch=amd64 in the graph query.

This spec only skips on TechPreview. On arm64/s390x/ppc64le clusters, querying fauxinnati as amd64 can return the wrong graph or no graph at all. Please derive the arch from the cluster here, or explicitly skip unsupported multi-arch cases.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 63 - 64, The query string hardcodes
"arch=amd64" when building fullGraph which fails on non-amd64 clusters; change
the code that constructs fullGraph (using variables graph, new_channel,
clusterVersion.FullVersion) to derive the architecture from the test cluster
(e.g., read the cluster/node architecture field you already have accessible in
this test—use the clusterVersion or node status/architecture value instead of
the literal "amd64") or, if multi-arch is unsupported here, add an explicit skip
for unsupported architectures before constructing fullGraph; update the format
string to use the derived arch variable rather than the hardcoded value.

85-95: ⚠️ Potential issue | 🟠 Major

Unconditional remove of /spec/overrides will fail when the field doesn't exist.

The comment says "if exists" but the code unconditionally executes a JSON Patch "remove" operation. Per RFC 6902, "remove" fails if the target path doesn't exist. Additionally, the jsonpath output (a string with quotes) used in the "add" restore won't faithfully recreate the original structure.

Check whether /spec/overrides exists before attempting removal, and capture it as structured JSON for proper restoration.


🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 85 - 95, The current code always issues a
JSON Patch "remove" on clusterversion/version /spec/overrides using the string
output in overrides, causing errors when the path is absent and preventing
faithful restore; change the logic to first GET clusterversion/version -o json
(via oc.AsAdmin().WithoutNamespace().Run(...).Args(...).Output()), parse the
returned JSON to check if spec.overrides exists, only call Patch(...,
[]JSONp{{"remove","/spec/overrides",nil}}) when the field is present, and when
saving for restore store the actual parsed JSON value (not the jsonpath string)
and use that structured value in the deferred Patch add operation (JSONp add
with the parsed overrides) so removal is conditional and restoration preserves
structure.
🧹 Nitpick comments (6)
test/e2e/upgrade.go (4)

23-27: Remove empty lifecycle hooks.

The BeforeEach and AfterEach blocks are empty and serve no purpose. Remove them to reduce noise.

🧹 Proposed fix
-	g.BeforeEach(func() {
-	})
-
-	g.AfterEach(func() {
-	})
-
 	g.It("Accepted Risks for OCP Cluster Updates", g.Label("TechPreview", "88175", "Slow", "manual"), func() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 23 - 27, Remove the no-op test lifecycle
hooks: delete the empty BeforeEach and AfterEach blocks (the calls to
g.BeforeEach(func() {}) and g.AfterEach(func() {})) from the test setup so there
are no empty hooks left; ensure no other setup/teardown logic is accidentally
removed and that the test suite still compiles after removing these functions.

235-246: Commented-out test code should be removed or tracked.

The "Upgrade to not recommend version" step is fully commented out. If this is intentional and deferred work, consider adding a TODO with a tracking issue reference. Otherwise, remove dead code.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 235 - 246, The block under the test step
g.By("Upgrade to not recommend version") contains fully commented-out code
(calls like oc.Run("adm").Args("upgrade", "--to", notRecommendVersion3) and the
wait.Poll/assert logic) and should be cleaned up: either delete the dead
commented code or replace it with a single-line TODO that references a tracking
issue/PR and explains why the upgrade test is disabled (include the variables
notRecommendVersion3/notRecommendVersion1 in the TODO for clarity); ensure the
surrounding test flow (the g.By call) is updated accordingly so no misleading
empty step remains.

17-17: Incorrect usage of g.GinkgoRecover().

GinkgoRecover() is intended to be called inside goroutines spawned during test execution to catch panics and report them as test failures. Placing defer g.GinkgoRecover() at the Describe block level has no effect since the main test flow doesn't need recovery in this manner.

Remove this line.

🔧 Proposed fix
 var _ = g.Describe(`[Jira:"oc adm upgrade"] accept`, func() {
-
-	defer g.GinkgoRecover()

 	var (
 		oc = NewCLI("oc", KubeConfigPath())
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` at line 17, Remove the top-level defer g.GinkgoRecover()
call in test/e2e/upgrade.go — GinkgoRecover should only be deferred inside
goroutines to convert panics into test failures, so delete the lone "defer
g.GinkgoRecover()" (found in the Describe/test setup) and, if you have any
goroutines in this file, wrap their entry functions with a deferred
g.GinkgoRecover() inside the goroutine bodies instead.

131-142: wait.Poll is deprecated; use wait.PollUntilContextTimeout instead.

wait.Poll has been deprecated in favor of context-aware variants. This applies to all polling loops in this file (lines 131, 150, 174, 213).

♻️ Example refactor for the first polling loop
-		err = wait.Poll(10*time.Second, 10*time.Minute, func() (bool, error) {
+		err = wait.PollUntilContextTimeout(context.Background(), 10*time.Second, 10*time.Minute, true, func(ctx context.Context) (bool, error) {
 			out, err = oc.Run("adm").Args("upgrade").Output()
 			if err != nil {
 				e2e.Logf("get oc adm upgrade output failed. Trying again")
 				return false, nil
 			}
 			if !strings.Contains(out, notRecommendVersion1) {
 				e2e.Logf("non-recommend version %s is not available. Trying again", notRecommendVersion1)
 				return false, nil
 			}
 			return true, nil
 		})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 131 - 142, Replace deprecated wait.Poll
usage with the context-aware wait.PollUntilContextTimeout: change the call to
wait.PollUntilContextTimeout(ctx, 10*time.Second, 10*time.Minute, func(ctx
context.Context) (bool, error) { ... }) (or use
context.Background()/context.TODO() if no test context is available), update the
callback signature to accept context.Context, keep the same logic calling
oc.Run("adm").Args("upgrade").Output() and checking strings.Contains(out,
notRecommendVersion1), and add the "context" import; apply the same replacement
for the other polling loops (the ones around lines 150, 174, 213) to use
wait.PollUntilContextTimeout consistently.
test/e2e/util.go (2)

41-46: Consider more descriptive field names for JSONp.

The field names Oper, Path, Valu are abbreviated inconsistently. While the JSON serialization uses standard JSON Patch field names (op, path, value), the Go field names could be clearer (e.g., Op or Operation, Value).

This is a minor readability nit.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/util.go` around lines 41 - 46, The JSONp struct uses abbreviated Go
field names (Oper, Path, Valu) which hurts readability; rename the struct fields
to clearer identifiers (e.g., Op or Operation, Path (keep or PathName), and
Value) while keeping the json tags unchanged (`json:"op"`, `json:"path"`,
`json:"value,omitempty"`) so external JSON stays the same; update any usages of
JSONp, JSONp.Oper, JSONp.Path, and JSONp.Valu throughout tests to the new field
names to ensure compilation.

1025-1031: Using o.Expect in a utility function can cause panics outside Ginkgo context.

isTechPreview uses o.Expect(err).NotTo(o.HaveOccurred()) which will panic if called from code that isn't running inside a Ginkgo test. Consider returning an error or a tuple (bool, error) to make this function safer and more reusable.

♻️ Proposed safer signature
-func isTechPreview(oc *CLI) bool {
+func isTechPreview(oc *CLI) (bool, error) {
 	techPreview, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("clusterversion", "version", "-o=jsonpath='{.status.conditions[?(@.type==\"Upgradeable\")].message}'").Output()
-	o.Expect(err).NotTo(o.HaveOccurred())
+	if err != nil {
+		return false, err
+	}
 	e2e.Logf("Upgradeable: %v\n", techPreview)
-	return strings.Contains(techPreview, "TechPreviewNoUpgrade")
+	return strings.Contains(techPreview, "TechPreviewNoUpgrade"), nil
 }

Then update callers (in upgrade.go) to handle the error:

isTech, err := isTechPreview(oc)
o.Expect(err).NotTo(o.HaveOccurred())
if !isTech {
    g.Skip("Skipping test: only tech-preview clusters supported")
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/util.go` around lines 1025 - 1031, Change isTechPreview to return
(bool, error) instead of asserting with o.Expect: detect the cluster by running
oc.AsAdmin().WithoutNamespace().Run("get")...Output(), then return false with
the encountered error if err != nil, otherwise return strings.Contains(...),
nil; update all callers (e.g., in upgrade.go) to check the error (isTech, err :=
isTechPreview(oc); o.Expect(err).NotTo(o.HaveOccurred())) and then use isTech to
decide whether to skip (g.Skip) or continue, removing any direct o.Expect calls
from inside isTechPreview.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/e2e/upgrade.go`:
- Around line 50-51: The code discards errors when fetching original cluster
values into oldUpstream and oldChannel via
oc.AsAdmin().WithoutNamespace().Run("get").Args(...).Output(), which can lead
the deferred restoration to overwrite real config with empty strings; update
both calls to capture and check the returned error (e.g., errUpstream,
errChannel) and if either is non-nil, fail the test or return the error (use
t.Fatalf or return) instead of proceeding, so the deferred restoration block
that writes oldUpstream/oldChannel back only runs when those values were
successfully read.

---

Duplicate comments:
In `@test/e2e/upgrade.go`:
- Around line 70-83: The code calls extractVersion with fixed indexes (1..4)
without ensuring nodes has at least five elements, which can panic; before
calling extractVersion, assert the slice length (e.g., check len(nodes) >= 5)
using the existing test helper (o.Expect) or similar guard, and fail the test
with a clear message if there are too few nodes so extractVersion(1),
extractVersion(2), extractVersion(3), and extractVersion(4) are only invoked
when nodes contains the required indexes.
- Around line 118-120: Before calling the adm upgrade accept --clear flow, read
and save the current .spec.desiredUpdate.acceptRisks for the ClusterVersion
resource (e.g., by using the existing oc client used in this test file) and
install a defer to restore that saved acceptRisks value at cleanup;
specifically, just prior to the adm command invocation around the
oc.Run("adm").Args("upgrade","accept","--clear") call capture the current
acceptRisks, and in a deferred function issue the corresponding patch/update
(using the oc client) to set .spec.desiredUpdate.acceptRisks back to the
original value so the test is hermetic and leaves cluster state unchanged.
- Around line 63-64: The query string hardcodes "arch=amd64" when building
fullGraph which fails on non-amd64 clusters; change the code that constructs
fullGraph (using variables graph, new_channel, clusterVersion.FullVersion) to
derive the architecture from the test cluster (e.g., read the cluster/node
architecture field you already have accessible in this test—use the
clusterVersion or node status/architecture value instead of the literal "amd64")
or, if multi-arch is unsupported here, add an explicit skip for unsupported
architectures before constructing fullGraph; update the format string to use the
derived arch variable rather than the hardcoded value.
- Around line 85-95: The current code always issues a JSON Patch "remove" on
clusterversion/version /spec/overrides using the string output in overrides,
causing errors when the path is absent and preventing faithful restore; change
the logic to first GET clusterversion/version -o json (via
oc.AsAdmin().WithoutNamespace().Run(...).Args(...).Output()), parse the returned
JSON to check if spec.overrides exists, only call Patch(...,
[]JSONp{{"remove","/spec/overrides",nil}}) when the field is present, and when
saving for restore store the actual parsed JSON value (not the jsonpath string)
and use that structured value in the deferred Patch add operation (JSONp add
with the parsed overrides) so removal is conditional and restoration preserves
structure.

In `@test/e2e/util.go`:
- Around line 409-427: GetWebResource uses http.Get with no timeout which can
hang; change it to perform the request with a deadline by creating an
http.Client with a reasonable Timeout (or use context.WithTimeout +
http.NewRequestWithContext) and use that client to do the request inside
GetWebResource; ensure you still close resp.Body and return bodyBytes/error
unchanged (refer to function GetWebResource and variables resp/bodyBytes) and
pick a sensible timeout (e.g. several seconds or configurable) so the test won't
stall.

---

Nitpick comments:
In `@test/e2e/upgrade.go`:
- Around line 23-27: Remove the no-op test lifecycle hooks: delete the empty
BeforeEach and AfterEach blocks (the calls to g.BeforeEach(func() {}) and
g.AfterEach(func() {})) from the test setup so there are no empty hooks left;
ensure no other setup/teardown logic is accidentally removed and that the test
suite still compiles after removing these functions.
- Around line 235-246: The block under the test step g.By("Upgrade to not
recommend version") contains fully commented-out code (calls like
oc.Run("adm").Args("upgrade", "--to", notRecommendVersion3) and the
wait.Poll/assert logic) and should be cleaned up: either delete the dead
commented code or replace it with a single-line TODO that references a tracking
issue/PR and explains why the upgrade test is disabled (include the variables
notRecommendVersion3/notRecommendVersion1 in the TODO for clarity); ensure the
surrounding test flow (the g.By call) is updated accordingly so no misleading
empty step remains.
- Line 17: Remove the top-level defer g.GinkgoRecover() call in
test/e2e/upgrade.go — GinkgoRecover should only be deferred inside goroutines to
convert panics into test failures, so delete the lone "defer g.GinkgoRecover()"
(found in the Describe/test setup) and, if you have any goroutines in this file,
wrap their entry functions with a deferred g.GinkgoRecover() inside the
goroutine bodies instead.
- Around line 131-142: Replace deprecated wait.Poll usage with the context-aware
wait.PollUntilContextTimeout: change the call to
wait.PollUntilContextTimeout(ctx, 10*time.Second, 10*time.Minute, func(ctx
context.Context) (bool, error) { ... }) (or use
context.Background()/context.TODO() if no test context is available), update the
callback signature to accept context.Context, keep the same logic calling
oc.Run("adm").Args("upgrade").Output() and checking strings.Contains(out,
notRecommendVersion1), and add the "context" import; apply the same replacement
for the other polling loops (the ones around lines 150, 174, 213) to use
wait.PollUntilContextTimeout consistently.

In `@test/e2e/util.go`:
- Around line 41-46: The JSONp struct uses abbreviated Go field names (Oper,
Path, Valu) which hurts readability; rename the struct fields to clearer
identifiers (e.g., Op or Operation, Path (keep or PathName), and Value) while
keeping the json tags unchanged (`json:"op"`, `json:"path"`,
`json:"value,omitempty"`) so external JSON stays the same; update any usages of
JSONp, JSONp.Oper, JSONp.Path, and JSONp.Valu throughout tests to the new field
names to ensure compilation.
- Around line 1025-1031: Change isTechPreview to return (bool, error) instead of
asserting with o.Expect: detect the cluster by running
oc.AsAdmin().WithoutNamespace().Run("get")...Output(), then return false with
the encountered error if err != nil, otherwise return strings.Contains(...),
nil; update all callers (e.g., in upgrade.go) to check the error (isTech, err :=
isTechPreview(oc); o.Expect(err).NotTo(o.HaveOccurred())) and then use isTech to
decide whether to skip (g.Skip) or continue, removing any direct o.Expect calls
from inside isTechPreview.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 9b489c72-c85e-40a8-8d24-dbdfcbbad04c

📥 Commits

Reviewing files that changed from the base of the PR and between 57d292b and 1be2183.

📒 Files selected for processing (2)
  • test/e2e/upgrade.go
  • test/e2e/util.go

@JianLi-RH JianLi-RH force-pushed the e2e_accept_risks_OTA-1543 branch 2 times, most recently from ee76ab8 to 834aa52 Compare March 12, 2026 13:32

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🧹 Nitpick comments (4)
test/e2e/upgrade.go (3)

70-70: Use camelCase for local variable naming consistency.

Go convention prefers camelCase for local variables. new_channel should be newChannel for consistency with the rest of the codebase (e.g., oldUpstream, oldChannel).

✏️ Suggested rename
 		graph := "https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"
-		new_channel := "OCP-88175"
+		newChannel := "OCP-88175"
 		_, err = oc.AsAdmin().WithoutNamespace().Patch("", "clusterversion/version", []JSONp{
 			{"add", "/spec/upstream", graph},
-			{"add", "/spec/channel", new_channel},
+			{"add", "/spec/channel", newChannel},
 		})

Also update references at lines 81 and any other occurrences.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` at line 70, Rename the local variable new_channel to
newChannel to follow Go naming conventions and match surrounding variables like
oldUpstream and oldChannel; update every reference to new_channel (including the
usage near the upgrade test where it's passed/compared/assigned) to use
newChannel so identifiers remain consistent and compile.

15-27: Remove unnecessary constructs: defer GinkgoRecover() and empty hooks.

  1. defer g.GinkgoRecover() at line 17 is typically used inside goroutines to catch panics; at the Describe level it serves no purpose since Ginkgo already handles panics in specs.
  2. The empty BeforeEach and AfterEach hooks (lines 23-27) add noise without functionality.
♻️ Proposed cleanup
 var _ = g.Describe(`[Jira:"oc adm upgrade"] accept`, func() {
-
-	defer g.GinkgoRecover()
-
 	var (
 		oc = NewCLI("oc", KubeConfigPath())
 	)
-
-	g.BeforeEach(func() {
-	})
-
-	g.AfterEach(func() {
-	})

 	g.It("Accepted Risks for OCP Cluster Updates", g.Label("TechPreview", "88175", "Slow", "manual"), func() {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 15 - 27, Remove the unnecessary defer
g.GinkgoRecover() call and the empty g.BeforeEach and g.AfterEach hooks from the
g.Describe block; locate the g.Describe(`[Jira:"oc adm upgrade"] accept`, func()
{ ... }) declaration and delete the g.GinkgoRecover() line and the empty
BeforeEach/AfterEach functions (they add no behavior), leaving only the variable
declarations (e.g., oc := NewCLI("oc", KubeConfigPath())) and any real test
code.

240-251: Remove or address the commented-out upgrade block.

Large blocks of commented-out code reduce readability. If this functionality is planned for future implementation, replace it with a TODO comment or track it in the issue tracker. Otherwise, remove it.

♻️ Suggested cleanup
 		g.By("Upgrade to not recommend version")
-		// out, err = oc.Run("adm").Args("upgrade", "--to", notRecommendVersion3).Output()
-		// o.Expect(err).NotTo(o.HaveOccurred())
-		// expectedStatus := fmt.Sprintf("Cluster version is %s", notRecommendVersion3)
-		// err = wait.Poll(30*time.Second, 90*time.Minute, func() (bool, error) {
-		// 	out, _ = oc.Run("get").Args("clusterversion").Output()
-		// 	if !strings.Contains(out, expectedStatus) {
-		// 		return false, nil
-		// 	}
-		// 	return true, nil
-		// })
-		// AssertWaitPollNoErr(err, fmt.Sprintf("upgrade to version %s failed", notRecommendVersion1))
+		// TODO(OTA-1543): Implement actual upgrade verification once runtime allows longer test execution
+		e2e.Logf("Skipping actual upgrade to %s - requires extended runtime", notRecommendVersion3)
 	})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 240 - 251, Remove the large commented-out
upgrade block (the lines using oc.Run("adm").Args("upgrade", "--to",
notRecommendVersion3) and subsequent wait.Poll/AssertWaitPollNoErr logic) or
replace it with a single-line TODO referencing a tracking issue; if keeping for
future work, add a TODO comment indicating the issue/PR number and why it's
disabled and keep only a minimal one-line note instead of the entire commented
code, and ensure any referenced identifiers (notRecommendVersion3,
expectedStatus, wait.Poll, AssertWaitPollNoErr, oc.Run) are either removed from
the file if unused or left only in active code paths.
test/e2e/util.go (1)

41-46: Consider using conventional field name Value instead of Valu.

The field name Valu is non-standard and may confuse readers expecting the typical Value spelling. While the JSON tag ensures correct serialization, the Go identifier could be clearer.

✏️ Suggested rename
 type JSONp struct {
 	Oper string      `json:"op"`
 	Path string      `json:"path"`
-	Valu interface{} `json:"value,omitempty"`
+	Value interface{} `json:"value,omitempty"`
 }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/util.go` around lines 41 - 46, The struct JSONp uses a non-standard
field identifier Valu which is confusing; rename the field to Value (i.e.,
change the struct field from Valu to Value while keeping the existing json tag
`json:"value,omitempty"`) so the code reads clearer and remains
backwards-compatible for JSON encoding; update any references to JSONp.Valu
throughout the test/e2e package to JSONp.Value to compile.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/e2e/upgrade.go`:
- Around line 36-49: The json.Unmarshal call shadows the outer err (using :=)
and its error isn't checked, so if Unmarshal fails the deferred add will try to
restore invalid data; change `err := json.Unmarshal(...)` to `err =
json.Unmarshal(...)` to avoid shadowing, immediately check `err` (e.g.
o.Expect(err).NotTo(o.HaveOccurred()) or return/fail) before using `data`, and
only install the deferred Patch add(...) after a successful unmarshal so the
defer restores valid data.

In `@test/e2e/util.go`:
- Around line 170-177: In the Patch function, the json.Marshal call's error is
checked after logging, causing a misleading empty log when marshal fails; move
the e2e.Logf("will patching: ...", ...) to after the error check (i.e., after
verifying err from json.Marshal) and ensure you log the marshal error via
e2e.Logf or return it immediately if err != nil; reference the Patch function
and the variables p and err from the json.Marshal(...) call to locate where to
reorder the logging.

---

Nitpick comments:
In `@test/e2e/upgrade.go`:
- Line 70: Rename the local variable new_channel to newChannel to follow Go
naming conventions and match surrounding variables like oldUpstream and
oldChannel; update every reference to new_channel (including the usage near the
upgrade test where it's passed/compared/assigned) to use newChannel so
identifiers remain consistent and compile.
- Around line 15-27: Remove the unnecessary defer g.GinkgoRecover() call and the
empty g.BeforeEach and g.AfterEach hooks from the g.Describe block; locate the
g.Describe(`[Jira:"oc adm upgrade"] accept`, func() { ... }) declaration and
delete the g.GinkgoRecover() line and the empty BeforeEach/AfterEach functions
(they add no behavior), leaving only the variable declarations (e.g., oc :=
NewCLI("oc", KubeConfigPath())) and any real test code.
- Around line 240-251: Remove the large commented-out upgrade block (the lines
using oc.Run("adm").Args("upgrade", "--to", notRecommendVersion3) and subsequent
wait.Poll/AssertWaitPollNoErr logic) or replace it with a single-line TODO
referencing a tracking issue; if keeping for future work, add a TODO comment
indicating the issue/PR number and why it's disabled and keep only a minimal
one-line note instead of the entire commented code, and ensure any referenced
identifiers (notRecommendVersion3, expectedStatus, wait.Poll,
AssertWaitPollNoErr, oc.Run) are either removed from the file if unused or left
only in active code paths.

In `@test/e2e/util.go`:
- Around line 41-46: The struct JSONp uses a non-standard field identifier Valu
which is confusing; rename the field to Value (i.e., change the struct field
from Valu to Value while keeping the existing json tag `json:"value,omitempty"`)
so the code reads clearer and remains backwards-compatible for JSON encoding;
update any references to JSONp.Valu throughout the test/e2e package to
JSONp.Value to compile.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: cdffda8b-653d-4dd6-a933-ff34c3fd1110

📥 Commits

Reviewing files that changed from the base of the PR and between 1be2183 and 834aa52.

📒 Files selected for processing (2)
  • test/e2e/upgrade.go
  • test/e2e/util.go

Comment on lines +36 to +49
g.By("remove overrides from clusterversion version if exists")
overrides, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("clusterversion", "version", "-o", "jsonpath={.spec.overrides}").Output()
if err == nil && strings.Contains(overrides, "ClusterImagePolicy") {
	e2e.Logf("overrides: <<%s>>", overrides)
	var data []map[string]interface{}
	err := json.Unmarshal([]byte(overrides), &data)
	_, err = oc.AsAdmin().WithoutNamespace().Patch("", "clusterversion/version", []JSONp{
		{"remove", "/spec/overrides", nil},
	})
	o.Expect(err).NotTo(o.HaveOccurred())
	defer oc.AsAdmin().WithoutNamespace().Patch("", "clusterversion/version", []JSONp{
		{"add", "/spec/overrides", data},
	})
}

⚠️ Potential issue | 🟠 Major

Bug: json.Unmarshal error is never checked.

Line 41 declares a new err variable that shadows the outer scope, and line 42 immediately overwrites it. If json.Unmarshal fails, data will be nil/empty, and the deferred add operation will restore invalid data.

🐛 Proposed fix
 		g.By("remove overrides from clusterversion version if exists")
 		overrides, err := oc.AsAdmin().WithoutNamespace().Run("get").Args("clusterversion", "version", "-o", "jsonpath={.spec.overrides}").Output()
 		if err == nil && strings.Contains(overrides, "ClusterImagePolicy") {
 			e2e.Logf("overrides: <<%s>>", overrides)
 			var data []map[string]interface{}
-			err := json.Unmarshal([]byte(overrides), &data)
+			if err := json.Unmarshal([]byte(overrides), &data); err != nil {
+				e2e.Failf("Failed to unmarshal overrides: %v", err)
+			}
 			_, err = oc.AsAdmin().WithoutNamespace().Patch("", "clusterversion/version", []JSONp{
 				{"remove", "/spec/overrides", nil},
 			})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 36 - 49, The json.Unmarshal call shadows
the outer err (using :=) and its error isn't checked, so if Unmarshal fails the
deferred add will try to restore invalid data; change `err :=
json.Unmarshal(...)` to `err = json.Unmarshal(...)` to avoid shadowing,
immediately check `err` (e.g. o.Expect(err).NotTo(o.HaveOccurred()) or
return/fail) before using `data`, and only install the deferred Patch add(...)
after a successful unmarshal so the defer restores valid data.

@JianLi-RH JianLi-RH force-pushed the e2e_accept_risks_OTA-1543 branch from 834aa52 to 56a5ee3 Compare March 13, 2026 01:44

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (5)
test/e2e/util.go (1)

173-176: ⚠️ Potential issue | 🟡 Minor

Move patch payload logging after marshal error handling.

On Line 174, p is logged before the marshal error check on Line 175. If marshal fails, the log is misleading.

Suggested fix
 func (c *CLI) Patch(namespace string, resource string, patch []JSONPatchOperation) (output string, err error) {
 	p, err := json.Marshal(patch)
-	e2e.Logf("will patching: %s with: %s in namespace: %s", resource, string(p), namespace)
 	if err != nil {
 		e2e.Logf("Marshal json Error - json.Marshal: '%v'", err)
 		return "", err
 	}
+	e2e.Logf("will patching: %s with: %s in namespace: %s", resource, string(p), namespace)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/util.go` around lines 173 - 176, The log statement prints the
marshaled patch payload before checking json.Marshal error, which can be
misleading; move the e2e.Logf("will patching: %s with: %s in namespace: %s",
resource, string(p), namespace) so it executes only after verifying err == nil
(i.e., after json.Marshal(patch) and the nil-check), and keep the existing error
log e2e.Logf("Marshal json Error - json.Marshal: '%v'", err) unchanged to report
marshal failures.
test/e2e/upgrade.go (4)

88-101: ⚠️ Potential issue | 🟠 Major

Guard node count before fixed-position indexing.

extractVersion(4) can panic when the graph returns fewer than five nodes; add a length assertion before the extractVersion calls at lines 98-101.

Suggested fix
 		nodes, ok := result["nodes"].([]interface{})
 		o.Expect(ok).To(o.BeTrue(), "get nodes failed")
+		o.Expect(len(nodes)).To(o.BeNumerically(">=", 5), "expected at least 5 nodes in graph response")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 88 - 101, The test assumes at least five
nodes but indexes nodes[1..4] directly which can panic; before calling
extractVersion (and before using nodes), assert the length: check len(nodes) >=
5 (or adapt to expected count) and fail the test with a clear message if not, or
replace the fixed-index calls with a safe loop over existing nodes; update
references around variables nodes and the extractVersion function and the calls
that assign recommendVersion / notRecommendVersion1/2/3 to use this guarded
length check.

42-43: ⚠️ Potential issue | 🟠 Major

Check json.Unmarshal error before patching overrides.

On Line 42, the unmarshal error is overwritten on Line 43 before it is validated, so malformed overrides can propagate into patch/restore logic.

Suggested fix
 		if err == nil && strings.Contains(overrides, "ClusterImagePolicy") {
 			e2e.Logf("overrides: <<%s>>", overrides)
 			var data []map[string]interface{}
-			err := json.Unmarshal([]byte(overrides), &data)
+			err = json.Unmarshal([]byte(overrides), &data)
+			o.Expect(err).NotTo(o.HaveOccurred(), "failed to unmarshal .spec.overrides")
 			_, err = oc.Patch("", "clusterversion/version", []JSONPatchOperation{
 				{"remove", "/spec/overrides", nil},
 			})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 42 - 43, The code currently calls
json.Unmarshal into variable err and then immediately reassigns err with the
result of oc.Patch (overwriting any unmarshal error); check and handle the
json.Unmarshal error before calling oc.Patch to prevent malformed overrides from
being used—specifically, after json.Unmarshal([]byte(overrides), &data) verify
err is nil and return or fail the test if not, then proceed to call oc.Patch("",
"clusterversion/version", []JSONPatchOperation{...}); do the same guard for any
later restore/patch logic that expects valid unmarshaled data.

82-83: ⚠️ Potential issue | 🟠 Major

Avoid hardcoding arch=amd64 in graph query.

Line 82 forces amd64, which can produce incorrect graph behavior on arm64/s390x/ppc64le/multi-arch clusters.

Suggested fix
-		fullGraph := fmt.Sprintf("%s?channel=%s&version=%s&arch=amd64", graph, newChannel, clusterVersion.FullVersion)
+		arch := ClusterArchitecture(oc)
+		if arch == string(MULTI) {
+			g.Skip("Skipping test: fauxinnati graph query requires a single architecture")
+		}
+		fullGraph := fmt.Sprintf("%s?channel=%s&version=%s&arch=%s", graph, newChannel, clusterVersion.FullVersion, arch)
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 82 - 83, The graph query hardcodes
arch=amd64 which breaks non-amd64 clusters; change the construction of fullGraph
to use the actual runtime architecture (e.g. set arch := runtime.GOARCH and use
that in fmt.Sprintf) or derive the cluster architecture from cluster metadata if
available, then call GetWebResource(fullGraph) with the computed arch instead of
the literal "amd64"; add the runtime import if you use runtime.GOARCH and update
references to fullGraph/graph/newChannel/clusterVersion.FullVersion accordingly.

124-127: ⚠️ Potential issue | 🟠 Major

Make accept-risks flow hermetic and restore original state.

This flow assumes an empty starting state and does not restore the original spec.desiredUpdate.acceptRisks at the end, so it can leak cluster state across specs.

Also applies to: 151-155, 170-239

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 124 - 127, The test block beginning with
g.By("clear risks when the accept risk list is empty") mutates cluster state
without restoring it; save the current spec.desiredUpdate.acceptRisks before
calling oc.Run("adm").Args("upgrade","accept","--clear").Output(), and after the
assertions restore the original value (use the same oc.Run/oc.Apply helpers used
elsewhere in this file to patch the ClusterVersion). If the test assumes an
empty starting list, make it hermetic by first capturing the original
acceptRisks, explicitly setting acceptRisks to an empty list for the test, and
then deferring or ensuring a final patch that restores the saved acceptRisks;
apply the same pattern to the other affected blocks referenced (around the other
tests in the 151-155 and 170-239 ranges).
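
One way to make the flow hermetic is a capture-then-restore helper whose returned closure is registered with DeferCleanup. The sketch below is illustrative only — `get`/`set` stand in for the suite's oc.Run/oc.Patch calls against spec.desiredUpdate.acceptRisks, which are not modeled here:

```go
package main

import "fmt"

// saveAndRestore captures the current value via get, then returns a
// restore func suitable for g.DeferCleanup. The test is free to mutate
// state in between; cleanup puts the original value back.
func saveAndRestore(get func() (string, error), set func(string) error) (func() error, error) {
	original, err := get()
	if err != nil {
		return nil, fmt.Errorf("capture original state: %w", err)
	}
	return func() error {
		return set(original)
	}, nil
}

func main() {
	state := "riskA,riskB" // stands in for spec.desiredUpdate.acceptRisks
	get := func() (string, error) { return state, nil }
	set := func(v string) error { state = v; return nil }

	restore, err := saveAndRestore(get, set)
	if err != nil {
		panic(err)
	}
	state = "" // the test clears the accept-risk list
	if err := restore(); err != nil {
		panic(err)
	}
	fmt.Println("restored:", state)
}
```

The same pattern covers the other affected blocks: capture once at the top of the spec, mutate freely, and let a single deferred restore undo everything.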
🧹 Nitpick comments (1)
test/e2e/upgrade.go (1)

24-29: Remove empty hooks and dead commented block to reduce noise.

Empty BeforeEach/AfterEach and the commented upgrade block add maintenance overhead without behavior.

Also applies to: 241-252

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 24 - 29, Remove the no-op test hooks and
dead commented upgrade block to reduce noise: delete the empty BeforeEach and
AfterEach function blocks (the g.BeforeEach and g.AfterEach declarations) and
remove the commented upgrade block referenced around lines 241-252 so only
active test logic remains; ensure no remaining references expect those hooks
before committing.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@test/e2e/upgrade.go`:
- Around line 47-49: The deferred oc.Patch calls silently ignore errors; replace
each defer oc.Patch(...) with g.DeferCleanup wrappers that call oc.Patch and
assert no error (e.g. g.DeferCleanup(func(){ if err := oc.Patch("",
"clusterversion/version", []JSONPatchOperation{{"add","/spec/overrides",data}});
g.Expect(err).NotTo(HaveOccurred()) })) so cleanup failures are reported—do this
for both occurrences of oc.Patch and reference the oc.Patch function and
g.DeferCleanup/g.Expect in the changes.

---

Duplicate comments:

In `@test/e2e/util.go`:
- Around line 173-176: The log statement prints the marshaled patch payload
before checking json.Marshal error, which can be misleading; move the
e2e.Logf("will patching: %s with: %s in namespace: %s", resource, string(p),
namespace) so it executes only after verifying err == nil (i.e., after
json.Marshal(patch) and the nil-check), and keep the existing error log
e2e.Logf("Marshal json Error - json.Marshal: '%v'", err) unchanged to report
marshal failures.
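
A minimal sketch of the reordered logging — marshal first, log the payload only on success (function name and log wording are illustrative):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// marshalPatch marshals the patch and logs the payload only after the
// marshal succeeded, so the log never shows a bogus or empty body.
func marshalPatch(patch interface{}) ([]byte, error) {
	p, err := json.Marshal(patch)
	if err != nil {
		fmt.Printf("Marshal json Error - json.Marshal: '%v'\n", err)
		return nil, err
	}
	fmt.Printf("will patch with: %s\n", string(p))
	return p, nil
}

func main() {
	_, _ = marshalPatch([]map[string]string{{"op": "remove", "path": "/spec/overrides"}})
}
```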


ℹ️ Review info
⚙️ Run configuration

Configuration used: Repository: openshift/coderabbit/.coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e49e15bc-ccd0-46e9-be15-b58484d4c9c2

📥 Commits

Reviewing files that changed from the base of the PR and between 834aa52 and 56a5ee3.

📒 Files selected for processing (2)
  • test/e2e/upgrade.go
  • test/e2e/util.go

Comment on lines +47 to +49
defer oc.Patch("", "clusterversion/version", []JSONPatchOperation{
{"add", "/spec/overrides", data},
})

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

cat -n test/e2e/upgrade.go | head -100

Repository: openshift/oc

Length of output: 4564


🏁 Script executed:

# Search for Patch method definition to understand return type
rg -A 5 "func.*Patch\(" test/e2e/ -t go | head -50

Repository: openshift/oc

Length of output: 482


🏁 Script executed:

# Search for g.DeferCleanup usage patterns in the test suite
rg "g\.DeferCleanup" test/e2e/ -B 2 -A 5 | head -80

Repository: openshift/oc

Length of output: 430


🏁 Script executed:

# Look for other deferred cleanup patterns in the test suite
rg "defer.*Patch\(" test/e2e/ -B 2 -A 2

Repository: openshift/oc

Length of output: 617


🏁 Script executed:

# Get the full Patch method to understand better
rg -A 15 "func \(c \*CLI\) Patch" test/e2e/util.go

Repository: openshift/oc

Length of output: 701


🏁 Script executed:

# Look at the surrounding context for line 43-45 to see proper error handling
sed -n '40,50p' test/e2e/upgrade.go

Repository: openshift/oc

Length of output: 465


🏁 Script executed:

# Check if there are any other similar patterns in upgrade.go
rg "defer.*Run\(" test/e2e/upgrade.go -B 1 -A 1

Repository: openshift/oc

Length of output: 38


Add error handling to deferred cleanup patches.

Both deferred oc.Patch() calls at lines 47-49 and 77-80 drop errors silently. If cleanup fails, the cluster remains in a mutated state with no indication of failure. Use g.DeferCleanup() with error assertions to catch restoration failures:

Suggested fix
-			defer oc.Patch("", "clusterversion/version", []JSONPatchOperation{
-				{"add", "/spec/overrides", data},
-			})
+			g.DeferCleanup(func() {
+				_, restoreErr := oc.Patch("", "clusterversion/version", []JSONPatchOperation{
+					{"add", "/spec/overrides", data},
+				})
+				o.Expect(restoreErr).NotTo(o.HaveOccurred())
+			})
@@
-		defer oc.Patch("", "clusterversion/version", []JSONPatchOperation{
-			{"add", "/spec/upstream", oldUpstream},
-			{"add", "/spec/channel", oldChannel},
-		})
+		g.DeferCleanup(func() {
+			_, restoreErr := oc.Patch("", "clusterversion/version", []JSONPatchOperation{
+				{"add", "/spec/upstream", oldUpstream},
+				{"add", "/spec/channel", oldChannel},
+			})
+			o.Expect(restoreErr).NotTo(o.HaveOccurred())
+		})
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@test/e2e/upgrade.go` around lines 47 - 49, The deferred oc.Patch calls
silently ignore errors; replace each defer oc.Patch(...) with g.DeferCleanup
wrappers that call oc.Patch and assert no error (e.g. g.DeferCleanup(func(){ if
err := oc.Patch("", "clusterversion/version",
[]JSONPatchOperation{{"add","/spec/overrides",data}});
g.Expect(err).NotTo(HaveOccurred()) })) so cleanup failures are reported—do this
for both occurrences of oc.Patch and reference the oc.Patch function and
g.DeferCleanup/g.Expect in the changes.

@JianLi-RH JianLi-RH force-pushed the e2e_accept_risks_OTA-1543 branch 2 times, most recently from e6448d3 to 61ec941 Compare March 13, 2026 03:03
@JianLi-RH JianLi-RH changed the title [WIP] OTA-1550 Automate accept risks for OTA-1543 OTA-1550 Automate accept risks for OTA-1543 Mar 13, 2026
@openshift-ci openshift-ci bot removed the do-not-merge/work-in-progress Indicates that a PR should not merge because it is a work in progress. label Mar 13, 2026
@JianLi-RH
Copy link
Author

Although I haven't gotten a passing run for this PR yet, I think you can start reviewing it; it's almost done.

/cc @hongkailiu

@openshift-ci openshift-ci bot requested a review from hongkailiu March 13, 2026 03:08
@JianLi-RH
Copy link
Author

The test case handles non-TechPreview clusters correctly:

[jianl@jianl-thinkpadt14gen4 oc]$ ./oc-tests-ext run-test "[Jira:\"oc adm upgrade\"] accept Accepted Risks for OCP Cluster Updates"
  Running Suite:  - /home/jianl/1_code/oc
  =======================================
  Random Seed: 1773300558 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [Jira:"oc adm upgrade"] accept Accepted Risks for OCP Cluster Updates [TechPreview, 88175, Slow, manual]
  /home/jianl/1_code/oc/test/e2e/upgrade.go:35
I0312 15:29:19.825986  462009 util.go:52] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-12-053102.kubeconfig get clusterversion version -o=jsonpath='{.status.conditions[?(@.type=="Upgradeable")].message}''
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-12-053102.kubeconfig get clusterversion version -o=jsonpath='{.status.conditions[?(@.type=="Upgradeable")].message}''
I0312 15:29:20.688743  462009 util.go:52] Upgradeable: 'Disabling ownership via cluster version overrides prevents upgrades between minor or major versions. Please remove overrides before requesting a minor or major version update.'
  Upgradeable: 'Disabling ownership via cluster version overrides prevents upgrades between minor or major versions. Please remove overrides before requesting a minor or major version update.'

    [SKIPPED] in [It] - /home/jianl/1_code/oc/test/e2e/upgrade.go:37 @ 03/12/26 15:29:20.689
  S [SKIPPED] [0.863 seconds]
  [Jira:"oc adm upgrade"] accept [It] Accepted Risks for OCP Cluster Updates [TechPreview, 88175, Slow, manual]
  /home/jianl/1_code/oc/test/e2e/upgrade.go:35

    [SKIPPED] Skipping test: only tech-preview clusters supported
    In [It] at: /home/jianl/1_code/oc/test/e2e/upgrade.go:37 @ 03/12/26 15:29:20.689
  ------------------------------

  Ran 0 of 1 Specs in 0.864 seconds
  SUCCESS! -- 0 Passed | 0 Failed | 0 Pending | 1 Skipped
[
  {
    "name": "[Jira:\"oc adm upgrade\"] accept Accepted Risks for OCP Cluster Updates",
    "lifecycle": "blocking",
    "duration": 863,
    "startTime": "2026-03-12 07:29:19.825609 UTC",
    "endTime": "2026-03-12 07:29:20.689364 UTC",
    "result": "skipped",
    "output": "Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-12-053102.kubeconfig get clusterversion version -o=jsonpath='{.status.conditions[?(@.type==\"Upgradeable\")].message}''\nUpgradeable: 'Disabling ownership via cluster version overrides prevents upgrades between minor or major versions. Please remove overrides before requesting a minor or major version update.'\n\n  [SKIPPED] in [It] - /home/jianl/1_code/oc/test/e2e/upgrade.go:37 @ 03/12/26 15:29:20.689\n\n skip [/home/jianl/1_code/oc/test/e2e/upgrade.go:37]: Skipping test: only tech-preview clusters supported\n"
  }
][jianl@jianl-thinkpadt14gen4 oc]$ 

@JianLi-RH
Copy link
Author

We need to update fauxinnati.

The risks we expect per version are:

4.22.0-ec.1: SomeInvokerThing
4.22.0-ec.2: SomeInvokerThing, SomeChannelThing
4.22.0-ec.3: SomeInfrastructureThing

But currently fauxinnati and the test case are inconsistent:

[jianl@jianl-thinkpadt14gen4 oc]$ ./oc-tests-ext run-test "[Jira:\"oc adm upgrade\"] accept Accepted Risks for OCP Cluster Updates"
  Running Suite:  - /home/jianl/1_code/oc
  =======================================
  Random Seed: 1773367978 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [Jira:"oc adm upgrade"] accept Accepted Risks for OCP Cluster Updates [TechPreview, 88175, Slow, Manual]
  /home/jianl/1_code/oc/test/e2e/upgrade.go:30
I0313 10:12:58.940943  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -o=jsonpath='{.status.conditions[?(@.type=="Upgradeable")].message}''
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -o=jsonpath='{.status.conditions[?(@.type=="Upgradeable")].message}''
I0313 10:13:00.173509  120433 util.go:60] Upgradeable: 'Cluster should not be upgraded between minor or major versions for multiple reasons: ClusterVersionOverridesSet,FeatureGates_RestrictedFeatureGates_TechPreviewNoUpgrade
* Disabling ownership via cluster version overrides prevents upgrades between minor or major versions. Please remove overrides before requesting a minor or major version update.
* Cluster operator config-operator should not be upgraded between minor or major versions: FeatureGatesUpgradeable: "TechPreviewNoUpgrade" does not allow updates'
  Upgradeable: 'Cluster should not be upgraded between minor or major versions for multiple reasons: ClusterVersionOverridesSet,FeatureGates_RestrictedFeatureGates_TechPreviewNoUpgrade
  * Disabling ownership via cluster version overrides prevents upgrades between minor or major versions. Please remove overrides before requesting a minor or major version update.
  * Cluster operator config-operator should not be upgraded between minor or major versions: FeatureGatesUpgradeable: "TechPreviewNoUpgrade" does not allow updates'

    STEP: remove overrides from clusterversion version if exists @ 03/13/26 10:13:00.173
I0313 10:13:00.173561  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -o jsonpath={.spec.overrides}'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -o jsonpath={.spec.overrides}'
I0313 10:13:01.397743  120433 util.go:60] overrides: <<[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]>>
  overrides: <<[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]>>
I0313 10:13:01.397833  120433 util.go:60] will patching: clusterversion/version with: [{"op":"remove","path":"/spec/overrides"}] in namespace: 
  will patching: clusterversion/version with: [{"op":"remove","path":"/spec/overrides"}] in namespace: 
I0313 10:13:01.397849  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{"op":"remove","path":"/spec/overrides"}]'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{"op":"remove","path":"/spec/overrides"}]'
I0313 10:13:02.900891  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -o=jsonpath={.status.desired.version}'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -o=jsonpath={.status.desired.version}'
    STEP: checking the help info of `oc adm upgrade --help` @ 03/13/26 10:13:04.117
I0313 10:13:04.117887  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade --help'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade --help'
    STEP: checking the help info of `oc adm upgrade accept --help` @ 03/13/26 10:13:04.197
I0313 10:13:04.197140  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept --help'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept --help'
    STEP: patch fauxinnati upstream @ 03/13/26 10:13:04.274
I0313 10:13:04.274869  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -ojsonpath={.spec.upstream}'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -ojsonpath={.spec.upstream}'
I0313 10:13:05.453182  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -ojsonpath={.spec.channel}'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -ojsonpath={.spec.channel}'
I0313 10:13:06.726994  120433 util.go:60] will patching: clusterversion/version with: [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}] in namespace: 
  will patching: clusterversion/version with: [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}] in namespace: 
I0313 10:13:06.727035  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}]'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}]'
    STEP: check if new upstream has enough target versions @ 03/13/26 10:13:08.324
I0313 10:13:08.324777  120433 util.go:60] Getting web resource: https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph?channel=OCP-88175&version=4.22.0-0.nightly-2026-03-11-034211&arch=amd64
  Getting web resource: https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph?channel=OCP-88175&version=4.22.0-0.nightly-2026-03-11-034211&arch=amd64
    STEP: check default output of `oc adm upgrade` @ 03/13/26 10:13:09.574
I0313 10:13:09.575066  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
    STEP: check output of `oc adm upgrade --include-not-recommended` @ 03/13/26 10:13:10.758
I0313 10:13:10.758886  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade --include-not-recommended'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade --include-not-recommended'
    STEP: upgrade to a non recommend version @ 03/13/26 10:13:11.872
I0313 10:13:11.873049  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade --to 4.22.0-ec.1'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade --to 4.22.0-ec.1'
E0313 10:13:13.061516  120433 util.go:72] Command failed with error: exit status 1
Output: error: the update 4.22.0-ec.1 is not one of the recommended updates, but is available as a conditional update. To accept the Recommended=False risk and to proceed with update use --allow-not-recommended.
  Reason: MultipleReasons
  Message: This is SomeInvokerThing that always applies for testing purposes https://docs.openshift.com/synthetic-risk-a
  
  This is SomeChannelThing that always applies for testing purposes https://docs.openshift.com/synthetic-risk-b
  ERROR: Command failed with error: exit status 1
  Output: error: the update 4.22.0-ec.1 is not one of the recommended updates, but is available as a conditional update. To accept the Recommended=False risk and to proceed with update use --allow-not-recommended.
    Reason: MultipleReasons
    Message: This is SomeInvokerThing that always applies for testing purposes https://docs.openshift.com/synthetic-risk-a
    
    This is SomeChannelThing that always applies for testing purposes https://docs.openshift.com/synthetic-risk-b

    STEP: clear risks when the accept risk list is empty @ 03/13/26 10:13:13.061
I0313 10:13:13.061707  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept --clear'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept --clear'
    STEP: Accept risk SomeInvokerThing @ 03/13/26 10:13:14.184
I0313 10:13:14.184939  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept SomeInvokerThing'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept SomeInvokerThing'
I0313 10:13:15.574431  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'
I0313 10:13:36.708150  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:13:37.825252  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:13:56.708590  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:13:57.921162  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:20:58.121892  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:21:16.707720  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:21:17.937015  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:21:36.707695  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:21:37.939414  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:21:56.708377  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:21:58.071844  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:22:16.708108  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:22:18.123456  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:22:36.707834  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:22:37.911891  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:22:56.708379  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:22:57.835155  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:23:16.707769  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:23:18.007456  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
I0313 10:23:18.007497  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'
I0313 10:23:19.385225  120433 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  non-recommend version 4.22.0-ec.1 is not available. Trying again
E0313 10:23:19.385259  120433 util.go:79] TEST FAILED: the non-recommend version 4.22.0-ec.1 is not available after accepting the risk SomeInvokerThing. 
Output:
Cluster version is 4.22.0-0.nightly-2026-03-11-034211

Upgradeable=False

  Reason: FeatureGates_RestrictedFeatureGates_TechPreviewNoUpgrade
  Message: Cluster operator config-operator should not be upgraded between minor or major versions: FeatureGatesUpgradeable: "TechPreviewNoUpgrade" does not allow updates

Upstream: https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph
Channel: OCP-88175

Recommended updates:

  VERSION     IMAGE
  4.22.0-ec.2 quay.io/openshift-release-dev/ocp-release@sha256:fc88d0bf145c81989a4116bbb0e3d2724d9ab937efb7d217a10e7d7ff3031c50
  4.22.0-ec.0 quay.io/openshift-release-dev/ocp-release@sha256:fa50ad76be25a54235058782d1844a236f64a9a0e9764eedb885ccf1565286c4

Additional updates which are not recommended for your cluster configuration are available, to view those re-run the command with --include-not-recommended.
: timed out waiting
I0313 10:23:19.385387  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept --clear'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept --clear'
I0313 10:23:20.799759  120433 util.go:60] will patching: clusterversion/version with: [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}] in namespace: 
  will patching: clusterversion/version with: [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}] in namespace: 
I0313 10:23:20.799807  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}]'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}]'
I0313 10:23:22.252730  120433 util.go:60] will patching: clusterversion/version with: [{"op":"add","path":"/spec/overrides","value":[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]}] in namespace: 
  will patching: clusterversion/version with: [{"op":"add","path":"/spec/overrides","value":[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]}] in namespace: 
I0313 10:23:22.252773  120433 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/overrides","value":[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]}]'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/overrides","value":[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]}]'
    [FAILED] in [It] - /home/jianl/1_code/oc/test/e2e/util.go:80 @ 03/13/26 10:23:23.638
  • [FAILED] [624.698 seconds]
  [Jira:"oc adm upgrade"] accept [It] Accepted Risks for OCP Cluster Updates [TechPreview, 88175, Slow, Manual]
  /home/jianl/1_code/oc/test/e2e/upgrade.go:30

    [FAILED] the non-recommend version 4.22.0-ec.1 is not available after accepting the risk SomeInvokerThing. 
    Output:
    Cluster version is 4.22.0-0.nightly-2026-03-11-034211

    Upgradeable=False

      Reason: FeatureGates_RestrictedFeatureGates_TechPreviewNoUpgrade
      Message: Cluster operator config-operator should not be upgraded between minor or major versions: FeatureGatesUpgradeable: "TechPreviewNoUpgrade" does not allow updates

    Upstream: https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph
    Channel: OCP-88175

    Recommended updates:

      VERSION     IMAGE
      4.22.0-ec.2 quay.io/openshift-release-dev/ocp-release@sha256:fc88d0bf145c81989a4116bbb0e3d2724d9ab937efb7d217a10e7d7ff3031c50
      4.22.0-ec.0 quay.io/openshift-release-dev/ocp-release@sha256:fa50ad76be25a54235058782d1844a236f64a9a0e9764eedb885ccf1565286c4

    Additional updates which are not recommended for your cluster configuration are available, to view those re-run the command with --include-not-recommended.
    : timed out waiting
    In [It] at: /home/jianl/1_code/oc/test/e2e/util.go:80 @ 03/13/26 10:23:23.638
  ------------------------------

  Summarizing 1 Failure:
    [FAIL] [Jira:"oc adm upgrade"] accept [It] Accepted Risks for OCP Cluster Updates [TechPreview, 88175, Slow, Manual]
    /home/jianl/1_code/oc/test/e2e/util.go:80

  Ran 1 of 1 Specs in 624.698 seconds
  FAIL! -- 0 Passed | 1 Failed | 0 Pending | 0 Skipped
[
  {
    "name": "[Jira:\"oc adm upgrade\"] accept Accepted Risks for OCP Cluster Updates",
    "lifecycle": "blocking",
    "duration": 624699,
    "startTime": "2026-03-13 02:12:58.939630 UTC",
    "endTime": "2026-03-13 02:23:23.638987 UTC",
    "result": "failed",
    "output": "Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -o=jsonpath='{.status.conditions[?(@.type==\"Upgradeable\")].message}''\nUpgradeable: 'Cluster should not be upgraded between minor or major versions for multiple reasons: ClusterVersionOverridesSet,FeatureGates_RestrictedFeatureGates_TechPreviewNoUpgrade\n* Disabling ownership via cluster version overrides prevents upgrades between minor or major versions. Please remove overrides before requesting a minor or major version update.\n* Cluster operator config-operator should not be upgraded between minor or major versions: FeatureGatesUpgradeable: \"TechPreviewNoUpgrade\" does not allow updates'\n\n  STEP: remove overrides from clusterversion version if exists @ 03/13/26 10:13:00.173\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -o jsonpath={.spec.overrides}'\noverrides: \u003c\u003c[{\"group\":\"config.openshift.io\",\"kind\":\"ClusterImagePolicy\",\"name\":\"openshift\",\"namespace\":\"\",\"unmanaged\":true}]\u003e\u003e\nwill patching: clusterversion/version with: [{\"op\":\"remove\",\"path\":\"/spec/overrides\"}] in namespace: \nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{\"op\":\"remove\",\"path\":\"/spec/overrides\"}]'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -o=jsonpath={.status.desired.version}'\n  STEP: checking the help info of `oc adm upgrade --help` @ 03/13/26 10:13:04.117\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade --help'\n  STEP: checking the help info of `oc adm upgrade accept --help` @ 03/13/26 10:13:04.197\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept --help'\n  STEP: patch fauxinnati upstream @ 03/13/26 
10:13:04.274\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -ojsonpath={.spec.upstream}'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -ojsonpath={.spec.channel}'\nwill patching: clusterversion/version with: [{\"op\":\"add\",\"path\":\"/spec/upstream\",\"value\":\"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph\"},{\"op\":\"add\",\"path\":\"/spec/channel\",\"value\":\"OCP-88175\"}] in namespace: \nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{\"op\":\"add\",\"path\":\"/spec/upstream\",\"value\":\"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph\"},{\"op\":\"add\",\"path\":\"/spec/channel\",\"value\":\"OCP-88175\"}]'\n  STEP: check if new upstream has enough target versions @ 03/13/26 10:13:08.324\nGetting web resource: https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph?channel=OCP-88175\u0026version=4.22.0-0.nightly-2026-03-11-034211\u0026arch=amd64\n  STEP: check default output of `oc adm upgrade` @ 03/13/26 10:13:09.574\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\n  STEP: check output of `oc adm upgrade --include-not-recommended` @ 03/13/26 10:13:10.758\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade --include-not-recommended'\n  STEP: upgrade to a non recommend version @ 03/13/26 10:13:11.872\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade --to 4.22.0-ec.1'\nERROR: Command failed with error: exit status 1\nOutput: error: the update 4.22.0-ec.1 is not one of the recommended updates, but is available as a conditional update. 
To accept the Recommended=False risk and to proceed with update use --allow-not-recommended.\n  Reason: MultipleReasons\n  Message: This is SomeInvokerThing that always applies for testing purposes https://docs.openshift.com/synthetic-risk-a\n  \n  This is SomeChannelThing that always applies for testing purposes https://docs.openshift.com/synthetic-risk-b\n\n  STEP: clear risks when the accept risk list is empty @ 03/13/26 10:13:13.061\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept --clear'\n  STEP: Accept risk SomeInvokerThing @ 03/13/26 10:13:14.184\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept SomeInvokerThing'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. 
Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. 
Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. 
Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig adm upgrade accept --clear'\nwill patching: clusterversion/version with: [{\"op\":\"add\",\"path\":\"/spec/upstream\",\"value\":\"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph\"},{\"op\":\"add\",\"path\":\"/spec/channel\",\"value\":\"OCP-88175\"}] in namespace: \nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{\"op\":\"add\",\"path\":\"/spec/upstream\",\"value\":\"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph\"},{\"op\":\"add\",\"path\":\"/spec/channel\",\"value\":\"OCP-88175\"}]'\nwill patching: clusterversion/version with: [{\"op\":\"add\",\"path\":\"/spec/overrides\",\"value\":[{\"group\":\"config.openshift.io\",\"kind\":\"ClusterImagePolicy\",\"name\":\"openshift\",\"namespace\":\"\",\"unmanaged\":true}]}] in namespace: \nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-13-004525.kubeconfig patch clusterversion/version --type=json --patch [{\"op\":\"add\",\"path\":\"/spec/overrides\",\"value\":[{\"group\":\"config.openshift.io\",\"kind\":\"ClusterImagePolicy\",\"name\":\"openshift\",\"namespace\":\"\",\"unmanaged\":true}]}]'\n  [FAILED] in [It] - /home/jianl/1_code/oc/test/e2e/util.go:80 @ 03/13/26 10:23:23.638\n",
    "error": "fail [/home/jianl/1_code/oc/test/e2e/util.go:80]: the non-recommend version 4.22.0-ec.1 is not available after accepting the risk SomeInvokerThing. \nOutput:\nCluster version is 4.22.0-0.nightly-2026-03-11-034211\n\nUpgradeable=False\n\n  Reason: FeatureGates_RestrictedFeatureGates_TechPreviewNoUpgrade\n  Message: Cluster operator config-operator should not be upgraded between minor or major versions: FeatureGatesUpgradeable: \"TechPreviewNoUpgrade\" does not allow updates\n\nUpstream: https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph\nChannel: OCP-88175\n\nRecommended updates:\n\n  VERSION     IMAGE\n  4.22.0-ec.2 quay.io/openshift-release-dev/ocp-release@sha256:fc88d0bf145c81989a4116bbb0e3d2724d9ab937efb7d217a10e7d7ff3031c50\n  4.22.0-ec.0 quay.io/openshift-release-dev/ocp-release@sha256:fa50ad76be25a54235058782d1844a236f64a9a0e9764eedb885ccf1565286c4\n\nAdditional updates which are not recommended for your cluster configuration are available, to view those re-run the command with --include-not-recommended.\n: timed out waiting"
  }
]E0313 10:23:23.639182  120433 run.go:72] "command failed" err="1 tests failed"
[jianl@jianl-thinkpadt14gen4 oc]$ 

@JianLi-RH JianLi-RH force-pushed the e2e_accept_risks_OTA-1543 branch from 61ec941 to 5cdcc42 Compare March 16, 2026 02:13
@JianLi-RH JianLi-RH force-pushed the e2e_accept_risks_OTA-1543 branch from 5cdcc42 to 84622ed Compare March 16, 2026 02:26
@openshift-ci
Contributor

openshift-ci bot commented Mar 16, 2026

@JianLi-RH: all tests passed!

Full PR test history. Your PR dashboard.


Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.

@JianLi-RH
Author

@hongkailiu Test passed.
Here are the test steps:

  1. Create a cluster (cluster-bot log):
$ launch 4.22 aws,techpreview
  2. Run it on a local machine:
$ make build
[jianl@jianl-thinkpadt14gen4 oc]$ ./oc-tests-ext run-test "[Jira:\"oc adm upgrade\"] accept risks exposed to conditional updates Accepted Risks for OCP Cluster Updates"
  Running Suite:  - /home/jianl/1_code/oc
  =======================================
  Random Seed: 1773635842 - will randomize all specs

  Will run 1 of 1 specs
  ------------------------------
  [Jira:"oc adm upgrade"] accept risks exposed to conditional updates Accepted Risks for OCP Cluster Updates [upgrade, accept, TechPreview, 88175, Slow, Manual]
  /home/jianl/1_code/oc/test/e2e/upgrade.go:31
I0316 12:37:22.246419  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -o=jsonpath='{.status.conditions[?(@.type=="Upgradeable")].message}''
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -o=jsonpath='{.status.conditions[?(@.type=="Upgradeable")].message}''
I0316 12:37:23.879724  219345 util.go:60] Upgradeable: 'Cluster operator config-operator should not be upgraded between minor or major versions: FeatureGatesUpgradeable: "TechPreviewNoUpgrade" does not allow updates'
  Upgradeable: 'Cluster operator config-operator should not be upgraded between minor or major versions: FeatureGatesUpgradeable: "TechPreviewNoUpgrade" does not allow updates'

    STEP: remove overrides from clusterversion version if exists @ 03/16/26 12:37:23.879
I0316 12:37:23.879807  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -o jsonpath={.spec.overrides}'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -o jsonpath={.spec.overrides}'
I0316 12:37:25.492273  219345 util.go:60] overrides: <<[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]>>
  overrides: <<[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]>>
I0316 12:37:25.492402  219345 util.go:60] will patching: clusterversion/version with: [{"op":"remove","path":"/spec/overrides"}] in namespace: 
  will patching: clusterversion/version with: [{"op":"remove","path":"/spec/overrides"}] in namespace: 
I0316 12:37:25.492427  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{"op":"remove","path":"/spec/overrides"}]'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{"op":"remove","path":"/spec/overrides"}]'
I0316 12:37:34.453223  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -o=jsonpath={.status.desired.version}'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -o=jsonpath={.status.desired.version}'
    STEP: checking the help info of `oc adm upgrade --help` @ 03/16/26 12:37:35.582
I0316 12:37:35.582262  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --help'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --help'
    STEP: checking the help info of `oc adm upgrade accept --help` @ 03/16/26 12:37:35.666
I0316 12:37:35.666169  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --help'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --help'
    STEP: patch fauxinnati upstream @ 03/16/26 12:37:35.738
I0316 12:37:35.738678  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.upstream}'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.upstream}'
I0316 12:37:36.679304  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.channel}'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.channel}'
I0316 12:37:37.990331  219345 util.go:60] will patching: clusterversion/version with: [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}] in namespace: 
  will patching: clusterversion/version with: [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}] in namespace: 
I0316 12:37:37.990404  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}]'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/upstream","value":"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph"},{"op":"add","path":"/spec/channel","value":"OCP-88175"}]'
    STEP: check if new upstream has enough target versions @ 03/16/26 12:37:39.18
I0316 12:37:39.180494  219345 util.go:60] Getting web resource: https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph?channel=OCP-88175&version=4.22.0-0.nightly-2026-03-15-203841&arch=amd64
  Getting web resource: https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph?channel=OCP-88175&version=4.22.0-0.nightly-2026-03-15-203841&arch=amd64
    STEP: check default output of `oc adm upgrade` @ 03/16/26 12:37:40.313
I0316 12:37:40.313831  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
    STEP: check output of `oc adm upgrade --include-not-recommended` @ 03/16/26 12:37:41.447
I0316 12:37:41.447542  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --include-not-recommended'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --include-not-recommended'
    STEP: upgrade to a non recommend version @ 03/16/26 12:37:43.936
I0316 12:37:43.936457  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --to 4.22.0-ec.1'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --to 4.22.0-ec.1'
E0316 12:37:47.184730  219345 util.go:72] Command failed with error: exit status 1
Output: error: the update 4.22.0-ec.1 is not one of the recommended updates, but is available as a conditional update. To accept the Recommended=False risk and to proceed with update use --allow-not-recommended.
  Reason: SomeInvokerThing
  Message: This is SomeInvokerThing that always applies for testing purposes https://docs.openshift.com/synthetic-risk-a

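The failure above is the expected outcome: `--to` a conditional update without `--allow-not-recommended` must exit non-zero with a pointer to that flag. A hedged sketch of that negative check, with a stand-in command so it runs without a cluster (`expect_failure` is an illustrative helper, not part of the test suite):

```python
import subprocess
import sys

# Sketch of the negative check: the command must exit non-zero and its output
# must mention --allow-not-recommended. Illustrative helper only.
def expect_failure(cmd, needle):
    result = subprocess.run(cmd, capture_output=True, text=True)
    assert result.returncode != 0, "command unexpectedly succeeded"
    return needle in (result.stdout + result.stderr)

# Stand-in command so the sketch runs without a cluster.
FOUND = expect_failure(
    [sys.executable, "-c",
     "import sys; print('use --allow-not-recommended'); sys.exit(1)"],
    "--allow-not-recommended",
)
```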
    STEP: clear risks when the accept risk list is empty @ 03/16/26 12:37:47.184
I0316 12:37:47.184946  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --clear'
    STEP: Accept risk SomeInvokerThing @ 03/16/26 12:37:48.327
I0316 12:37:48.328010  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept SomeInvokerThing'
I0316 12:37:49.849108  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'
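The jsonpath query above prints `spec.desiredUpdate.acceptRisks` so the test can confirm the accepted risk landed in the spec. A sketch of the verification against a fabricated sample payload (no cluster output is available here):

```python
import json

# Illustrative: parse a fabricated acceptRisks payload, as the jsonpath query
# above would return after `oc adm upgrade accept SomeInvokerThing`.
RAW = '["SomeInvokerThing"]'
ACCEPTED = json.loads(RAW)
```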
I0316 12:38:10.983478  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
I0316 12:38:12.469189  219345 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  [... poll repeated every 20s, 8 attempts ...]
I0316 12:40:50.983197  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
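The polling pattern above (run `oc adm upgrade`, check, wait ~20s, retry) can be sketched as a generic poll-until-timeout loop. Names and parameters are illustrative, not the suite's actual helpers; the demo uses a counter instead of a real `oc` call:

```python
import time

# Minimal sketch of the poll loop seen in the log: run `check` every
# `interval` seconds until it succeeds or `timeout` elapses.
def poll_until(check, interval=20.0, timeout=180.0):
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Demo: a counter stands in for parsing `oc adm upgrade` output.
attempts = {"n": 0}
def check():
    attempts["n"] += 1
    return attempts["n"] >= 3

RESULT = poll_until(check, interval=0.01, timeout=1.0)
```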
    STEP: clear risk again @ 03/16/26 12:40:58.056
I0316 12:40:58.056125  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --clear'
I0316 12:41:01.754927  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'
I0316 12:41:25.137967  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
I0316 12:41:28.081127  219345 util.go:60] non-recommend version 4.22.0-ec.1 still exists. Trying again
  [... poll repeated every 20s, 7 attempts ...]
I0316 12:43:45.138339  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
    STEP: Accept risk SomeInvokerThing,SomeChannelThing @ 03/16/26 12:43:49.236
I0316 12:43:49.236181  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept SomeInvokerThing,SomeChannelThing'
I0316 12:43:53.192112  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'
I0316 12:44:14.734435  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
I0316 12:44:15.958374  219345 util.go:60] non-recommend version 4.22.0-ec.1 is not available. Trying again
  [... poll repeated every 20s, 8 attempts ...]
I0316 12:46:54.734204  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
    STEP: Replace accept risks by SomeInfrastructureThing- @ 03/16/26 12:46:56.256
I0316 12:46:56.256786  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --replace SomeInfrastructureThing-'
E0316 12:46:56.332141  219345 util.go:72] Command failed with error: exit status 1
Output: error: The suffix '-' on risks is not allowed if --replace is specified
See 'oc adm upgrade accept -h' for help and examples

    STEP: Replace nothing @ 03/16/26 12:46:56.332
I0316 12:46:56.332267  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --replace'
E0316 12:46:56.392312  219345 util.go:72] Command failed with error: exit status 1
Output: error: no positional arguments given
See 'oc adm upgrade accept -h' for help and examples

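The two rejections above show the argument rules for `oc adm upgrade accept --replace`: a trailing `-` (which normally marks a risk for removal) is disallowed, and at least one positional risk name is required. A sketch of those rules (`parse_accept_args` is illustrative, not the oc implementation):

```python
# Illustrative sketch of the CLI rules seen above: a trailing '-' removes a
# risk, but that suffix is rejected with --replace, and --replace requires at
# least one positional argument.
def parse_accept_args(args, replace=False):
    if replace and not args:
        raise ValueError("no positional arguments given")
    adds, removes = [], []
    for name in args:
        if name.endswith("-"):
            if replace:
                raise ValueError(
                    "The suffix '-' on risks is not allowed if --replace is specified")
            removes.append(name[:-1])
        else:
            adds.append(name)
    return adds, removes
```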
    STEP: Replace accept risks by SomeInfrastructureThing @ 03/16/26 12:46:56.392
I0316 12:46:56.392403  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --replace SomeInfrastructureThing'
I0316 12:46:57.842990  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'
I0316 12:47:18.983842  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
I0316 12:47:20.534260  219345 util.go:60] non-recommend version 4.22.0-ec.1 still exists. Trying again
  [... poll repeated every 20s, 8 attempts ...]
I0316 12:49:58.983668  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'
    STEP: Upgrade to not recommend version @ 03/16/26 12:50:01.69
I0316 12:50:01.691061  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --to 4.22.0-ec.3'
I0316 12:50:33.167206  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  [... poll repeated every 30s ...]
I0316 13:20:33.167225  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:21:03.167221  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:21:33.167232  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:22:03.167381  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:22:33.167169  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:23:03.167487  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:23:33.166784  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:24:03.167202  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:24:33.166834  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:25:03.167176  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:25:33.167351  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:26:03.166830  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:26:33.167431  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:27:03.167609  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:27:33.167527  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:28:03.167306  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:28:33.167373  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:29:03.167453  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
  Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
E0316 13:29:15.598939  219345 util.go:72] Command failed with error: exit status 1
Output: Unable to connect to the server: net/http: TLS handshake timeout

I0316 13:29:33.167228  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:30:03.167299  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:30:33.167762  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:31:03.167837  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:31:33.167568  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:32:03.166894  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:32:33.167573  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:33:03.167440  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:33:33.167144  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:34:03.167189  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:34:33.167291  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:35:03.167232  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:35:33.167330  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:36:03.167326  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:36:33.167323  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:37:03.167356  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:37:33.166997  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:38:03.166865  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:38:33.167304  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:39:03.167163  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:39:33.167155  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:40:03.167109  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:40:33.167197  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:41:03.167536  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:41:33.167284  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:42:03.167241  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:42:33.167304  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:43:03.167193  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:43:33.167187  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:44:03.167120  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:44:33.167324  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:45:03.167362  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:45:33.167415  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:46:03.167701  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:46:33.167250  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:47:03.166955  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:47:33.167706  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:48:03.167499  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:48:33.167575  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:49:03.167506  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:49:33.167600  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:50:33.167591  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:51:03.167208  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:51:33.167078  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:52:03.167189  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:52:33.167260  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:53:03.167304  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:53:33.167195  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:54:03.167133  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
E0316 13:54:14.549347  219345 util.go:72] Command failed with error: exit status 1
Output: Unable to connect to the server: net/http: TLS handshake timeout

I0316 13:54:33.167311  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:55:03.166942  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:55:33.167789  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:56:03.167700  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:56:33.167284  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:57:03.167291  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:57:33.167471  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:58:03.167320  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
E0316 13:58:18.533480  219345 util.go:72] Command failed with error: exit status 1
Output: Unable to connect to the server: net/http: TLS handshake timeout

I0316 13:58:33.166895  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:59:03.167273  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 13:59:33.167490  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 14:00:03.167574  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 14:00:33.167511  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 14:01:03.167612  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 14:01:33.166988  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'
I0316 14:01:34.063004  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --clear'
I0316 14:01:35.277922  219345 util.go:60] will patching: clusterversion/version with: [{"op":"add","path":"/spec/upstream","value":""},{"op":"add","path":"/spec/channel","value":""}] in namespace: 
I0316 14:01:35.277973  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/upstream","value":""},{"op":"add","path":"/spec/channel","value":""}]'
I0316 14:01:36.398273  219345 util.go:60] will patching: clusterversion/version with: [{"op":"add","path":"/spec/overrides","value":[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]}] in namespace: 
I0316 14:01:36.398291  219345 util.go:60] Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{"op":"add","path":"/spec/overrides","value":[{"group":"config.openshift.io","kind":"ClusterImagePolicy","name":"openshift","namespace":"","unmanaged":true}]}]'
  • [5055.289 seconds]
  ------------------------------

  Ran 1 of 1 Specs in 5055.289 seconds
  SUCCESS! -- 1 Passed | 0 Failed | 0 Pending | 0 Skipped
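The log above polls `oc get clusterversion` every 30 seconds and keeps going through the intermittent `TLS handshake timeout` failures. A minimal sketch of that retry pattern (an assumption for illustration; `poll_until` is a hypothetical helper, not the repo's `util.go` implementation) looks like:

```shell
# poll_until TIMEOUT_SECONDS CMD...
# Re-runs CMD every 30 seconds until it succeeds or TIMEOUT_SECONDS elapse,
# logging and tolerating transient failures (e.g. TLS handshake timeouts).
poll_until() {
  deadline=$(( $(date +%s) + $1 ))   # absolute deadline in epoch seconds
  shift
  while :; do
    if "$@"; then
      return 0                       # command succeeded; stop polling
    fi
    echo "ERROR: command failed; will retry" >&2
    if [ "$(date +%s)" -ge "$deadline" ]; then
      return 1                       # deadline exceeded; give up
    fi
    sleep 30                         # same 30s interval as the log above
  done
}
```

Usage would be e.g. `poll_until 3600 oc --kubeconfig="$KUBECONFIG" get clusterversion`; because the failure path only returns after the deadline, a single flaky API call does not fail the test.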
[
  {
    "name": "[Jira:\"oc adm upgrade\"] accept risks exposed to conditional updates Accepted Risks for OCP Cluster Updates",
    "lifecycle": "blocking",
    "duration": 5055289,
    "startTime": "2026-03-16 04:37:22.246090 UTC",
    "endTime": "2026-03-16 06:01:37.535549 UTC",
    "result": "passed",
    "output": "Running 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -o=jsonpath='{.status.conditions[?(@.type==\"Upgradeable\")].message}''\nUpgradeable: 'Cluster operator config-operator should not be upgraded between minor or major versions: FeatureGatesUpgradeable: \"TechPreviewNoUpgrade\" does not allow updates'\n\n  STEP: remove overrides from clusterversion version if exists @ 03/16/26 12:37:23.879\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -o jsonpath={.spec.overrides}'\noverrides: \u003c\u003c[{\"group\":\"config.openshift.io\",\"kind\":\"ClusterImagePolicy\",\"name\":\"openshift\",\"namespace\":\"\",\"unmanaged\":true}]\u003e\u003e\nwill patching: clusterversion/version with: [{\"op\":\"remove\",\"path\":\"/spec/overrides\"}] in namespace: \nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{\"op\":\"remove\",\"path\":\"/spec/overrides\"}]'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -o=jsonpath={.status.desired.version}'\n  STEP: checking the help info of `oc adm upgrade --help` @ 03/16/26 12:37:35.582\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --help'\n  STEP: checking the help info of `oc adm upgrade accept --help` @ 03/16/26 12:37:35.666\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --help'\n  STEP: patch fauxinnati upstream @ 03/16/26 12:37:35.738\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.upstream}'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.channel}'\nwill patching: clusterversion/version with: 
[{\"op\":\"add\",\"path\":\"/spec/upstream\",\"value\":\"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph\"},{\"op\":\"add\",\"path\":\"/spec/channel\",\"value\":\"OCP-88175\"}] in namespace: \nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{\"op\":\"add\",\"path\":\"/spec/upstream\",\"value\":\"https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph\"},{\"op\":\"add\",\"path\":\"/spec/channel\",\"value\":\"OCP-88175\"}]'\n  STEP: check if new upstream has enough target versions @ 03/16/26 12:37:39.18\nGetting web resource: https://fauxinnati-fauxinnati.apps.ota-stage.q2z4.p1.openshiftapps.com/api/upgrades_info/graph?channel=OCP-88175\u0026version=4.22.0-0.nightly-2026-03-15-203841\u0026arch=amd64\n  STEP: check default output of `oc adm upgrade` @ 03/16/26 12:37:40.313\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'\n  STEP: check output of `oc adm upgrade --include-not-recommended` @ 03/16/26 12:37:41.447\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --include-not-recommended'\n  STEP: upgrade to a non recommend version @ 03/16/26 12:37:43.936\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --to 4.22.0-ec.1'\nERROR: Command failed with error: exit status 1\nOutput: error: the update 4.22.0-ec.1 is not one of the recommended updates, but is available as a conditional update. 
To accept the Recommended=False risk and to proceed with update use --allow-not-recommended.\n  Reason: SomeInvokerThing\n  Message: This is SomeInvokerThing that always applies for testing purposes https://docs.openshift.com/synthetic-risk-a\n\n  STEP: clear risks when the accept risk list is empty @ 03/16/26 12:37:47.184\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --clear'\n  STEP: Accept risk SomeInvokerThing @ 03/16/26 12:37:48.327\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept SomeInvokerThing'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\n[... 'oc adm upgrade' polling repeated ...]\n  STEP: clear risk again @ 03/16/26 12:40:58.056\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --clear'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 still exists. Trying again\n[... 'oc adm upgrade' polling repeated ...]\n  STEP: Accept risk SomeInvokerThing,SomeChannelThing @ 03/16/26 12:43:49.236\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept SomeInvokerThing,SomeChannelThing'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 is not available. Trying again\n[... 'oc adm upgrade' polling repeated ...]\n  STEP: Replace accept risks by SomeInfrastructureThing- @ 03/16/26 12:46:56.256\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --replace SomeInfrastructureThing-'\nERROR: Command failed with error: exit status 1\nOutput: error: The suffix '-' on risks is not allowed if --replace is specified\nSee 'oc adm upgrade accept -h' for help and examples\n\n  STEP: Replace nothing @ 03/16/26 12:46:56.332\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --replace'\nERROR: Command failed with error: exit status 1\nOutput: error: no positional arguments given\nSee 'oc adm upgrade accept -h' for help and examples\n\n  STEP: Replace accept risks by SomeInfrastructureThing @ 03/16/26 12:46:56.392\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --replace SomeInfrastructureThing'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion version -ojsonpath={.spec.desiredUpdate.acceptRisks}'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade'\nnon-recommend version 4.22.0-ec.1 still exists. Trying again\n[... 'oc adm upgrade' polling repeated ...]\n  STEP: Upgrade to not recommend version @ 03/16/26 12:50:01.69\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade --to 4.22.0-ec.3'\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig get clusterversion'\n[... 'oc get clusterversion' polling repeated throughout the update; transient 'ERROR: Command failed with error: exit status 1 / Output: Unable to connect to the server: net/http: TLS handshake timeout' errors occurred and polling resumed ...]\nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig adm upgrade accept --clear'\nwill patching: clusterversion/version with: [{\"op\":\"add\",\"path\":\"/spec/upstream\",\"value\":\"\"},{\"op\":\"add\",\"path\":\"/spec/channel\",\"value\":\"\"}] in namespace: \nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{\"op\":\"add\",\"path\":\"/spec/upstream\",\"value\":\"\"},{\"op\":\"add\",\"path\":\"/spec/channel\",\"value\":\"\"}]'\nwill patching: clusterversion/version with: [{\"op\":\"add\",\"path\":\"/spec/overrides\",\"value\":[{\"group\":\"config.openshift.io\",\"kind\":\"ClusterImagePolicy\",\"name\":\"openshift\",\"namespace\":\"\",\"unmanaged\":true}]}] in namespace: \nRunning 'oc --kubeconfig=/home/jianl/422/cluster-bot-2026-03-16-030418.kubeconfig patch clusterversion/version --type=json --patch [{\"op\":\"add\",\"path\":\"/spec/overrides\",\"value\":[{\"group\":\"config.openshift.io\",\"kind\":\"ClusterImagePolicy\",\"name\":\"openshift\",\"namespace\":\"\",\"unmanaged\":true}]}]'\n"
  }
[jianl@jianl-thinkpadt14gen4 oc]$ 
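
The repeated "Trying again" and `get clusterversion` lines in the log above come from a poll-until-condition loop with a deadline, which also swallows transient API-server errors (the `TLS handshake timeout` lines) during the update. A minimal sketch of that pattern in Python — `poll_until` and `version_available` are hypothetical helpers for illustration, not the PR's actual Go test code:

```python
import subprocess
import time


def poll_until(check, timeout_s=300.0, interval_s=20.0):
    """Run check() until it returns True or timeout_s elapses.

    Mirrors the log's pattern: run a command, inspect its output,
    log "Trying again", sleep, and retry until the deadline.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)


def version_available(version, kubeconfig):
    """Check whether a version appears in 'oc adm upgrade' output.

    Transient failures (e.g. TLS handshake timeouts while the API
    server restarts mid-upgrade) count as "not yet", not as fatal
    errors, so polling simply continues.
    """
    try:
        out = subprocess.run(
            ["oc", f"--kubeconfig={kubeconfig}", "adm", "upgrade"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (subprocess.CalledProcessError, OSError):
        return False
    return version in out
```

With this shape, the "still exists" case in the log is the same loop with the condition negated (poll until the version disappears from the output).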
