[WIP] MG-108: evals for plan_mustgather #162
Conversation
@swghosh: This pull request references MG-108 which is a valid jira issue. Warning: the referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "4.22.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Skipping CI for Draft Pull Request.
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: swghosh. The full list of commands accepted by this bot can be found here. Needs approval from an approver in each of these files; approvers can indicate their approval by writing
📝 Walkthrough

This pull request introduces OpenShift evaluation tasks for must-gather operations and updates the evaluation configuration. Seven new task manifests are added to test various must-gather configurations and constraints, with corresponding eval.yaml configuration to register the new task set.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes

🚥 Pre-merge checks: ✅ 5 passed
Co-authored-by: Claude Opus 4.6 <noreply@anthropic.com> Signed-off-by: Swarup Ghosh <swghosh@redhat.com>
@swghosh: This pull request references MG-108 which is a valid jira issue. Warning: the referenced jira issue has an invalid target version for the target branch this PR targets: expected the story to target the "5.0.0" version, but no target version was set.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the openshift-eng/jira-lifecycle-plugin repository.
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents

```
Verify each finding against the current code and only fix it if needed.

Inline comments:

In `@evals/tasks/openshift/plan-mustgather-custom-images/plan-mustgather-custom-images.yaml`:
- Around lines 27-28: The verification only checks for "cluster-logging-rhel9-operator" and misses "ose-must-gather:v4.15". Update the pod inspection commands that use `kubectl get pod "$POD" -n "$NS" -o yaml | grep -q "cluster-logging-rhel9-operator"` (and the duplicate at the later occurrence) to also verify the presence of the ose-must-gather:v4.15 image: either add a second grep check for "ose-must-gather:v4.15" after the existing command, or replace the single grep with a combined check/regex that ensures both "cluster-logging-rhel9-operator:latest" and "ose-must-gather:v4.15" are present.

In `@evals/tasks/openshift/plan-mustgather-node-selector/plan-mustgather-node-selector.yaml`:
- Around lines 20-26: The current extraction may store multiple pod JSON objects into POD_JSON, concatenating them; later pipelines into jq for NS and POD then cause parse errors. Change the jq pipeline that sets POD_JSON to produce a single JSON array (use `jq -s` or wrap the `items[]` selection into an array), or restrict the selection to the first matching item so that the subsequent jq calls reading POD_JSON (used to set NS and POD) receive valid JSON. Update the POD_JSON assignment and/or the NS and POD extraction logic to use `jq -s` or array indexing to safely handle multiple matches.
```
ℹ️ Review info

⚙️ Run configuration
- Configuration used: .coderabbit.yaml
- Review profile: CHILL
- Plan: Pro Plus
- Run ID: 70354532-44cb-4719-910d-f7f17cd3eed9

📒 Files selected for processing (8)
- evals/claude-code/eval.yaml
- evals/tasks/openshift/plan-mustgather-audit-logs/plan-mustgather-audit-logs.yaml
- evals/tasks/openshift/plan-mustgather-custom-images/plan-mustgather-custom-images.yaml
- evals/tasks/openshift/plan-mustgather-custom-namespace/plan-mustgather-custom-namespace.yaml
- evals/tasks/openshift/plan-mustgather-default/plan-mustgather-default.yaml
- evals/tasks/openshift/plan-mustgather-host-network/plan-mustgather-host-network.yaml
- evals/tasks/openshift/plan-mustgather-node-selector/plan-mustgather-node-selector.yaml
- evals/tasks/openshift/plan-mustgather-timeout-since/plan-mustgather-timeout-since.yaml
```shell
# Verify pod uses the logging operator image
kubectl get pod "$POD" -n "$NS" -o yaml | grep -q "cluster-logging-rhel9-operator"
```
Verification only checks one of the two required images.

The prompt specifies that two images should be configured (ose-must-gather:v4.15 and cluster-logging-rhel9-operator:latest), but the verification only checks for the logging operator image. This could lead to false positives if only one image is configured.

Consider verifying both images:

```diff
 # Verify pod uses the logging operator image
-kubectl get pod "$POD" -n "$NS" -o yaml | grep -q "cluster-logging-rhel9-operator"
+POD_YAML=$(kubectl get pod "$POD" -n "$NS" -o yaml)
+# Verify pod uses both the platform and logging operator images
+echo "$POD_YAML" | grep -q "ose-must-gather"
+echo "$POD_YAML" | grep -q "cluster-logging-rhel9-operator"
```

Also applies to: lines 36-37.
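A grep over the whole pod YAML can match an image name appearing anywhere in the manifest (annotations, labels). A stricter variant is to extract the exact container image list with jq and check each required image. The sketch below runs against a hypothetical pod JSON standing in for `kubectl get pod "$POD" -n "$NS" -o json`; only the two image tags come from the review comment, the registries and container names are made up for illustration:

```shell
#!/bin/sh
# Hypothetical pod JSON; in a real check this would come from:
#   kubectl get pod "$POD" -n "$NS" -o json
POD_JSON='{"spec":{"containers":[
  {"name":"gather","image":"registry.redhat.io/openshift4/ose-must-gather:v4.15"},
  {"name":"logging","image":"registry.redhat.io/openshift-logging/cluster-logging-rhel9-operator:latest"}]}}'

# Pull the exact image list once, then check each required image
# with a fixed-string grep so regex metacharacters cannot widen the match.
IMAGES=$(echo "$POD_JSON" | jq -r '.spec.containers[].image')
for img in "ose-must-gather:v4.15" "cluster-logging-rhel9-operator:latest"; do
  echo "$IMAGES" | grep -qF "$img" || { echo "missing image: $img"; exit 1; }
done
echo "both required images present"
```

Checking `.spec.containers[].image` rather than the raw YAML keeps the assertion tied to what the pod actually runs.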
```shell
POD_JSON=$(kubectl get pods -A -o json | jq -r '.items[] | select(.metadata.namespace | startswith("openshift-must-gather"))')
if [ -z "$POD_JSON" ]; then
  echo "No must-gather pod found"
  exit 1
fi
NS=$(echo "$POD_JSON" | jq -r '.metadata.namespace' | head -1)
POD=$(echo "$POD_JSON" | jq -r '.metadata.name' | head -1)
```
Potential JSON parsing failure when multiple pods match.

When multiple pods match the namespace prefix filter, `jq -r '.items[] | select(...)'` outputs multiple JSON objects concatenated (not as an array). Piping $POD_JSON back into jq on lines 25-26 can then fail, because the combined text is not a single valid JSON document when there are multiple objects.

Consider using `jq -s` (slurp) to handle multiple objects, or filter to an array from the start:

```diff
-POD_JSON=$(kubectl get pods -A -o json | jq -r '.items[] | select(.metadata.namespace | startswith("openshift-must-gather"))')
+POD_JSON=$(kubectl get pods -A -o json | jq '[.items[] | select(.metadata.namespace | startswith("openshift-must-gather"))] | first')
 if [ -z "$POD_JSON" ]; then
   echo "No must-gather pod found"
   exit 1
 fi
-NS=$(echo "$POD_JSON" | jq -r '.metadata.namespace' | head -1)
-POD=$(echo "$POD_JSON" | jq -r '.metadata.name' | head -1)
+NS=$(echo "$POD_JSON" | jq -r '.metadata.namespace')
+POD=$(echo "$POD_JSON" | jq -r '.metadata.name')
```

Note: this same pattern appears in all other task files in this PR.
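The multi-object shape and the slurp fix can be reproduced without a cluster. This minimal sketch (pod names and namespaces are made up) feeds two concatenated JSON objects, the shape that `.items[] | select(...)` emits when several pods match, through `jq -s` to get back a single well-formed value:

```shell
#!/bin/sh
# Two concatenated JSON objects: what `.items[] | select(...)` emits when
# more than one pod matches. (Names and namespaces here are illustrative.)
MULTI='{"metadata":{"namespace":"openshift-must-gather-abc","name":"pod-a"}}
{"metadata":{"namespace":"openshift-must-gather-def","name":"pod-b"}}'

# Slurp (-s) collects the stream into one array, so indexing yields a single
# valid JSON object that downstream jq calls can parse safely.
FIRST=$(echo "$MULTI" | jq -s '.[0]')
NS=$(echo "$FIRST" | jq -r '.metadata.namespace')
POD=$(echo "$FIRST" | jq -r '.metadata.name')
echo "$NS/$POD"   # prints openshift-must-gather-abc/pod-a
```

Wrapping the selection as `jq '[...] | first'`, as in the proposed fix, achieves the same in one step without a second jq invocation.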
@swghosh: all tests passed! Full PR test history. Your PR dashboard.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
The openshift toolset contains a plan_mustgather ServerPrompt; this PR adds evals for the various arguments that the prompt accepts.