Conversation
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration — Configuration used: defaults · Review profile: CHILL · Plan: Pro
📒 Files selected for processing (1)
🚧 Files skipped from review as they are similar to previous changes (1)

📝 Walkthrough: Normalize LLM-critic guardrail errors
🎯 Estimated code review effort: 2 (Simple) | ⏱️ ~10 minutes

🚥 Pre-merge checks | ✅ 1 | ❌ 2
❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
Actionable comments posted: 1
🧹 Nitpick comments (1)
backend/app/tests/test_llm_validators.py (1)
`103-118`: Cover the validator-name gate in `_validate_with_guard`. These tests validate `_normalize_llm_critic_error` directly, but they don't assert the route gate at `backend/app/api/routes/guardrails.py` line 222. Please add one test where a non-`guardrails/llm_critic` validator emits a message containing "failed the following metrics" and verify it is returned unchanged.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@backend/app/tests/test_llm_validators.py` around lines 103-118, add a test that exercises the validator-name gate in `_validate_with_guard` by simulating a non-`guardrails/llm_critic` validator emitting a message containing "failed the following metrics" and asserting the message is returned unchanged; specifically, call `_validate_with_guard` (the function in `backend/app/api/routes/guardrails.py`) or the public wrapper used in tests with `validator_name` set to something like "some/other_validator" and the raw message "The response failed the following metrics: ['quality'].", then assert the returned/propagated message equals the original raw string to ensure the `guardrails/llm_critic` special case is not applied.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@backend/app/api/routes/guardrails.py`:
- Around line 294-303: The function _normalize_llm_critic_error currently
returns the raw upstream message for unknown cases; change it to return a safe
generic fallback instead. Keep the existing exact-match checks for "failed the
following metrics" and "missing or has invalid evaluations", but replace the
final "return message" with a user-facing generic string such as "The response
could not be validated by our safety checks; please retry or contact support."
to avoid exposing upstream/internal phrasing from guardrails/llm_critic.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 29da6d22-f4b0-466e-931b-aa512d93e5f3
📒 Files selected for processing (2)
backend/app/api/routes/guardrails.py
backend/app/tests/test_llm_validators.py
            "The critic model returned an incomplete or malformed response. "
            "Please retry."
        )
    return message
Tested this out and I am still getting the same error.
Summary
Target issue is #76
Explain the motivation for making this change. What existing problem does the pull request solve?
When the LLM critic validator failed, two different error strings surfaced directly in the API response:
"The response failed the following metrics: ['query']." — returned when the LLM scored a metric below the configured threshold (normal failure path)
"The response is missing or has invalid evaluations for the following metrics: ['query']." — returned when the LLM's response was malformed, missing a metric key, or returned an out-of-range score (including 0, which LLMCritic treats as invalid rather than a failing score)
These messages are internal implementation details of the third-party library and are not meaningful to end users.
Solution
Added _normalize_llm_critic_error to guardrails.py which intercepts these two known strings and replaces them with clear user-facing messages. The function is only invoked when the failing validator is guardrails/llm_critic (checked via log.validator_name), so no other validators are affected. All other error messages pass through unchanged.
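The described flow can be sketched roughly as follows; `guardrails/llm_critic` and `log.validator_name` come from the PR text, while the `ValidatorLog` dataclass and the replacement message strings are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class ValidatorLog:
    validator_name: str
    error_message: str


def _normalize_llm_critic_error(message: str) -> str:
    # Illustrative mappings; the real strings live in guardrails.py.
    if "failed the following metrics" in message:
        return "The response did not meet the configured quality thresholds. Please retry."
    if "missing or has invalid evaluations" in message:
        return (
            "The critic model returned an incomplete or malformed response. "
            "Please retry."
        )
    return message


def extract_user_error(log: ValidatorLog) -> str:
    # Normalization applies only to the LLM critic; every other
    # validator's message passes through unchanged.
    if log.validator_name == "guardrails/llm_critic":
        return _normalize_llm_critic_error(log.error_message)
    return log.error_message
```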
Changes

- backend/app/api/routes/guardrails.py — gate error extraction on log.validator_name for LLM critic logs; add _normalize_llm_critic_error helper at the bottom of the file.
- backend/app/tests/test_llm_validators.py — 3 new tests covering both error mappings and the pass-through case for unrelated validators.

Checklist
Before submitting a pull request, please ensure that you mark these tasks.

- Run fastapi run --reload app/main.py or docker compose up in the repository root and test.

Notes
Please add here if any other information is required for the reviewer.
Summary by CodeRabbit
Bug Fixes
Tests