[MPT-19878] Enabled skipped catalog e2e tests #273
Conversation
📝 Walkthrough

This pull request enables previously skipped end-to-end tests across multiple catalog test modules by removing `@pytest.mark.skip` decorators.

Changes
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes

Possibly related PRs
🚥 Pre-merge checks | ✅ Passed checks (3 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
🧹 Nitpick comments (2)
tests/e2e/catalog/product/test_sync_product.py (1)
35-38: Consider adding an assertion to verify publish succeeded. Same as the async product test: this test lacks assertions after `publish()`, unlike the item and document tests in this PR that verify `result.status == "Published"`. Adding assertions would ensure the test fails on actual publish errors rather than silently passing.

♻️ Suggested improvement

```diff
 def test_product_review_and_publish(mpt_vendor, mpt_ops, created_product):
-    mpt_vendor.catalog.products.review(created_product.id)
+    product = mpt_vendor.catalog.products.review(created_product.id)
+    assert product.status == "Pending"

-    mpt_ops.catalog.products.publish(created_product.id)  # act
+    result = mpt_ops.catalog.products.publish(created_product.id)
+
+    assert result.status == "Published"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/catalog/product/test_sync_product.py` around lines 35 - 38: The test_product_review_and_publish test calls mpt_vendor.catalog.products.review(created_product.id) and then mpt_ops.catalog.products.publish(created_product.id) but doesn't assert the publish outcome; modify the test to capture the publish return (e.g., assign the result of mpt_ops.catalog.products.publish(created_product.id) to a variable) and add an assertion that result.status == "Published" (or the appropriate success status) and optionally assert the returned object's id matches created_product.id to ensure the publish actually succeeded.

tests/e2e/catalog/product/test_async_product.py (1)
39-43: Consider adding an assertion to verify publish succeeded. Unlike the item and document tests in this PR, `test_product_review_and_publish` does not assert on the returned status after `publish()`. For consistency and to catch publish failures, consider capturing and verifying the result:

♻️ Suggested improvement

```diff
 async def test_product_review_and_publish(async_mpt_vendor, async_mpt_ops, async_created_product):
-    await async_mpt_vendor.catalog.products.review(async_created_product.id)
+    product = await async_mpt_vendor.catalog.products.review(async_created_product.id)
+    assert product.status == "Pending"

-    await async_mpt_ops.catalog.products.publish(async_created_product.id)  # act
+    result = await async_mpt_ops.catalog.products.publish(async_created_product.id)
+
+    assert result.status == "Published"
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/catalog/product/test_async_product.py` around lines 39 - 43, The test_product_review_and_publish test calls review() and then awaits async_mpt_ops.catalog.products.publish(async_created_product.id) but doesn't assert the publish result; capture the result of publish (e.g., result = await async_mpt_ops.catalog.products.publish(async_created_product.id)) and add an assertion that verifies the operation succeeded—either by asserting a returned status field equals the expected value (for example "published") or by fetching the product after publish and asserting its state is published; reference the test function test_product_review_and_publish, the methods review() and publish(), and the async_created_product.id when locating where to add the capture-and-assert.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Nitpick comments:
In `@tests/e2e/catalog/product/test_async_product.py`:
- Around line 39-43: The test_product_review_and_publish test calls review() and
then awaits async_mpt_ops.catalog.products.publish(async_created_product.id) but
doesn't assert the publish result; capture the result of publish (e.g., result =
await async_mpt_ops.catalog.products.publish(async_created_product.id)) and add
an assertion that verifies the operation succeeded—either by asserting a
returned status field equals the expected value (for example "published") or by
fetching the product after publish and asserting its state is published;
reference the test function test_product_review_and_publish, the methods
review() and publish(), and the async_created_product.id when locating where to
add the capture-and-assert.
In `@tests/e2e/catalog/product/test_sync_product.py`:
- Around line 35-38: The test_product_review_and_publish test calls
mpt_vendor.catalog.products.review(created_product.id) and then
mpt_ops.catalog.products.publish(created_product.id) but doesn't assert the
publish outcome; modify the test to capture the publish return (e.g., assign the
result of mpt_ops.catalog.products.publish(created_product.id) to a variable)
and add an assertion that result.status == "Published" (or the appropriate
success status) and optionally assert the returned object's id matches
created_product.id to ensure the publish actually succeeded.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: f392ff0c-efa4-49c5-8846-ad5701ea3a91
📒 Files selected for processing (8)
- tests/e2e/catalog/items/test_async_item.py
- tests/e2e/catalog/items/test_sync_item.py
- tests/e2e/catalog/product/documents/test_async_document.py
- tests/e2e/catalog/product/documents/test_sync_document.py
- tests/e2e/catalog/product/test_async_product.py
- tests/e2e/catalog/product/test_sync_product.py
- tests/e2e/catalog/units_of_measure/test_async_units_of_measure.py
- tests/e2e/catalog/units_of_measure/test_sync_units_of_measure.py
💤 Files with no reviewable changes (2)
- tests/e2e/catalog/units_of_measure/test_async_units_of_measure.py
- tests/e2e/catalog/units_of_measure/test_sync_units_of_measure.py



This pull request primarily removes `@pytest.mark.skip` decorators from several end-to-end test cases, enabling tests that were previously skipped due to concerns about leaving test data in the system or features not being implemented. Additionally, there are minor refactorings to improve result variable naming and test assertions, as well as a change to apply `pytest.mark.flaky` globally in some test modules rather than per test. These changes aim to expand test coverage and improve test maintainability.

Test enablement and coverage:

- Removed `@pytest.mark.skip` from review, publish, and unpublish tests for items, documents, and products, activating tests that were previously skipped due to side effects or incomplete implementation. [1] [2] [3] [4] [5] [6] [7] [8]

Test refactoring and consistency:
Test reliability annotation:
- Applied `pytest.mark.flaky` at the module level in product test files, removing redundant per-test decorators for cleaner code. [1] [2] [3] [4]

Overall, these changes will lead to more comprehensive and maintainable test suites by enabling important tests and improving code clarity.
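Moving the flaky marker to module level uses pytest's `pytestmark` convention, which applies a mark to every test in the file. A minimal sketch of a test module using it follows; the test names are hypothetical placeholders, and whichever plugin provides the `flaky` mark in this repository (and any arguments it takes) may differ:

```python
import pytest

# Module-level pytestmark applies the mark to every test in this file,
# replacing a @pytest.mark.flaky decorator repeated on each function.
pytestmark = pytest.mark.flaky


def test_product_create():
    # Hypothetical placeholder test body.
    assert True


def test_product_update():
    # Hypothetical placeholder test body.
    assert True
```

This keeps the retry behavior in one place, so adding a new test to the module picks up the flaky handling automatically.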
Closes MPT-19878
Release Notes
- Removed `@pytest.mark.skip` decorators
- Moved `pytest.mark.flaky` decorators to module level in product test files (`test_async_product.py` and `test_sync_product.py`), eliminating per-test redundant flaky markers
- Enabled `test_review_and_publish_item` for both sync and async item tests
- Enabled `test_review_and_publish_document` for both sync and async document tests
- Enabled `test_product_review_and_publish` for both sync and async product tests
- Enabled `test_create` for units of measure tests (sync and async)