41526 Storage [Backend] Create a file service #53
EBirkenfeld wants to merge 84 commits into master from
Conversation
…ion API
- Add FileAttachment.access_type field with account/restricted options
- Add FileAttachment.file_id field for unique file identification
- Create FileAttachmentPermission model for user-specific file access
- Implement AttachmentService.check_user_permission method
- Add AttachmentsViewSet with check-permission endpoint
- Add Redis caching for public template authentication
- Add response_forbidden method to BaseResponseMixin
- Include comprehensive test coverage for permission checking

This enables fine-grained file access control with two access types:
- account: accessible by all users in the same account
- restricted: accessible only by users with explicit permissions
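The two access types described in this commit can be sketched as a minimal, framework-free permission check. The FileAttachment fields and check_user_permission signature below are illustrative stand-ins for the Django implementation, not code from the PR:

```python
from dataclasses import dataclass, field

@dataclass
class FileAttachment:
    # illustrative stand-ins for the model fields named in the commit
    file_id: str
    account_id: int
    access_type: str                        # 'account' or 'restricted'
    permitted_user_ids: set = field(default_factory=set)

def check_user_permission(attachment, user_id, user_account_id):
    """Return True if the user may access the file."""
    if attachment.access_type == "account":
        # accessible by all users in the same account
        return user_account_id == attachment.account_id
    if attachment.access_type == "restricted":
        # accessible only with an explicit permission row
        return user_id in attachment.permitted_user_ids
    return False
```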
…integration
- Add FastAPI-based file upload and download endpoints with streaming support
- Implement Clean Architecture with domain entities, use cases, and repositories
- Add authentication middleware with JWT token validation and Redis caching
- Integrate Google Cloud Storage S3-compatible API for file storage
- Add comprehensive error handling with custom exceptions and HTTP status codes
- Implement file access permissions validation through external HTTP service
- Add database models and Alembic migrations for file metadata storage
- Include Docker containerization with docker-compose for local development
- Add comprehensive test suite with unit, integration, and e2e tests
- Configure pre-commit hooks with ruff, mypy, and pytest for code quality
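The streaming-support idea can be illustrated with a tiny chunk generator; in a real FastAPI service something like StreamingResponse would wrap such a generator. This is a hedged sketch of the technique, not code from the PR:

```python
import io

CHUNK_SIZE = 4  # tiny for the demo; a real service would use e.g. 64 KiB

def iter_chunks(fileobj, chunk_size=CHUNK_SIZE):
    """Yield the file in fixed-size chunks so large files never sit fully in memory."""
    while True:
        chunk = fileobj.read(chunk_size)
        if not chunk:
            break
        yield chunk

data = b"0123456789"
chunks = list(iter_chunks(io.BytesIO(data)))
assert b"".join(chunks) == data                    # nothing is lost
assert all(len(c) <= CHUNK_SIZE for c in chunks)   # bounded memory per chunk
```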
…e_access_rights' into 41526__сreate_file_service
# Conflicts:
# backend/src/processes/enums.py
# backend/src/processes/models/__init__.py
# backend/src/processes/models/workflows/attachment.py
# backend/src/processes/services/attachments.py
# backend/src/processes/tests/test_services/test_attachments.py
… and implement caching for template data
….toml to dedicated config files
- Move mypy configuration from pyproject.toml to mypy.ini for better separation of concerns
- Simplify ruff.toml configuration by removing extensive rule selections and using the "ALL" selector
- Update ruff target version from py37 to py311 to match the project Python version
- Remove redundant ruff configuration from pyproject.toml to avoid duplication
- Apply code formatting fixes across the entire codebase
- Standardize import statements and code style according to the new linting rules
- Update test files to comply with the new formatting standards
…ore rule in ruff configuration
- Update docstrings across various modules to ensure consistency and clarity.
- Remove the unused "D" rule from the ruff.toml configuration.
- Enhance readability and maintainability of the codebase.
…ling for consistency
- Adjust import paths in test files to ensure they reference the correct locations.
- Replace instances of FileNotFoundError with DomainFileNotFoundError for better clarity in exception handling.
- Streamline fixture definitions and improve code readability across various test modules.
… configuration
- Update docstrings across various test files for consistency and clarity.
- Add new linting rules in ruff.toml for improved code quality.
- Enhance readability and maintainability of the codebase by refining fixture definitions and mock implementations.
…line permission handling
- Refactor the AuthenticationMiddleware to enhance error handling and response formatting.
- Update permission classes to use direct Request type hints instead of string annotations.
- Consolidate permission checks into FastAPI dependency wrappers for better clarity and usability.
- Remove unused exception classes and error messages to clean up the codebase.
- Adjust test cases to reflect changes in authentication and permission handling.
…exception tests
- Remove unused infrastructure error codes from error_codes.py to streamline the codebase.
- Update the AuthenticationMiddleware constructor to use direct FastAPI type hints for clarity.
- Add new tests for validation exceptions, including file size and storage errors, to improve coverage and ensure accurate error handling.
…achment model and create FileAttachmentPermission model
…h SeaweedFS integration in Docker configuration
…02:8002 to 8002:8000 in Docker configurations
…ame and clean up GCS S3 environment variable assignments
…l-storage' in Docker Compose files
… to Docker Compose files
const requestBaseUrlsMap: { [key in TRequestType]: string } = {
  public: envBackendURL || publicUrl,
  local: '',
  fileService: fileServiceUrl,
🟡 Medium
api/commonRequest.ts:106 If fileServiceUrl is missing from config, mergePaths will crash accessing .length on undefined. Consider adding a fallback (e.g., fileServiceUrl || '') or validating before use.
- fileService: fileServiceUrl,
+ fileService: fileServiceUrl || '',
Evidence trail:
Viewed frontend/src/public/api/commonRequest.ts:98-114 at 93557ff. Viewed frontend/src/public/utils/urls.ts:1-12 at 93557ff.
Cursor Bugbot has reviewed your changes and found 2 potential issues.
const compressOptions = {
  useWebWorker: true,
  maxWidthOrHeight: MAX_THUMBNAIL_WIDTH_OR_HEIGHT,
};

return imageCompression(file, compressOptions);
utils/uploadFilesNew.ts:89
If imageCompression throws (e.g., corrupted image), the error propagates and bypasses the thumbnail || file fallback in uploadUserAvatar. Consider wrapping the call in try-catch and returning null on failure.
No longer relevant as of b9f359a
const formData = new FormData();
formData.append('file', file, filename);

const data = await commonRequest<IFileServiceUploadResponse>(
🟠 High
api/fileServiceUpload.ts:27 If fileServiceUrl is missing from the runtime config, requests will go to undefined/.... Consider adding a guard to validate that fileServiceUrl is defined before making the request, or document that this config property is required.
Evidence trail:
frontend/src/public/api/fileServiceUpload.ts (lines 26-35): `commonRequest` called with `fileServiceUpload` URL and type `'fileService'`
frontend/src/public/api/commonRequest.ts (lines 106-117): `fileServiceUrl` extracted from config, mapped as base URL for 'fileService' type, then used in `mergePaths(requestBaseUrlsMap[type], url)`
frontend/src/public/utils/getConfig.ts (lines 71-74): `getBrowserConfigEnv()` returns `{}` if config missing, causing `fileServiceUrl` to be undefined at runtime
…aticapp/pneumaticworkflow into 41526__create_file_service
  file,
  filename,
}: IFileServiceUploadRequest): Promise<IFileServiceUploadResponse> {
  const {
🟢 Low api/fileServiceUpload.ts:18
Deep destructuring api.urls.fileServiceUpload will throw if getBrowserConfigEnv() returns {}. Consider adding a guard or using optional chaining before destructuring.
Evidence trail:
1. frontend/src/public/api/fileServiceUpload.ts lines 18-21 (REVIEWED_COMMIT): Deep destructuring `const { api: { urls: { fileServiceUpload } } } = getBrowserConfigEnv();`
2. frontend/src/public/utils/getConfig.ts lines 71-74 (REVIEWED_COMMIT): `getBrowserConfigEnv()` returns `(browserConfig && browserConfig.config) || {}` which can be an empty object `{}`
3. When `{}` is returned, `{}.api` is `undefined`, and accessing `.urls` on `undefined` throws `TypeError: Cannot read property 'urls' of undefined`
… redundant FileServiceConnectionException imports
…ser photo upload logic
…essary calls and add tests for logo update scenarios
    api_name=self._predicate.field,
)
- self.field_value = field.attachments.exists() or None
+ self.field_value = field.storage_attachments.exists() or None
Condition resolver breaks for pre-existing file field data
High Severity
The FileResolver changed from field.attachments.exists() to field.storage_attachments.exists(), and TaskFieldQuerySet.with_attachments similarly changed. The old relation points to FileAttachment records, while the new one points to the new Attachment model from src.storage.models. For any existing workflow where file fields were populated before this change, FileAttachment records exist but corresponding Attachment records may not. This causes file-based conditions to evaluate as if the field is empty, potentially routing workflows down incorrect branches.
Additional Locations (1)
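One migration-safe option is a resolver that treats the field as populated if either relation has rows during the transition window. The helper below is a hypothetical sketch with fake stand-ins for the Django related managers, not the PR's code:

```python
class FakeRelation:
    """Stand-in for a Django related manager's .exists()."""
    def __init__(self, row_count):
        self.row_count = row_count
    def exists(self):
        return self.row_count > 0

class FakeField:
    def __init__(self, legacy_rows, storage_rows):
        self.attachments = FakeRelation(legacy_rows)            # old FileAttachment relation
        self.storage_attachments = FakeRelation(storage_rows)   # new Attachment relation

def file_field_has_value(field):
    """True if either the new or the legacy attachment relation has rows."""
    return field.storage_attachments.exists() or field.attachments.exists()
```

With this shape, pre-existing workflows (legacy rows only) still evaluate as non-empty until a backfill creates the corresponding Attachment records.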
def save(self, *args, **kwargs):
    super().save(*args, **kwargs)
    from src.authentication.services.public_auth import PublicAuthService
    PublicAuthService.invalidate_template_public_tokens(self)
Template save triggers non-transactional cache side effect
Medium Severity
The Template.save() override calls PublicAuthService.invalidate_template_public_tokens() on every save. This performs Redis cache deletion, which is not rolled back if the surrounding database transaction fails. If a template deactivation (is_active=False) is saved within a transaction that subsequently rolls back, the cache entry is still deleted — the template remains active in the DB but public/embed token validation by the file service fails until the cache is repopulated on the next authenticate call.
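In Django the usual remedy is transaction.on_commit, which defers the cache deletion until the transaction commits. The framework-free sketch below models that pattern (a toy transaction collects callbacks and fires them only on commit); it illustrates the principle, not the project's actual code:

```python
class FakeTransaction:
    """Toy model of Django's transaction.on_commit behavior."""
    def __init__(self):
        self._on_commit = []

    def on_commit(self, func):
        self._on_commit.append(func)

    def commit(self):
        for func in self._on_commit:
            func()
        self._on_commit.clear()

    def rollback(self):
        # callbacks are discarded: the cache entry survives a rollback
        self._on_commit.clear()

cache = {"template:42": "public-token"}

def save_template(txn):
    # ... database write would happen here ...
    # defer the Redis-style deletion until the transaction commits
    txn.on_commit(lambda: cache.pop("template:42", None))

# Rollback: the cache entry is NOT deleted.
txn = FakeTransaction()
save_template(txn)
txn.rollback()
assert "template:42" in cache

# Commit: the cache entry is deleted as intended.
txn = FakeTransaction()
save_template(txn)
txn.commit()
assert "template:42" not in cache
```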
    },
    level=SentryLogLevel.ERROR,
)
return super().save(commit=commit)
Admin photo uploads silently fail without user feedback
Medium Severity
In UserAdminChangeForm.save(), ContactAdminForm.save(), and IntegrationCreateForm.save(), when FileServiceException is thrown during photo/logo upload, the error is logged to Sentry but the form saves successfully without the uploaded file. The admin user receives no error indication — the save appears successful while the photo/logo is quietly discarded. Using self.add_error() would surface the failure to the admin user.
Additional Locations (2)
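The add_error suggestion can be sketched without Django; PhotoForm and the injectable upload callable below are hypothetical stand-ins showing the pattern of surfacing the failure instead of swallowing it:

```python
class FileServiceException(Exception):
    """Stand-in for the project's file-service error type."""

class PhotoForm:
    """Minimal form-like object demonstrating the add_error pattern."""
    def __init__(self, photo):
        self.photo = photo
        self.errors = {}

    def add_error(self, field_name, message):
        self.errors.setdefault(field_name, []).append(message)

    def save(self, upload):
        try:
            upload(self.photo)
        except FileServiceException as exc:
            # still log to Sentry as before, but ALSO tell the admin user
            self.add_error("photo", f"Photo upload failed: {exc}")
            return None  # block the silent "success"
        return self.photo
```

The key change versus the reviewed code is that a failed upload now leaves a visible form error rather than a quietly discarded photo.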
if task is not None and not (task.is_active or task.is_delayed):
    raise CommentedTaskNotActive
- if not text and not attachments:
+ if not text:
Comment update omits with_attachments from update kwargs
Low Severity
In CommentService.update(), the with_attachments field was removed from the update kwargs, relying solely on refresh_attachments to set it. However, refresh_attachments is called after partial_update, so if partial_update triggers any side effects (like websocket notifications via serialization) before refresh_attachments runs, consumers could see a stale with_attachments value. The create path explicitly initializes it to False before refresh_attachments, but update leaves the old value intact during the gap.
# Conflicts:
# backend/docker-compose.yml
# backend/src/accounts/services/user.py
# backend/src/accounts/tests/views/test_user/test_update.py
# backend/src/accounts/views/user.py
# backend/src/processes/enums.py
# backend/src/processes/models/workflows/attachment.py
# backend/src/processes/tests/fixtures.py
# backend/src/processes/tests/test_services/test_attachments.py
# backend/src/processes/tests/test_services/test_workflows/test_workflow_version_service.py
# docker-compose.src.yml
# docker-compose.yml
# frontend/docker-compose.yml
STORAGE_PROVIDER: ${STORAGE_PROVIDER:-}
GOOGLE_APPLICATION_CREDENTIALS: /pneumatic_backend/google_api_credentials.json
- SENTRY_DSN: ${SENTRY_DSN:-}
+ SENTRY_DSN: ${SENTRY_DSN:-}}
Extra closing brace breaks YAML environment variable
High Severity
SENTRY_DSN: ${SENTRY_DSN:-}} has a double closing brace }}. When the three lines above (STORAGE, STORAGE_PROVIDER, GOOGLE_APPLICATION_CREDENTIALS) were removed, the remaining SENTRY_DSN line picked up a stray }. This will either break YAML parsing of the docker-compose file or cause the environment variable value to contain a literal }, preventing Sentry from initializing correctly.
if user_groups is not None:
    self.instance.user_groups.set(user_groups)
- self._update_related_user_fields(old_name=old_name)
+ self._update_related_user_fields(old_name=old_name)
Method moved outside transaction breaks atomicity guarantee
Medium Severity
self._update_related_user_fields(old_name=old_name) was moved from inside the with transaction.atomic(): block to outside it. Previously this call ran within the same transaction as the user update and group assignment. Now, if the transaction rolls back (e.g., due to a DB constraint violation on user_groups.set), the related TaskField records would still be updated with the new name, causing data inconsistency between the user and their associated task fields.
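The atomicity point can be demonstrated with a toy transaction over a dict store. This sketches the principle only; in the real service the fix is simply keeping the call inside the `with transaction.atomic():` block:

```python
class Txn:
    """Toy transaction: snapshot on enter, restore on exception."""
    def __init__(self, store):
        self.store = store
    def __enter__(self):
        self._snapshot = dict(self.store)
        return self
    def __exit__(self, exc_type, exc, tb):
        if exc_type is not None:
            self.store.clear()
            self.store.update(self._snapshot)  # rollback both writes
            return True  # suppress the error (for the demo only)
        return False

db = {"user.name": "Old", "taskfield.value": "Old"}

def update_user(db, new_name, fail=False):
    with Txn(db):
        db["user.name"] = new_name
        if fail:
            raise RuntimeError("constraint violation on user_groups.set")
        # keep the dependent update INSIDE the transaction
        db["taskfield.value"] = new_name

# A failing save rolls back BOTH writes, leaving the data consistent.
update_user(db, "New", fail=True)
assert db == {"user.name": "Old", "taskfield.value": "Old"}
```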
).delete()

# Assign permissions to template owners (exclude soft-deleted)
template_owners = template.owners.filter(is_deleted=False)
🟡 Medium services/attachments.py:212
In _assign_template_permissions, the query for template owners at line 212 does not filter by type=OwnerType.USER, unlike the equivalent queries in _assign_task_permissions and _assign_workflow_permissions. If a group-type owner has its user field populated (e.g., data inconsistency or future schema change), that user incorrectly receives attachment permissions while the group itself might not. Consider adding type=OwnerType.USER to the filter at line 212 to match the other methods.
- template_owners = template.owners.filter(is_deleted=False)
+ template_owners = template.owners.filter(
+     is_deleted=False,
+     type=OwnerType.USER,
+     user__isnull=False,
+ )
Evidence trail:
backend/src/storage/services/attachments.py lines 212-218 (_assign_template_permissions): `template_owners = template.owners.filter(is_deleted=False)` followed by `if owner.user:` check
backend/src/storage/services/attachments.py lines 133-141 (_assign_task_permissions): `TemplateOwner.objects.filter(template_id=workflow.template_id, type=OwnerType.USER, user__isnull=False, is_deleted=False)`
backend/src/storage/services/attachments.py lines 290-298 (_assign_workflow_permissions): `TemplateOwner.objects.filter(template_id=workflow.template_id, type=OwnerType.USER, user__isnull=False, is_deleted=False)`
else:
    stats['skipped'] += 1

except (ValueError, AttributeError, TypeError) as e:
🟡 Medium services/file_sync.py:76
attachment.save(update_fields=['file_id']) at line 65 can raise IntegrityError when the generated file_id collides with an existing record, but the exception handler at line 76 only catches ValueError, AttributeError, and TypeError. This causes the entire sync_all_files() operation to crash instead of logging the error and continuing to the next attachment.
- except (ValueError, AttributeError, TypeError) as e:
+ except (ValueError, AttributeError, TypeError, IntegrityError) as e:
Evidence trail:
backend/src/storage/services/file_sync.py lines 65 (save call) and 76 (exception handler catching only ValueError, AttributeError, TypeError); backend/src/processes/models/workflows/attachment.py lines 31-35 showing `file_id = models.CharField(max_length=64, unique=True, null=True, blank=True)` confirming the unique constraint that can cause IntegrityError on collision.
🟢 Low
Issue in backend/src/processes/tests/test_services/test_comment_service.py:1672:
The test test_update__find_attachments_in_text__ok is parameterized with markdown patterns containing attachment IDs, but the assertion no longer verifies that any IDs are extracted from the text. The old code checked update_attachments_mock.assert_called_once_with(attachment_ids), but the new assertion only verifies refresh_attachments was called with source and user — making the parameterized data unused and the test name misleading. Consider either restoring verification that IDs are extracted from text, or rename the test and remove the parameterized data if extraction is no longer the responsibility of this method.
Also found in 3 other location(s)
backend/src/storage/tests/test_services/test_attachments_permissions.py:436
This test does not actually verify that template owner permissions work correctly. The owner user is already set as a workflow owner via create_test_workflow(user=owner) (which adds the user to workflow.owners and workflow.members). When _assign_workflow_permissions runs, the owner gets access through workflow_owners regardless of whether the template owner logic works. To properly test template owner permissions, use a different user as the template owner who is NOT the workflow creator.
backend/src/storage/tests/test_services/test_file_sync.py:353
Test test_determine_source_type__workflow_via_event__ok does not test what its name implies. The test sets workflow=workflow on the attachment, so _determine_source_type returns SourceType.WORKFLOW immediately at the first if attachment.workflow: check, before the event is ever evaluated. The event_mock setup is completely dead code and the test passes for the wrong reason - it's just testing that a workflow-attached attachment returns WORKFLOW, not anything 'via event'.
backend/src/storage/tests/test_services/test_attachments.py:180
The test test_bulk_create__duplicate_file_ids__ignore_conflicts does not verify its claimed behavior. The assertion Attachment.objects.filter(task=task).count() >= 1 will always pass because the pre-existing 'existing_file' attachment was already created on lines 163-168. The test should verify that: (1) 'new_file' was successfully created, and (2) exactly one attachment exists for 'existing_file' (no duplicate). A proper assertion would be something like assert Attachment.objects.filter(file_id='new_file').exists() and assert Attachment.objects.filter(task=task).count() == 2.


Note
Add a dedicated file storage microservice and replace legacy GCS attachment handling
- A new service (storage/) handles file upload and download via SeaweedFS (S3-compatible), with JWT/Redis-based auth, Alembic migrations, and structured error handling.
- A new Attachment Django model in backend/src/storage/ replaces the legacy FileAttachment workflow; attachments are now identified by file_id strings and carry access_type/source_type metadata with per-object permissions via django-guardian.
- Markdown links ([name](url)) point to FILE_DOMAIN; refresh_attachments syncs storage records and permissions whenever content changes.
- FILE_DOMAIN routes to the new pneumatic-file-service container; all compose files provision the file service stack (app, Postgres, SeaweedFS).
- uploadFiles.ts now POSTs to the file service and receives string fileId/publicUrl; accountId is removed from all upload call sites.
- Attachments are derived from text; the attachments list field is removed from comment creation and workflow event serializers; StoragePermission and the /workflows/attachments routes are removed; GCS upload paths are gone.

📊 Macroscope summarized af43724. 131 files reviewed, 161 issues evaluated, 81 issues filtered, 3 comments posted
🗂️ Filtered Issues
backend/src/accounts/admin.py — 0 comments posted, 3 evaluated, 2 filtered
When photo upload fails (in the except FileServiceException block), the error is logged to Sentry but the form continues to super().save() without raising a validation error or notifying the admin user. The user clicks save, sees a success message, but their photo was never uploaded. This silent failure means the photo upload feature appears to work but silently does nothing on error. [ Already posted ]
In UserAdminChangeForm.save(), when FileServiceException is raised during photo upload, the exception is caught and logged to Sentry, but the form continues to save without the photo being updated. The admin user receives no indication that the photo upload failed - no error message is added to the form, no exception is re-raised, and no Django message is displayed. This results in a silent failure where the user believes the photo was saved but it was not. [ Already posted ]
backend/src/accounts/forms.py — 0 comments posted, 2 evaluated, 2 filtered
The removed settings.PROJECT_CONF['STORAGE'] validation check in clean_photo_file means that when storage is not configured, users can now submit a photo file that will silently fail to upload (caught by FileServiceException in save()), resulting in the contact being saved without the photo and no error feedback to the user. The old code prevented this by adding a form error before the save attempt. [ Already posted ]
When photo upload fails (in the except FileServiceException block), the error is logged to Sentry but the form saves successfully without notifying the admin user. The admin sees success but the contact's photo was never updated, causing silent data loss. [ Cross-file consolidated ]
backend/src/accounts/services/group.py — 0 comments posted, 3 evaluated, 1 filtered
The condition new_name != self.instance.name at line 202 will evaluate to True when new_name is None (not provided in update_kwargs) and self.instance.name has any value. This causes the analytics tracking block to execute incorrectly when the name was not being updated, potentially triggering unnecessary analytics events. [ Out of scope ]
backend/src/analysis/services.py — 0 comments posted, 1 evaluated, 1 filtered
Accessing user.is_authenticated on lines 694 and 711 will raise AttributeError when user is None. The parameter user: Optional[UserModel] = None allows None values, but the code accesses user.is_authenticated without first checking that user is not None. [ Out of scope ]
backend/src/applications/admin.py — 0 comments posted, 3 evaluated, 2 filtered
When self.user is None (the default value set in __init__), line 72 will raise AttributeError: 'NoneType' object has no attribute 'account' when accessing self.user.account. This AttributeError is not a subclass of FileServiceException, so it will not be caught by the except block and will propagate up, causing an unhandled exception. [ Already posted ]
When logo upload fails (in the except FileServiceException block), file_url remains None and the integration is saved without a logo. The admin user gets no indication that the upload failed - the integration is created/updated but missing its logo image. [ Cross-file consolidated ]
backend/src/authentication/services/microsoft.py — 0 comments posted, 2 evaluated, 1 filtered
In _get_user_photo, account.get_owner() can return None if the account has no user with is_account_owner=True. When FileServiceClient(user=None) is instantiated and upload_file_with_attachment is called, the request will be made without authentication headers (since _make_request checks if self.user: before adding auth). This will cause a 401 error from the file service. While this is caught by the exception handler, it represents a silent failure that may not be the intended behavior - the photo upload will fail for any account without an owner. [ Already posted ]
backend/src/processes/services/events.py — 0 comments posted, 2 evaluated, 1 filtered
In the update method, when force_save=False (the default), partial_update modifies self.instance in memory but does not persist the changes to the database. However, notifications (send_mention_notification), websocket events (_send_workflow_event), and analytics (AnalyticService.comment_edited) are still fired based on the uncommitted in-memory state. This causes notifications and analytics to be sent for comment updates that are never actually saved to the database. [ Already posted ]
backend/src/processes/services/tasks/task.py — 0 comments posted, 1 evaluated, 1 filtered
In partial_update, refresh_attachments is called when description is in update_kwargs regardless of whether force_save is True. When force_save=False, the description change has not been persisted to the database yet (it's only added to update_fields for a later save), but refresh_attachments is called immediately. This can cause attachment state to be out of sync with the actual persisted description, especially if the eventual save() fails or is never called. [ Already posted ]
backend/src/processes/services/templates/template.py — 0 comments posted, 2 evaluated, 2 filtered
The refresh_attachments function is called with self.user, which can be None (as set in BaseModelService.__init__ when no user is provided), but the function signature refresh_attachments(source: models.Model, user: UserModel) expects a non-Optional UserModel. If the service is instantiated without a user and self.instance exists, this will pass None as the user argument, likely causing an AttributeError when the function attempts to access user attributes. [ Already posted ]
The refresh_attachments function is called with self.user, which can be None, but the function signature expects a non-Optional UserModel. The condition only checks if 'description' in update_kwargs and self.instance: and does not verify that self.user is not None, which will likely cause an AttributeError when refresh_attachments attempts to use the user object. [ Already posted ]
backend/src/storage/services/attachments.py — 1 comment posted, 18 evaluated, 6 filtered
`Task.objects.get(id=task.id)` will raise `Task.DoesNotExist` if the task has been soft-deleted between when `self.instance.task` was set and this reload. The `BaseSoftDeleteManager` filters out records where `is_deleted=True`, so a soft-deleted task won't be found, causing an unhandled exception. [ Already posted ]

`task.taskperformer_set.all()` includes performers with `directly_status=DirectlyStatus.DELETED`. Looking at the `Task` model reference, there's an `exclude_directly_deleted_taskperformer_set()` method specifically to filter these out. This is a security bug: users/groups who were removed as task performers will still retain access to attachments. [ Already posted ]

`Template.objects.get(pk=template.pk)` will raise `Template.DoesNotExist` if the template has been soft-deleted between when `self.instance.template` was set and this reload. The `BaseSoftDeleteManager` filters out records where `is_deleted=True`, so a soft-deleted template won't be found, causing an unhandled exception. [ Already posted ]

`Workflow.objects.get(id=workflow.id)` will raise `Workflow.DoesNotExist` if the workflow has been soft-deleted between when `self.instance.workflow` was set and this reload. The `BaseSoftDeleteManager` filters out records where `is_deleted=True`, so a soft-deleted workflow won't be found, causing an unhandled exception. [ Already posted ]

The `bulk_create_for_scope` method accepts an arbitrary `source_type` string parameter, but `_assign_restricted_permissions()` only handles three specific source types. If an unsupported `source_type` is passed with `access_type=AccessType.RESTRICTED`, the attachment is created but no permissions are assigned, silently leaving the file inaccessible. [ Already posted ]

The `UserObjectPermission` filter at lines 506-511 does not include `content_type=ctype` on the `UserObjectPermission` model itself, only filtering via `permission__content_type`. While guardian's `UserObjectPermission` model has its own `content_type` field that specifies which model the `object_pk` belongs to, this filter relies solely on the permission's content_type. The same applies to the `GroupObjectPermission` filter at lines 517-522. For robustness, both filters should include `content_type=ctype` to explicitly constrain by the object's content type, not just the permission's. [ Already posted ]

backend/src/storage/services/file_service.py — 0 comments posted, 5 evaluated, 3 filtered
In `_get_user_token`, every call without a valid token in the request creates a new token via `PneumaticToken.create()` with `for_api_key=True` (line 76). Since API tokens are long-lived and this token is not stored or reused, repeated calls to `_make_request` will accumulate orphaned tokens in the cache that are never cleaned up, leading to cache memory growth over time. [ Already posted ]

If `capture_sentry_message` (called at lines 185-192 or 196-203 within the `try` block) raises `ValueError`, `KeyError`, or `AttributeError`, the exception will be caught by the `except` clause at line 206 and incorrectly reported as a 'File service response parsing error', masking the actual error from the Sentry logging call. [ Already posted ]

If `response.json()` raises an exception other than `ValueError`, `KeyError`, or `AttributeError` (such as `TypeError` from an unexpected encoding issue), the exception will propagate up uncaught by this handler. While the `finally` block ensures `file_obj` is closed, the error message and logging in the except block won't apply to these edge cases. [ Already posted ]

backend/src/storage/services/file_sync.py — 1 comment posted, 10 evaluated, 7 filtered
The `save()` call on line 65 can raise `django.db.IntegrityError` if the generated `file_id` collides with an existing value (since `file_id` has `unique=True`), but the exception handler on line 76 only catches `ValueError`, `AttributeError`, and `TypeError`. This will cause an unhandled exception and crash the sync process. [ Already posted ]

The handler catches only `(ValueError, AttributeError, TypeError)`, but `Attachment.objects.create()` on line 125 and `attachment.save()` on line 65 can raise `IntegrityError` (from `django.db.utils`) if the `file_id` unique constraint is violated (e.g., due to race conditions or duplicate data). This uncaught exception will crash the entire sync operation instead of incrementing the error counter and continuing. [ Already posted ]

`sync_to_new_attachment_model` only catches `ValueError`, `AttributeError`, and `TypeError`, but `Attachment.objects.create()` can raise `django.db.IntegrityError` (e.g., if `account_id` is required by `AccountBaseMixin` but the `FileAttachment` has no account, or for other constraint violations). This would cause the entire sync loop to abort unexpectedly instead of logging the error and continuing to the next attachment. [ Already posted ]

The `file_id` generation on lines 181-185 truncates to 64 characters after combining `account_id`, `attachment.id`, `salt`, and `filename`. If the filename is very long (e.g., 60+ chars), the truncation could cut into the unique identifiers (salt, `attachment.id`), potentially creating collision risks for attachments with similar long filenames under the same account, though the 6-character salt provides some mitigation. [ Already posted ]

Truncating `file_id` to 64 characters with `file_id[:64]` after generating it from `account_id`, `attachment.id`, `salt`, and `filename` could produce non-unique values if two attachments have identical prefixes up to 64 characters (e.g., long filenames with the same account and nearby IDs). Since `file_id` has a `unique=True` constraint, this would cause an `IntegrityError` when saving to `FileAttachment`. [ Already posted ]

The truncation (`return file_id[:64]`) could cause uniqueness collisions when long filenames are involved. The prefix `{account_id}_{attachment_id}_{salt}_` can easily consume 20+ characters, and if two attachments have similar long filenames, truncation could produce identical file_ids. Consider truncating only the filename portion while preserving the unique prefix components. [ Failed validation ]

The `Attachment.objects.create()` call in `sync_to_new_attachment_model` (the caller of `_determine_source_type`) can raise `django.db.IntegrityError` or other database exceptions, but only `ValueError`, `AttributeError`, and `TypeError` are caught. If a database constraint violation occurs during the sync loop, it will terminate the entire batch instead of counting the error and continuing. [ Already posted ]

backend/src/storage/tests/test_refresh_attachments.py — 0 comments posted, 3 evaluated, 1 filtered
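The `file_id` truncation comments above suggest cutting only the filename portion. A minimal sketch of that fix (the function name and the `max_len` parameter are hypothetical, not taken from the codebase): the unique prefix always survives, and an over-long prefix falls back to a hash.

```python
import hashlib


def build_file_id(account_id, attachment_id, salt, filename, max_len=64):
    """Hypothetical sketch: keep {account}_{attachment}_{salt}_ intact and
    truncate only the filename so collisions from long names are avoided."""
    prefix = f"{account_id}_{attachment_id}_{salt}_"
    room = max_len - len(prefix)
    if room <= 0:
        # Degenerate case: the prefix alone exceeds the limit; hash it.
        return hashlib.sha256(prefix.encode()).hexdigest()[:max_len]
    return prefix + filename[:room]
```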
`test_refresh_attachments_for_text__empty_text__deletes_unused` does not mock `mock_filter.values_list.return_value`. The actual code calls `list(to_delete.values_list('id', flat=True))` before passing the result to `clear_guardian_permissions_for_attachment_ids`. Without the mock, this returns a `Mock` object, and `list(Mock())` will raise `TypeError` because `Mock` is not iterable, or `clear_guardian_permissions_for_attachment_ids` (which is also not patched) may fail when receiving unexpected input. [ Failed validation ]

backend/src/storage/tests/test_services/test_attachments.py — 0 comments posted, 2 evaluated, 1 filtered
`test_bulk_create__duplicate_file_ids__ignore_conflicts` does not verify its claimed behavior. The assertion `Attachment.objects.filter(task=task).count() >= 1` will always pass because the pre-existing `'existing_file'` attachment was already created on lines 163-168. The test should verify that: (1) `'new_file'` was successfully created, and (2) exactly one attachment exists for `'existing_file'` (no duplicate). A proper assertion would be something like `assert Attachment.objects.filter(file_id='new_file').exists()` and `assert Attachment.objects.filter(task=task).count() == 2`. [ Cross-file consolidated ]

backend/src/storage/tests/test_services/test_attachments_permissions.py — 0 comments posted, 1 evaluated, 1 filtered
The `owner` user is already set as a workflow owner via `create_test_workflow(user=owner)` (which adds the user to `workflow.owners` and `workflow.members`). When `_assign_workflow_permissions` runs, the owner gets access through `workflow_owners` regardless of whether the template owner logic works. To properly test template owner permissions, use a different user as the template owner who is NOT the workflow creator. [ Cross-file consolidated ]

backend/src/storage/tests/test_services/test_file_sync.py — 0 comments posted, 2 evaluated, 1 filtered
`test_determine_source_type__workflow_via_event__ok` does not test what its name implies. The test sets `workflow=workflow` on the attachment, so `_determine_source_type` returns `SourceType.WORKFLOW` immediately at the first `if attachment.workflow:` check, before the event is ever evaluated. The `event_mock` setup is completely dead code, and the test passes for the wrong reason: it is just testing that a workflow-attached attachment returns `WORKFLOW`, not anything 'via event'. [ Cross-file consolidated ]

frontend/src/public/api/commonRequest.ts — 0 comments posted, 1 evaluated, 1 filtered
A `TypeError` can occur in `commonRequest` when using the new `fileService` request type. The code introduces `'fileService'` as a `TRequestType` and attempts to retrieve `fileServiceUrl` from the browser configuration (`getBrowserConfigEnv().api`). If the runtime configuration (e.g., `common.json` or injected environment) is missing `fileServiceUrl`, it resolves to `undefined`. Subsequently, `mergePaths` is called with `undefined` as the first argument (`requestBaseUrlsMap['fileService']`), which causes a `TypeError` when accessing `left.length`. [ Already posted ]

frontend/src/public/api/fileServiceUpload.ts — 0 comments posted, 1 evaluated, 1 filtered
`uploadFileToFileService` performs deep destructuring on the result of `getBrowserConfigEnv()`. Since `getBrowserConfigEnv` returns a fallback empty object `{}` when configuration is missing, the `api` property will be `undefined`. Attempting to destructure `urls` from this `undefined` `api` property will throw a runtime `TypeError`, causing the application to crash if the configuration is not loaded. [ Already posted ]

frontend/src/public/components/Workflows/WorkflowsGridPage/WorkflowCard/utils/getClonedKickoff.ts — 0 comments posted, 1 evaluated, 1 filtered
A type mismatch occurs in `cloneAttachments` due to missing string conversion. The `copyAttachment` function returns `ICopyAttachmentResponse`, where `id` is a `number` (as defined in `frontend/src/public/api/copyAttachment.ts`), but `TUploadedFile` has been updated to expect `id: string`. In `cloneAttachments`, the numeric ID from the response is assigned directly to the `id` property of the new attachment object (`id: attachmentsMap.get(attachment.id)`). This object is then cast to `TUploadedFile`, hiding the type mismatch. Downstream code expecting `id` to be a string will crash when attempting to invoke string methods (e.g., `id.toLowerCase()`, `id.startsWith()`) on the numeric value. [ Out of scope ]

storage/src/application/use_cases/file_upload.py — 0 comments posted, 3 evaluated, 3 filtered
In `execute()`, the file is uploaded to S3 storage (lines 68-73) before the database record is saved (lines 76-78). If the database operation fails after a successful S3 upload, the uploaded file becomes orphaned in storage with no cleanup mechanism. The operations should either be reordered (DB first, then S3) or include compensation logic to delete the S3 object on database failure. [ Already posted ]

If `create` or `commit` raises an exception (e.g., `DatabaseConstraintError`, `DatabaseConnectionError`), the exception propagates without any mechanism to delete the already-uploaded S3 file, leaving storage in an inconsistent state. [ Already posted ]

If `self._fastapi_base_url` contains a trailing slash, the generated `public_url` on line 81 will have a double slash (e.g., `https://example.com//file-id`). Consider normalizing the base URL or using `urljoin` for proper URL construction. [ Already posted ]

storage/src/infra/http_client.py — 0 comments posted, 3 evaluated, 2 filtered
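The compensation logic suggested in the upload-ordering comments above can be sketched in plain Python (the class and the `storage`/`repo` collaborators are hypothetical stand-ins for the real use case and repositories): if the DB write fails after a successful upload, the just-uploaded object is deleted before re-raising.

```python
# Hypothetical sketch: delete the uploaded object when the DB record
# cannot be created, so storage and the database stay consistent.
class UploadFileUseCase:
    def __init__(self, storage, repo):
        self._storage = storage
        self._repo = repo

    def execute(self, file_path: str, data: bytes) -> None:
        self._storage.upload(file_path, data)
        try:
            self._repo.create(file_path)  # DB record for the uploaded file
        except Exception:
            # Compensate: no orphaned object is left behind.
            self._storage.delete(file_path)
            raise
```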
`httpx.AsyncClient()` is created without explicit timeout configuration. If the service expects specific timeout behavior (e.g., the 30 seconds mentioned in the error message), the client should be initialized with `httpx.AsyncClient(timeout=30.0)` to ensure consistent behavior. [ Already posted ]

`HttpTimeoutError` is raised with a hardcoded `timeout=30.0` value, but no timeout is actually configured on the `httpx.AsyncClient()`. The default timeout for `httpx.AsyncClient` is 5 seconds, not 30 seconds. This causes misleading error messages when timeouts occur, making debugging difficult. [ Already posted ]

storage/src/infra/repositories/storage_service.py — 0 comments posted, 15 evaluated, 14 filtered
The `StorageError` constructor expects an `error_code_key` that exists in `DOMAIN_ERROR_CODES`, but `MSG_STORAGE_009.format(details=str(e))` passes a formatted message string. This will cause a `KeyError` when `DOMAIN_ERROR_CODES[error_code_key]` is accessed. Should use `StorageError.bucket_create_failed(str(e))` instead. [ Already posted ]

`StorageError` is instantiated incorrectly. `StorageError.__init__` expects `error_code_key` as the first parameter (a key to look up in `DOMAIN_ERROR_CODES`), but the code passes a formatted message string `MSG_STORAGE_009.format(details=str(e))`. This will cause a `KeyError` when `DOMAIN_ERROR_CODES[error_code_key]` is executed, because the formatted message string won't exist as a key in the dictionary. [ Already posted ]

The `StorageError` constructor expects an `error_code_key` (e.g., `'STORAGE_UPLOAD_FAILED'`) as the first argument, which is then looked up in `DOMAIN_ERROR_CODES`. However, on lines 119-121, a formatted message string (`MSG_STORAGE_009.format(...)`) is passed instead of an error code key. This will cause a `KeyError` at runtime when `DOMAIN_ERROR_CODES[error_code_key]` tries to find a key like `'Failed create bucket: ...'`. Should use `StorageError.bucket_create_failed(details=str(e))` instead. [ Already posted ]

The `StorageError` constructor expects an `error_code_key` (e.g., `'STORAGE_UPLOAD_FAILED'`) as its first argument, which gets looked up in `DOMAIN_ERROR_CODES`. Instead, the code passes a pre-formatted message string (`MSG_STORAGE_009.format(...)`) directly. This will cause a `KeyError` when `DOMAIN_ERROR_CODES[error_code_key]` is evaluated, crashing the error handling. Should use the classmethods like `StorageError.bucket_create_failed(str(e))` or pass a valid key like `StorageError('STORAGE_BUCKET_CREATE_FAILED', MSG_STORAGE_009.format(details=str(e)))`. [ Already posted ]

`ContentType=content_type` is passed to `s3.put_object()` even when `content_type` is `None`. While S3's `put_object` accepts `ContentType` as an optional parameter, explicitly passing `None` may cause a `ParamValidationError` from botocore, since it expects a string type when the parameter is provided. The code should conditionally include this parameter only when `content_type` is not `None`. [ Already posted ]

`StorageError` is instantiated incorrectly with `MSG_STORAGE_010.format(details=str(e))` as the first argument. The constructor expects an `error_code_key` string that maps to `DOMAIN_ERROR_CODES`, not a formatted error message. This will raise `KeyError` at runtime. [ Already posted ]

`StorageError` is called with `MSG_STORAGE_010.format(details=str(e))` as the first argument, but the constructor expects an error code key from `DOMAIN_ERROR_CODES`. This will raise a `KeyError` instead of the intended `StorageError`. Should use `StorageError.upload_failed(str(e))` or pass a valid key like `StorageError('STORAGE_UPLOAD_FAILED', MSG_STORAGE_010.format(details=str(e)))`. [ Already posted ]

The `StorageError` constructor expects an `error_code_key` that exists in `DOMAIN_ERROR_CODES`, but `MSG_STORAGE_010.format(details=str(e))` passes a formatted message string. This will cause a `KeyError`. Should use `StorageError.upload_failed(str(e))` instead. [ Already posted ]

`StorageError` is called with a formatted message string (`MSG_STORAGE_010.format(...)`) instead of a valid error code key. This will raise a `KeyError` at runtime. Should use `StorageError.upload_failed(details=str(e))` instead. [ Already posted ]

`StorageError` is called with `error_msg` (a formatted string) instead of a valid error code key. This will raise a `KeyError` at runtime. Should use `StorageError.file_not_found_in_storage(file_path=file_path, bucket_name=bucket_name)` instead. [ Already posted ]

`StorageError` is instantiated incorrectly in two places within the `except ClientError` block. On line 174, a plain string `error_msg` is passed, and on line 175, `MSG_STORAGE_011.format(details=str(e))` is passed. Both should be error code keys like `'STORAGE_DOWNLOAD_FAILED'` rather than formatted message strings. This will cause `KeyError` when the code attempts to look up these strings in `DOMAIN_ERROR_CODES`. [ Already posted ]

The `StorageError` constructor expects an `error_code_key` that exists in `DOMAIN_ERROR_CODES`, but `error_msg` is a custom formatted string. This will cause a `KeyError`. Should use `StorageError.file_not_found_in_storage(file_path, bucket_name)` instead. [ Already posted ]

`StorageError` is called with a formatted message string (`MSG_STORAGE_011.format(...)`) instead of a valid error code key. This will raise a `KeyError` at runtime. Should use `StorageError.download_failed(details=str(e))` instead. [ Already posted ]

The `StorageError` constructor expects an `error_code_key` that exists in `DOMAIN_ERROR_CODES`, but `MSG_STORAGE_011.format(details=str(e))` passes a formatted message string. This will cause a `KeyError`. Should use `StorageError.download_failed(str(e))` instead. [ Already posted ]

storage/src/presentation/api/files.py — 0 comments posted, 6 evaluated, 4 filtered
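The error-code-key contract that the `StorageError` comments above describe can be condensed into a sketch (the keys, codes, and message text here are illustrative, not the service's real values): the constructor takes a key into `DOMAIN_ERROR_CODES`, never a pre-formatted message, so the classmethod factories are the safe entry points.

```python
# Illustrative values only -- not the real DOMAIN_ERROR_CODES table.
DOMAIN_ERROR_CODES = {
    "STORAGE_BUCKET_CREATE_FAILED": "ST-009",
    "STORAGE_UPLOAD_FAILED": "ST-010",
}


class StorageError(Exception):
    def __init__(self, error_code_key: str, message: str) -> None:
        # Raises KeyError immediately if a formatted message is passed
        # where a code key belongs -- exactly the bug reported above.
        self.code = DOMAIN_ERROR_CODES[error_code_key]
        super().__init__(message)

    @classmethod
    def upload_failed(cls, details: str) -> "StorageError":
        return cls("STORAGE_UPLOAD_FAILED", f"Failed upload: {details}")

    @classmethod
    def bucket_create_failed(cls, details: str) -> "StorageError":
        return cls("STORAGE_BUCKET_CREATE_FAILED", f"Failed create bucket: {details}")
```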
The `FileAccessDeniedError` constructor at line 132 expects `user_id: int` as its second argument, but `current_user.user_id` can be `None` (for guest tokens or public tokens, `user_id` is not guaranteed to be set). While this won't crash at runtime, the error message `MSG_FILE_005.format(user_id=user_id, file_id=file_id)` will contain the literal string `None` for the user ID, producing a confusing error message. [ Already posted ]

The code passes `current_user.user_id` to `FileAccessDeniedError`, but `user_id` can be `None` (it's not validated as non-null in `AuthenticatedUser` or `get_current_user`). The `FileAccessDeniedError.__init__` signature expects `user_id: int`, not `int | None`. This will cause incorrect error message formatting (showing `None` instead of a user ID). [ Already posted ]

The `Content-Disposition` header is built using `file_record.filename` directly via f-string, but `filename` can be `None` (as seen in `UploadFileCommand.filename: str | None`). When `filename` is `None`, the header becomes `attachment; filename="None"` (a literal string "None"), which is incorrect behavior. The filename should be checked for `None` and handled appropriately (e.g., using a default name or omitting the filename parameter). [ Already posted ]

If `file_record.filename` contains special characters like double quotes (`"`), semicolons, or newlines, the `Content-Disposition` header will be malformed (e.g., `attachment; filename="my"file.txt"`) and could cause parsing failures in HTTP clients. The filename should be properly escaped or encoded using RFC 5987 encoding. [ Already posted ]

storage/src/shared_kernel/auth/dependencies.py — 0 comments posted, 1 evaluated, 1 filtered
The `AuthenticatedUser` class docstring claims "guaranteed non-null fields", but `user_id` is copied directly from `auth_user.user_id` without validation and can be `None`. The `get_current_user` function only checks `is_anonymous` and `account_id is None`, not `user_id`. This contradicts the documented guarantee and causes downstream issues (e.g., line 132 in `download_file`). [ Already posted ]

storage/src/shared_kernel/auth/redis_client.py — 0 comments posted, 1 evaluated, 1 filtered
The `get_redis_client()` function creates a new `RedisAuthClient` instance on every call, which in turn creates a new Redis connection pool via `redis.from_url()`. Since callers in `services.py` and `token_auth.py` invoke `get_redis_client()` on each authentication request, this leads to connection pool proliferation and potential connection exhaustion under load. The client should be cached as a singleton or use a shared connection pool. [ Already posted ]

storage/src/shared_kernel/database/models.py — 0 comments posted, 2 evaluated, 1 filtered
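The singleton caching suggested in the `get_redis_client()` comment above can be sketched with `functools.lru_cache` (the `RedisAuthClient` body here is a stand-in; the real class wraps `redis.from_url()`):

```python
from functools import lru_cache


class RedisAuthClient:
    """Stand-in for the real client, which owns a Redis connection pool."""

    def __init__(self) -> None:
        self.pool_created = True  # the real ctor calls redis.from_url(...)


@lru_cache(maxsize=1)
def get_redis_client() -> RedisAuthClient:
    # One shared instance (and therefore one connection pool) per process,
    # instead of a fresh pool on every authentication request.
    return RedisAuthClient()
```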
The `FileRecordORM` model defines `content_type` and `filename` columns as `nullable=False`, but the `FileRecord` domain entity defines these fields as `str | None`. When `FileRecordMapper.entity_to_orm()` is called with a `FileRecord` where `content_type` or `filename` is `None`, the subsequent database insert/flush will fail with an `IntegrityError` due to the NOT NULL constraint violation. [ Already posted ]

storage/src/shared_kernel/di/container.py — 0 comments posted, 1 evaluated, 1 filtered
The `async for` pattern in `get_db_session` breaks exception propagation to `get_async_session`. When FastAPI throws an exception into `get_db_session` during request cleanup, that exception is raised at the `yield session` line in `get_db_session`, but it is NOT propagated into `get_async_session()`. This means the `except SQLAlchemyError` block in `get_async_session` will never execute, and `session.rollback()` will not be called on errors. The underlying generator may be abandoned without proper cleanup. The correct pattern is to directly use `get_async_session` as a dependency, or to properly delegate using `async with`, `aclosing()`, or by re-yielding with a try/finally that handles exception forwarding. [ Already posted ]

storage/src/shared_kernel/exceptions/base_exceptions.py — 0 comments posted, 1 evaluated, 1 filtered
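The delegation fix described in the `async for` comment above can be shown with a self-contained sketch (`FakeSession` is a stand-in, not the real SQLAlchemy session): wrapping the session provider as an async context manager and entering it with `async with` lets an exception raised inside the request reach the rollback handler.

```python
import asyncio
from contextlib import asynccontextmanager


class FakeSession:
    def __init__(self) -> None:
        self.rolled_back = False

    async def rollback(self) -> None:
        self.rolled_back = True


@asynccontextmanager
async def get_async_session():
    session = FakeSession()
    try:
        yield session
    except Exception:
        # Runs because `async with` propagates the exception into the
        # generator at the yield point (unlike iterating with `async for`).
        await session.rollback()
        raise


async def handle_request() -> FakeSession:
    session_seen = None
    try:
        async with get_async_session() as session:
            session_seen = session
            raise RuntimeError("request failed")
    except RuntimeError:
        pass
    return session_seen
```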
The `to_response` method accepts arbitrary `**kwargs` (line 93) and passes them to `ErrorResponse` (line 105), but `ErrorResponse` is a dataclass that only accepts specific fields (`timestamp`, `request_id`). If a caller passes any kwargs with keys other than `timestamp` or `request_id`, a `TypeError` will be raised at runtime (e.g., `to_response(trace_id="abc")` would fail). [ Already posted ]

storage/src/shared_kernel/exceptions/external_service_exceptions.py — 0 comments posted, 3 evaluated, 1 filtered
In `ExternalServiceError.__init__`, line 29 accesses `EXTERNAL_SERVICE_ERROR_CODES[error_code_key]` without validating that `error_code_key` exists in the dictionary. If an invalid key is passed, this will raise a `KeyError` with no helpful context about valid keys. While subclasses use known keys, the class is public and could be instantiated directly with an invalid key. [ Already posted ]

storage/src/shared_kernel/middleware/auth_middleware.py — 0 comments posted, 3 evaluated, 2 filtered
`PublicAuthService.get_token()` expects a header in the format `"Token <token>"` (checking `parts[0] == cls.HEADER_PREFIX`), but `authenticate_public_token` receives the raw token value directly from cookies or headers without this prefix. This means `get_token()` will always return `None` for public token authentication, since the raw token won't have the required `"Token "` prefix, causing public token authentication to silently fail. [ Already posted ]

The `authenticate_public_token` method passes the raw token to `PublicAuthService.get_token()`, but `get_token()` expects a header string in the format `"Token <token_value>"` (it splits on whitespace and checks `parts[0] == 'Token'`). When called from a cookie source (line 87), the `raw_token` will be just the token value without the `"Token "` prefix, causing `get_token()` to always return `None`. The same issue occurs for the `'X-Public-Authorization'` header source if clients send just the token without the prefix format. [ Already posted ]

storage/src/shared_kernel/uow/unit_of_work.py — 0 comments posted, 1 evaluated, 1 filtered
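One way to fix the prefix mismatch described in the public-token comments above is to normalize both input shapes before calling `get_token()`. A minimal sketch (the helper name is hypothetical): it accepts either a bare cookie value or a `"Token <value>"` header and returns the bare token in both cases.

```python
HEADER_PREFIX = "Token"


def normalize_auth_value(raw: str) -> str:
    """Hypothetical normalizer: accept 'Token abc123' (header form) or
    'abc123' (cookie form) and return just the token value."""
    parts = raw.strip().split()
    if len(parts) == 2 and parts[0] == HEADER_PREFIX:
        return parts[1]
    return parts[0] if parts else ""
```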
In `UnitOfWork`, calling `commit()` (line 35), which calls `self._session.commit()` inside the `begin()` context manager, and then having `__aexit__` call `self._transaction.__aexit__(*args)` on the same transaction, creates problematic double-commit behavior. SQLAlchemy's `session.commit()` commits and closes the transaction, but then the transaction's `__aexit__` will attempt to commit/rollback on an already-finished transaction. The `commit()` method should operate on `self._transaction`, or the class should not mix explicit commits with the context manager pattern. [ Failed validation ]

storage/tests/conftest.py — 0 comments posted, 7 evaluated, 4 filtered
The `mock_upload_use_case_execute` fixture patches `UploadFileUseCase.execute` without specifying `new_callable=AsyncMock`. Compared to the similar fixtures in this file (e.g., `mock_download_use_case_get_metadata` at lines 81-85, `mock_download_use_case_get_stream` at lines 86-90, and the new fixtures at lines 53-60 and 63-70), async methods are patched with `new_callable=AsyncMock`. If `UploadFileUseCase.execute` is an async method, tests using this fixture that `await` the mocked method will fail with a `TypeError` because a regular `MagicMock` is not awaitable. [ Already posted ]

The `side_effect` setter only propagates the side effect to `self._metadata.side_effect` (line 123) but not to `self._stream.side_effect`. If the old execute method could fail at either the metadata or stream retrieval stage, this incomplete propagation could cause tests to behave unexpectedly when simulating errors. [ Already posted ]

If `RedisAuthClient.get` is an async method, the mock will not work correctly when awaited, because `new_callable=AsyncMock` is not specified. Awaiting a regular `MagicMock` raises `TypeError: object MagicMock can't be used in 'await' expression`. Compare with similar fixtures like `mock_download_use_case_get_metadata` (lines 54-60) that use `new_callable=AsyncMock` for async methods. [ Already posted ]

If `get_db_session` is an async function or async context manager, the mock will not work correctly when awaited, because `new_callable=AsyncMock` is not specified. This would cause `TypeError` at runtime when test code awaits the mock. [ Already posted ]

storage/tests/fixtures/e2e.py — 0 comments posted, 6 evaluated, 5 filtered
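The `AsyncMock` fix recommended in the fixture comments above can be demonstrated in a few lines (the `UploadFileUseCase` body here is a hypothetical stand-in for the real async use case): patching an async method with `new_callable=AsyncMock` yields an awaitable mock, whereas a plain `MagicMock` would fail at the `await`.

```python
import asyncio
from unittest.mock import AsyncMock, patch


class UploadFileUseCase:
    """Stand-in for the real async use case."""

    async def execute(self) -> str:
        return "real"


async def run_upload() -> str:
    return await UploadFileUseCase().execute()


def demo() -> str:
    # Without new_callable=AsyncMock, patch would install a MagicMock whose
    # call result is not awaitable, and the `await` above would raise
    # TypeError; AsyncMock's call result is a coroutine.
    with patch.object(
        UploadFileUseCase, "execute",
        new_callable=AsyncMock, return_value="mocked",
    ):
        return asyncio.run(run_upload())
```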
`Iterator[MagicMock]` is incorrect since the fixture uses `return` instead of `yield`. This won't cause a runtime error, but the type annotation is misleading. Other similar fixtures in this file (`mock_http_client`, `mock_storage_service`, etc.) have the same issue, suggesting this follows an existing pattern, but for a fixture that uses `return`, the type should just be `MagicMock`, not `Iterator[MagicMock]`. [ Already posted ]

`Iterator[AsyncMock]` suggests this is a generator fixture using `yield`, but it uses `return` instead. While pytest handles both patterns, this type mismatch could confuse type checkers and developers. However, this matches the existing pattern in other fixtures in this file (e.g., `mock_auth_middleware`, `mock_redis_client`), so it's consistent with the codebase style. [ Already posted ]

`Iterator[AsyncMock]` suggests this is a generator fixture using `yield`, but it uses `return` instead. While pytest handles both patterns, this type mismatch could confuse type checkers and developers. However, this matches the existing pattern in other fixtures in this file. [ Already posted ]

`Iterator[MagicMock]` is incorrect since the fixture uses `return` instead of `yield`. The type should just be `MagicMock`. This follows an existing incorrect pattern in the file but is still technically wrong. [ Already posted ]

`Iterator[AsyncMock]` is incorrect: the fixture uses `return` instead of `yield`, so it returns an `AsyncMock` directly, not an `Iterator[AsyncMock]`. This mismatch will cause type checking failures and could confuse consumers expecting iterator/generator behavior (e.g., fixture teardown via generator). [ Already posted ]

storage/tests/fixtures/integration.py — 0 comments posted, 3 evaluated, 1 filtered
`sqlite:///:memory:` creates a separate database per connection. Since the engine uses a connection pool and `Base.metadata.create_all` runs on one connection, subsequent sessions may connect to different in-memory databases where the tables don't exist. This will cause `sqlalchemy.exc.OperationalError: no such table` errors. Fix by using `StaticPool` with `poolclass=StaticPool` and `connect_args={'check_same_thread': False}` to ensure all connections share the same in-memory database. [ Already posted ]

storage/tests/unit/test_token_auth.py — 0 comments posted, 3 evaluated, 3 filtered
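The `StaticPool` fixture fix suggested in the in-memory SQLite comment above can be sketched as follows (function names are hypothetical; this assumes a SQLAlchemy 1.4+ environment): `StaticPool` pins every checkout to the same underlying connection, so all sessions see the schema created on the first one.

```python
from sqlalchemy import create_engine, text
from sqlalchemy.pool import StaticPool


def make_test_engine():
    # StaticPool: one shared connection, so the single in-memory SQLite
    # database is visible to every checkout from the pool.
    return create_engine(
        "sqlite:///:memory:",
        poolclass=StaticPool,
        connect_args={"check_same_thread": False},
    )


def demo() -> int:
    engine = make_test_engine()
    with engine.begin() as conn:  # first checkout: create the schema
        conn.execute(text("CREATE TABLE t (id INTEGER)"))
    with engine.connect() as conn:  # second checkout: same database
        return conn.execute(text("SELECT count(*) FROM t")).scalar()
```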
The mock of `PneumaticToken.encrypt` is not awaitable. Since `encrypt` is an async method and `data` does `await cls.encrypt(token)`, patching it with `return_value=encrypted_token` creates a non-awaitable mock. When `await` is called on the synchronous mock, it will fail. The mock should use `AsyncMock` or be configured with `new_callable=AsyncMock` to be properly awaitable. [ Cross-file consolidated ]

The `encrypt` mock at lines 53-56 is not awaitable, but `PneumaticToken.data` awaits the result of `cls.encrypt(token)`. [ Already posted ]

The `encrypt` mock at lines 82-85 is not awaitable, but `PneumaticToken.data` awaits the result of `cls.encrypt(token)`. [ Already posted ]

Note
High Risk
Replaces core attachment/comment/file-field handling and introduces object-level permission tracking plus new runtime services (file-service/SeaweedFS), which could impact data integrity and access control if migration paths or refresh logic misbehave.
Overview
This PR migrates backend file handling from the legacy `FileAttachment`/Google Cloud flow to a dedicated file microservice plus a new `src.storage` `Attachment` model, using Markdown file links (and extracted `file_id`s) to create/sync/delete attachment records and maintain restricted access.

Workflows, tasks, templates, comments, and file-type task fields are updated to refresh/sync attachments on create/update/delete, reassign restricted permissions when task performers change, and tighten comment validation (text is now required; attachments no longer satisfy "empty comment").
Admin/user/contact/integration image uploads now go through `FileServiceClient` with Sentry logging on failures; public template token auth now caches template data and explicitly invalidates cached tokens on template save; `django-guardian` is introduced with a custom `GroupObjectPermission` model and test harness tweaks; local `docker-compose` is expanded to run `backend`, `file-service`, its Postgres, and SeaweedFS, and GCP storage deps are updated (notably `google-cloud-storage`/`google-auth`).

Written by Cursor Bugbot for commit af43724. This will update automatically on new commits.