📝 Walkthrough: Adds export-preparation and progress strings to the dashboard localization files for five languages; refactors the usage-logs export into a Redis-backed asynchronous batch job with three new public actions; switches the frontend UsageLogsFilters to a polling-based export flow; adds and extends unit tests covering batch export and the UI progress indicator.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~45 minutes

🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Summary of Changes (Gemini Code Assist): This pull request enhances the usage log export feature by moving the heavy lifting to a Redis-backed background process, improving performance and reliability for large datasets. It also patches a CSV formula injection vulnerability and improves the user experience with a progress indicator and by ensuring filter accuracy during export.
Code Review
This pull request introduces a robust backend-driven CSV export for usage logs, complete with background job processing via Redis and a progress UI on the dashboard. The changes significantly improve the export functionality for large datasets. Additionally, it hardens security by fixing a CSV formula injection vulnerability related to leading whitespace.
My review includes a couple of suggestions for improvement:
- Replacing hardcoded Chinese strings in server actions with English or i18n keys for better maintainability and internationalization.
- A performance optimization on the frontend by memoizing a complex event handler function.
Overall, this is a well-executed feature enhancement with good test coverage.
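The leading-whitespace aspect of the formula injection fix can be illustrated with a small sketch. This is a hypothetical helper, not the PR's actual code: cells whose first non-whitespace character is `=`, `+`, `-`, or `@` are neutralized with a leading single quote, which is exactly what a naive check that only inspects the first character would miss for inputs like `"  =cmd"`.

```typescript
// Hypothetical CSV cell escaping covering the leading-whitespace bypass.
// A cell like "  =HYPERLINK(...)" is still interpreted as a formula by
// spreadsheet apps, so the check must skip leading whitespace.
function escapeCsvCell(value: string): string {
  const needsEscape = /^\s*[=+\-@]/.test(value);
  // Prefix with a single quote so spreadsheets treat the cell as text.
  const safe = needsEscape ? `'${value}` : value;
  // Standard CSV quoting for delimiters, quotes, and newlines.
  if (/[",\n]/.test(safe)) {
    return `"${safe.replace(/"/g, '""')}"`;
  }
  return safe;
}
```

A check anchored only at index 0 (`value[0] === "="` etc.) would pass `"  =cmd"` through unescaped; the `^\s*` prefix closes that gap.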
src/actions/usage-logs.ts (Outdated)

```ts
    error: undefined,
  });
} catch (error) {
  logger.error("生成使用日志导出任务失败:", error);
```
The log message here is hardcoded in Chinese. This is also the case for other new log messages and error messages returned to the client in this file (e.g., lines 267, 343, 353, 375, 408).
For consistency with the rest of the codebase and to support internationalization, these strings should be in English or use translation keys. Hardcoding user-facing error messages in a single language can lead to a confusing user experience for those using other locales.
Suggested change:

```diff
- logger.error("生成使用日志导出任务失败:", error);
+ logger.error("Failed to run usage logs export job:", error);
```
```diff
  const handleExport = async () => {
+   const runId = exportRunIdRef.current + 1;
+   exportRunIdRef.current = runId;
    setIsExporting(true);
+   setExportStatus({
+     jobId: "",
+     status: "queued",
+     processedRows: 0,
+     totalRows: 0,
+     progressPercent: 0,
+   });
    try {
-     const result = await exportUsageLogs(localFilters);
-     if (!result.ok) {
-       toast.error(result.error || t("logs.filters.exportError"));
+     const exportFilters = sanitizeFilters(filters);
+     const startResult = await startUsageLogsExport(exportFilters);
+     if (exportRunIdRef.current !== runId) {
+       return;
+     }
+     if (!startResult.ok) {
+       setExportStatus(null);
+       toast.error(startResult.error || t("logs.filters.exportError"));
        return;
      }
+     const jobId = startResult.data.jobId;
+     while (true) {
+       if (exportRunIdRef.current !== runId) {
+         return;
+       }
+       const statusResult = await getUsageLogsExportStatus(jobId);
+       if (exportRunIdRef.current !== runId) {
+         return;
+       }
+       if (!statusResult.ok) {
+         setExportStatus(null);
+         toast.error(statusResult.error || t("logs.filters.exportError"));
+         return;
+       }
+       setExportStatus(statusResult.data);
+       if (statusResult.data.status === "failed") {
+         toast.error(statusResult.data.error || t("logs.filters.exportError"));
+         return;
+       }
+       if (statusResult.data.status === "completed") {
+         break;
+       }
+       await new Promise((resolve) => window.setTimeout(resolve, 800));
+     }
+     const downloadResult = await downloadUsageLogsExport(jobId);
+     if (exportRunIdRef.current !== runId) {
+       return;
+     }
+     if (!downloadResult.ok) {
+       toast.error(downloadResult.error || t("logs.filters.exportError"));
+       return;
+     }
-     const blob = new Blob([result.data], { type: "text/csv;charset=utf-8;" });
-     const url = window.URL.createObjectURL(blob);
-     const a = document.createElement("a");
-     a.href = url;
-     a.download = `usage-logs-${format(new Date(), "yyyy-MM-dd-HHmmss")}.csv`;
-     document.body.appendChild(a);
-     a.click();
-     document.body.removeChild(a);
-     window.URL.revokeObjectURL(url);
+     downloadCsv(downloadResult.data);
      toast.success(t("logs.filters.exportSuccess"));
    } catch (error) {
      console.error("Export failed:", error);
      toast.error(t("logs.filters.exportError"));
    } finally {
-     setIsExporting(false);
+     if (exportRunIdRef.current === runId) {
+       setExportStatus(null);
+       setIsExporting(false);
+     }
    }
  };
```
The `handleExport` function is re-created on every render of the `UsageLogsFilters` component. To avoid unnecessary re-renders of any child component that receives it as a prop, wrap it in `useCallback`.
```tsx
const handleExport = useCallback(async () => {
  const runId = exportRunIdRef.current + 1;
  exportRunIdRef.current = runId;
  setIsExporting(true);
  setExportStatus({
    jobId: "",
    status: "queued",
    processedRows: 0,
    totalRows: 0,
    progressPercent: 0,
  });
  try {
    const exportFilters = sanitizeFilters(filters);
    const startResult = await startUsageLogsExport(exportFilters);
    if (exportRunIdRef.current !== runId) {
      return;
    }
    if (!startResult.ok) {
      setExportStatus(null);
      toast.error(startResult.error || t("logs.filters.exportError"));
      return;
    }
    const jobId = startResult.data.jobId;
    while (true) {
      if (exportRunIdRef.current !== runId) {
        return;
      }
      const statusResult = await getUsageLogsExportStatus(jobId);
      if (exportRunIdRef.current !== runId) {
        return;
      }
      if (!statusResult.ok) {
        setExportStatus(null);
        toast.error(statusResult.error || t("logs.filters.exportError"));
        return;
      }
      setExportStatus(statusResult.data);
      if (statusResult.data.status === "failed") {
        toast.error(statusResult.data.error || t("logs.filters.exportError"));
        return;
      }
      if (statusResult.data.status === "completed") {
        break;
      }
      await new Promise((resolve) => window.setTimeout(resolve, 800));
    }
    const downloadResult = await downloadUsageLogsExport(jobId);
    if (exportRunIdRef.current !== runId) {
      return;
    }
    if (!downloadResult.ok) {
      toast.error(downloadResult.error || t("logs.filters.exportError"));
      return;
    }
    downloadCsv(downloadResult.data);
    toast.success(t("logs.filters.exportSuccess"));
  } catch (error) {
    console.error("Export failed:", error);
    toast.error(t("logs.filters.exportError"));
  } finally {
    if (exportRunIdRef.current === runId) {
      setExportStatus(null);
      setIsExporting(false);
    }
  }
}, [filters, downloadCsv, t]);
```
🧪 Test results — Overall: ✅ all tests passed
```ts
setTimeout(() => {
  void runUsageLogsExportJob(jobId, finalFilters);
}, 0);
```
setTimeout fire-and-forget is unsafe in serverless / restarted processes
setTimeout(() => void runUsageLogsExportJob(...), 0) defers execution to the next event-loop tick in the same Node.js/Bun process. In a serverless runtime (Vercel, AWS Lambda, etc.), the process can be frozen or terminated immediately after the server action returns its response, meaning the callback never fires and the job remains permanently in "queued" state.
Even on a long-lived self-hosted server, a process restart mid-export leaves the job record in "running" state in Redis for the full 15-minute TTL, causing the client's polling loop to keep running without any progress indicator moving.
If the deployment target is exclusively a persistent server, add a comment clarifying that. If serverless support is ever desired, consider using a proper background task mechanism (e.g. a dedicated /api/export/run route triggered by the action, or a job queue).
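One lightweight direction, sketched here under assumptions that are not in the PR (a per-job heartbeat timestamp written alongside each progress update), is to let the status action itself detect orphaned jobs and re-dispatch them. The record shape, field names, and threshold below are all illustrative:

```typescript
// Hypothetical job record with a heartbeat; the real
// UsageLogsExportJobRecord in the PR does not have this field.
interface ExportJobHeartbeat {
  status: "queued" | "running" | "completed" | "failed";
  lastHeartbeatMs: number; // epoch millis of the last progress write
}

// Illustrative threshold: a queued/running job whose worker has not
// written progress for this long is assumed orphaned.
const STALE_AFTER_MS = 30_000;

function shouldRedispatch(job: ExportJobHeartbeat, nowMs: number): boolean {
  if (job.status === "completed" || job.status === "failed") {
    return false; // terminal states never need a new worker
  }
  return nowMs - job.lastHeartbeatMs > STALE_AFTER_MS;
}
```

The status action would call `shouldRedispatch` on each poll and, when it returns true, kick off a fresh worker (or mark the job failed) instead of letting the client poll a dead job until the Redis TTL expires.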
```tsx
while (true) {
  if (exportRunIdRef.current !== runId) {
    return;
  }

  const statusResult = await getUsageLogsExportStatus(jobId);
  if (exportRunIdRef.current !== runId) {
    return;
  }

  if (!statusResult.ok) {
    setExportStatus(null);
    toast.error(statusResult.error || t("logs.filters.exportError"));
    return;
  }

  setExportStatus(statusResult.data);

  if (statusResult.data.status === "failed") {
    toast.error(statusResult.data.error || t("logs.filters.exportError"));
    return;
  }

  if (statusResult.data.status === "completed") {
    break;
  }

  await new Promise((resolve) => window.setTimeout(resolve, 800));
}
```
No client-side polling timeout — UI can hang for up to 15 minutes
The while (true) loop has no upper bound on iterations or elapsed time. If the background job gets stuck in "running" state (e.g. the server process was restarted mid-export, as noted in the setTimeout concern), the only escape is when the Redis TTL expires and the job record disappears — which takes up to 15 minutes. During that window the UI is stuck in "Exporting..." with no way for the user to recover other than a full page reload.
Consider adding a deadline guard, for example:

```ts
const EXPORT_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes
const deadline = Date.now() + EXPORT_TIMEOUT_MS;
while (true) {
  if (Date.now() > deadline) {
    toast.error(t("logs.filters.exportError"));
    return;
  }
  // ... existing poll logic
}
```
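Factored out of the component, the suggested guard is essentially a generic poll-until-deadline loop. A minimal, self-contained sketch (names are illustrative, not from the PR):

```typescript
// Generic polling helper with a deadline. `check` reports the job state;
// polling stops on "done"/"failed", or with "timeout" once the deadline
// passes, so the caller can always surface an error to the user.
type PollOutcome = "done" | "failed" | "timeout";

async function pollWithDeadline(
  check: () => Promise<"pending" | "done" | "failed">,
  intervalMs: number,
  timeoutMs: number,
): Promise<PollOutcome> {
  const deadline = Date.now() + timeoutMs;
  while (true) {
    const state = await check();
    if (state === "done") return "done";
    if (state === "failed") return "failed";
    if (Date.now() > deadline) return "timeout";
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

The export flow would pass a closure around `getUsageLogsExportStatus(jobId)` as `check` and treat a `"timeout"` outcome the same as a failed status (clear the progress state and show the export error toast).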
src/actions/usage-logs.ts (Outdated)

```ts
});

if (!stored) {
  return { ok: false, error: "导出任务初始化失败" };
```
Hardcoded Chinese error strings returned to the client
Several error messages that surface in the UI are hardcoded in Chinese ("导出任务初始化失败", "导出任务不存在或已过期", "导出文件不存在或已过期", etc.) while the rest of the app uses next-intl. Non-Chinese users will see untranslated strings on failure.
This pattern also appears at:

- `src/actions/usage-logs.ts:353` — `"启动导出任务失败"`
- `src/actions/usage-logs.ts:369` — `"导出任务不存在或已过期"`
- `src/actions/usage-logs.ts:402` — `"导出文件不存在或已过期"`
- `src/actions/usage-logs.ts:408` — `"下载导出文件失败"`
These should either be added to the message files and looked up server-side, or returned as structured error codes that the client maps to translated strings.
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d8d0f5785e
```ts
setTimeout(() => {
  void runUsageLogsExportJob(jobId, finalFilters);
}, 0);
```
Move export execution out of request-scoped `setTimeout`
This queues the actual export work in an in-process timer, and repo-wide search shows runUsageLogsExportJob is only invoked from this callback. If the Node process is restarted or crashes after startUsageLogsExport returns (deploys, autoscaling, OOMs, etc.), the Redis record stays queued/running with no worker that will ever resume it, so the dashboard polls forever until the TTL expires and the export is lost.
src/actions/usage-logs.ts (Outdated)

```ts
await usageLogsExportCsvStore.set(usageLogsExportCsvKey(jobId), csv);
await usageLogsExportStatusStore.set(jobId, {
```
Fail the job when persisting the CSV blob does not succeed
RedisKVStore.set returns false on write failures, but both results are ignored here. In the case where the CSV write fails (for example a transient Redis error or a large export hitting Redis limits) but the status write succeeds, getUsageLogsExportStatus reports completed while downloadUsageLogsExport immediately returns 导出文件不存在或已过期, which is a confusing false-success state for users.
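A sketch of the ordering the reviewer implies, assuming a store whose `set()` resolves to `false` on write failure (the key format and status payload shape here are illustrative, not the project's actual code):

```typescript
// Minimal store interface mirroring the described RedisKVStore behavior:
// set() resolves to false instead of throwing on a failed write.
interface KVStore {
  set(key: string, value: string): Promise<boolean>;
}

// Write the CSV blob first; only mark the job completed if that write
// succeeded, otherwise record a failed status so the client stops polling
// and never sees "completed" with no downloadable file.
async function finalizeExport(
  csvStore: KVStore,
  statusStore: KVStore,
  jobId: string,
  csv: string,
): Promise<"completed" | "failed"> {
  const csvStored = await csvStore.set(`usage-logs-export:csv:${jobId}`, csv);
  if (!csvStored) {
    await statusStore.set(jobId, JSON.stringify({ status: "failed", error: "csv write failed" }));
    return "failed";
  }
  await statusStore.set(jobId, JSON.stringify({ status: "completed" }));
  return "completed";
}
```

The key point is that the status write is derived from the CSV write's result, so the two stores can no longer disagree in the user-visible direction.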
Actionable comments posted: 3
🧹 Nitpick comments (1)
src/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx (1)

207-235: Consider adding a timeout guard to the polling loop. The current `while (true)` loop relies on the server returning a `completed` or `failed` status to terminate. If the server stops responding or the status gets stuck, this can poll indefinitely. Add a maximum retry count or timeout, for example:

```diff
+ const MAX_POLL_ATTEMPTS = 300; // ~4 minutes at 800ms intervals
+ let pollAttempts = 0;
+
  while (true) {
    if (exportRunIdRef.current !== runId) {
      return;
    }
+
+   pollAttempts++;
+   if (pollAttempts > MAX_POLL_ATTEMPTS) {
+     setExportStatus(null);
+     toast.error(t("logs.filters.exportError"));
+     return;
+   }

    const statusResult = await getUsageLogsExportStatus(jobId);
```
ℹ️ Review info

⚙️ Run configuration
- Configuration used: Organization UI
- Review profile: CHILL
- Plan: Pro
- Run ID: 483fc857-4a23-4f8d-a754-74d47a7ec76a

📒 Files selected for processing (9)
- messages/en/dashboard.json
- messages/ja/dashboard.json
- messages/ru/dashboard.json
- messages/zh-CN/dashboard.json
- messages/zh-TW/dashboard.json
- src/actions/usage-logs.ts
- src/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx
- tests/unit/actions/usage-logs-export-retry-count.test.ts
- tests/unit/dashboard-logs-export-progress-ui.test.tsx
```ts
const CSV_HEADERS = [
  "Time",
  "User",
  "Key",
  "Provider",
  "Model",
  "Original Model",
  "Endpoint",
  "Status Code",
  "Input Tokens",
  "Output Tokens",
  "Cache Write 5m",
  "Cache Write 1h",
  "Cache Read",
  "Total Tokens",
  "Cost (USD)",
  "Duration (ms)",
  "Session ID",
  "Retry Count",
] as const;
```
The CSV headers are still hardcoded in English.
These column names go directly into the file users download, but they are fixed to English here, so the current locale has no effect. Consider passing the locale into the export flow and generating the headers from the message files.
As per coding guidelines: "All user-facing strings must use i18n (5 languages supported: zh-CN, zh-TW, en, ja, ru). Never hardcode display text".
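A minimal sketch of the suggested `getCsvHeaders(locale)` shape; the message keys, the two-locale map, and the reduced column list are illustrative stand-ins for the project's real message files, which cover all five locales and all eighteen columns:

```typescript
// Illustrative message map; a real implementation would read these from
// the project's i18n message files rather than an inline constant.
const CSV_HEADER_MESSAGES: Record<string, Record<string, string>> = {
  en: { time: "Time", user: "User", provider: "Provider" },
  "zh-CN": { time: "时间", user: "用户", provider: "服务商" },
};

// Ordered column keys; extend to the full eighteen-column list in practice.
const CSV_COLUMN_KEYS = ["time", "user", "provider"] as const;

function getCsvHeaders(locale: string): string[] {
  // Fall back to English for locales without a message map.
  const messages = CSV_HEADER_MESSAGES[locale] ?? CSV_HEADER_MESSAGES.en;
  return CSV_COLUMN_KEYS.map((key) => messages[key] ?? CSV_HEADER_MESSAGES.en[key]);
}
```

Call sites that currently read `CSV_HEADERS` would receive the requester's locale and call `getCsvHeaders(locale)` when building the first CSV row.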
```ts
interface UsageLogsExportJobRecord extends UsageLogsExportStatus {
  ownerUserId: number;
}
```
Redis lacks a recoverable job payload, so jobs can get permanently stuck.
`UsageLogsExportJobRecord` stores only the status and owner ID, while the `finalFilters` argument the job needs lives only in the `setTimeout` closure (lines 346-348). If the instance restarts, the closure is lost and the job stays in `queued` or `running` forever. No other instance can take over the job either, because the filter parameters cannot be retrieved from Redis. `finalFilters` should be persisted as part of the job record.
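A sketch of the suggested record shape; the filter field names and the `V2` suffix are assumptions for illustration, not the PR's actual types:

```typescript
// Hypothetical filter shape; the real UsageLogsExportFilters in the PR
// likely carries more fields than shown here.
interface UsageLogsExportFilters {
  userId?: number;
  startTime?: string;
  endTime?: string;
}

// Job record extended with the filters, so any instance reading the
// record from Redis can resume or re-run the export after a restart.
interface UsageLogsExportJobRecordV2 {
  status: "queued" | "running" | "completed" | "failed";
  ownerUserId: number;
  filters: UsageLogsExportFilters;
}

function serializeJobRecord(record: UsageLogsExportJobRecordV2): string {
  return JSON.stringify(record);
}

function deserializeJobRecord(raw: string): UsageLogsExportJobRecordV2 {
  return JSON.parse(raw) as UsageLogsExportJobRecordV2;
}
```

With the filters persisted, the worker entry point can take only a `jobId`, load the record, and read the filters from it instead of relying on the enqueueing closure.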
- Replace hardcoded Chinese error strings with English messages
- Add 5-minute polling timeout to prevent UI hanging on stuck jobs
- Check Redis CSV write result and fail job if persist fails
- Add clarifying comment on setTimeout for self-hosted server

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧪 Test results — Overall: ✅ all tests passed
Code Review Summary
This PR hardens the usage logs CSV export with Redis-backed job tracking, batch processing, and CSV formula injection fixes. Two reliability concerns were identified that could impact production behavior.
PR Size: L
- Lines changed: 978 (872 additions, 106 deletions)
- Files changed: 9
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Logic/Bugs | 0 | 0 | 0 | 0 |
| Security | 0 | 0 | 0 | 0 |
| Error Handling | 0 | 2 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 0 | 0 | 0 | 0 |
| Simplification | 0 | 0 | 0 | 0 |
High Priority Issues (Should Fix)
1. setTimeout fire-and-forget is fragile (src/actions/usage-logs.ts:413)
setTimeout(() => void runUsageLogsExportJob(...), 0) defers the export job to the next event-loop tick. If the server process terminates (restart, crash, serverless cold-start), the callback never fires and the job stays stuck in "queued" state for the 15-minute Redis TTL.
Suggested fix:
```ts
// Option 1: Document limitation for self-hosted servers
setTimeout(() => void runUsageLogsExportJob(jobId, finalFilters), 0);
// NOTE: Fire-and-forget assumes persistent server process.

// Option 2: Use internal API for retry capability
```

2. Unbounded polling loop in UI (src/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx:185)
The while (true) polling loop has no timeout guard. If the background job gets stuck (e.g., server restart mid-export), the UI will hang in "Exporting..." state for up to 15 minutes with no recovery path for the user.
Suggested fix:
```ts
const EXPORT_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes
const deadline = Date.now() + EXPORT_TIMEOUT_MS;
while (true) {
  if (Date.now() > deadline) {
    toast.error(t("logs.filters.exportError"));
    return;
  }
  // ... existing poll logic
}
```

Positive Notes
- CSV formula injection fix correctly handles leading whitespace bypass
- Test coverage is comprehensive for the new functionality
- i18n strings added for all 5 supported languages
- Filter isolation fix (using applied filters vs. draft local filters)
Review Coverage
- Logic and correctness
- Security (OWASP Top 10) - CSV formula injection addressed
- Error handling - Two reliability concerns noted
- Type safety
- Documentation accuracy
- Test coverage - Comprehensive
- Code clarity
Automated review by Claude AI
```ts
  }
}

/**
```
[High] [ERROR-RELIABILITY] setTimeout fire-and-forget is fragile
setTimeout(() => void runUsageLogsExportJob(...), 0) defers the export job to the next event-loop tick. If the server process terminates (restart, crash, serverless cold-start), the callback never fires and the job stays stuck in "queued" state for the 15-minute Redis TTL.
Suggested fix:
```ts
// Option 1: Document limitation
setTimeout(() => void runUsageLogsExportJob(jobId, finalFilters), 0);
// NOTE: Fire-and-forget assumes persistent server process.

// Option 2: Use internal API for retry capability
```

```tsx
while (true) {
  if (exportRunIdRef.current !== runId) {
    return;
```
[High] [ERROR-TIMEOUT] Unbounded polling loop
The while (true) loop has no timeout guard. If the background job gets stuck (server restart mid-export), the UI will hang in "Exporting..." state for up to 15 minutes with no recovery path.
Suggested fix:
```ts
const EXPORT_TIMEOUT_MS = 5 * 60 * 1000; // 5 minutes
const deadline = Date.now() + EXPORT_TIMEOUT_MS;
while (true) {
  if (Date.now() > deadline) {
    toast.error(t("logs.filters.exportError"));
    return;
  }
  // ... existing poll logic
}
```

```ts
const initialResult = await findUsageLogsWithDetails({ ...filters, page: 1, pageSize: 1 });
let estimatedTotalRows = initialResult.total;

if (estimatedTotalRows === 0) {
  const stats = await findUsageLogsStats(filters);
  estimatedTotalRows = stats.totalRequests;
}
```
Redundant probe query before the export loop
findUsageLogsWithDetails({ ...filters, page: 1, pageSize: 1 }) is called solely to obtain the total row count, then immediately falls back to findUsageLogsStats when that count is 0. This incurs an extra DB round-trip (offset-paginated query + COUNT) on every export before the batch loop even starts.
findUsageLogsStats already returns totalRequests which is exactly what estimatedTotalRows needs. The two-step probe could be replaced with a single stats call:
```ts
const stats = await findUsageLogsStats(filters);
let estimatedTotalRows = stats.totalRequests;
```

If there's a specific reason `findUsageLogsWithDetails` produces a more accurate count than `findUsageLogsStats` for certain filter combinations, a comment explaining that would be helpful.
```ts
  window.URL.revokeObjectURL(url);
}, []);

const handleExport = async () => {
```
`handleExport` is the only event handler not wrapped in `useCallback`

Every other handler in the component (`handleApply`, `handleReset`, `downloadCsv`, `handlePresetToggle`, etc.) is memoised with `useCallback`. `handleExport`, being a plain `async () => {...}` literal, creates a fresh function reference on every render, which can unnecessarily invalidate any child that receives it as a prop and is wrapped in `React.memo`.

For consistency with the rest of the component:

```suggestion
const handleExport = useCallback(async () => {
```

Note: you'll also need to add the relevant dependencies to the dependency array and close with `}, [filters, t, downloadCsv]);` or equivalent.
Note: If this suggestion doesn't match your team's coding style, reply to this and let me know. I'll remember it for next time!
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 802f05122b
```ts
await usageLogsExportStatusStore.set(jobId, {
  ...currentJob,
  status: "completed",
  progressPercent: 100,
  error: undefined,
```
Handle failed Redis writes when finalizing an export
`RedisKVStore.set()` returns `false` on write errors, but this completion update ignores that result. If Redis accepts the CSV blob and then rejects this status write, the job stays stuck in its previous `running` state even though the file exists; `downloadUsageLogsExport()` refuses to serve anything unless the status is `completed`, so the user ends up polling until timeout with no way to retrieve a successfully generated export.
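One way to stop swallowing the boolean failure is to convert it into an exception at the write site. This is only a sketch; `KVStore` below is a hypothetical stand-in for the real `RedisKVStore` interface:

```typescript
// Hypothetical minimal KV interface mirroring a boolean-returning set().
interface KVStore<T> {
  set(key: string, value: T): Promise<boolean>;
  get(key: string): Promise<T | null>;
}

// Turn a silent `false` into an error the job runner's catch block can see,
// so the job gets marked "failed" instead of staying stuck in "running".
async function setOrThrow<T>(store: KVStore<T>, key: string, value: T): Promise<void> {
  const ok = await store.set(key, value);
  if (!ok) {
    throw new Error(`KV write failed for key: ${key}`);
  }
}
```

The job runner would then route every status transition (`running`, progress updates, `completed`) through this helper so a rejected write surfaces through the existing failure path.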
```ts
const EXPORT_TIMEOUT_MS = 5 * 60 * 1000;
const deadline = Date.now() + EXPORT_TIMEOUT_MS;

while (true) {
  if (exportRunIdRef.current !== runId) {
```
Don't cut off large exports after five minutes
This hard-codes a 5-minute client deadline even though the backend keeps export jobs alive for 15 minutes and refreshes that TTL on progress updates. Any legitimate export that runs between 5 and 15 minutes—the exact large-export case this PR is meant to support—will be reported as failed here, the `jobId` is discarded, and the user has no way to resume polling or download the file once it eventually finishes.
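As an isolated sketch of a deadline-aware polling loop (hypothetical names and simplified logic; the real UI also tracks `exportRunIdRef` for cancellation and renders progress):

```typescript
type ExportStatus = "queued" | "running" | "completed" | "failed";

// Poll until the job reaches a terminal state or the client-side deadline passes.
// Keeping timeoutMs a parameter makes it easy to align with the backend TTL.
async function pollUntilDone(
  getStatus: () => Promise<{ status: ExportStatus }>,
  timeoutMs: number,
  intervalMs = 0
): Promise<ExportStatus | "timeout"> {
  const deadline = Date.now() + timeoutMs;
  for (;;) {
    if (Date.now() > deadline) return "timeout"; // bail out instead of hanging
    const { status } = await getStatus();
    if (status === "completed" || status === "failed") return status;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}
```

Passing the timeout in from one place keeps the client deadline and the server TTL easy to reconcile in review.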
♻️ Duplicate comments (2)
src/actions/usage-logs.ts (2)
78-80: ⚠️ Potential issue | 🟠 Major: Persist the export filters to Redis as well.

`finalFilters` currently lives only in in-process arguments and closures, so the job record in Redis cannot rebuild the execution context. After an instance restart, or when another instance needs to take over, `queued`/`running` jobs stay stuck until the TTL expires. Consider writing the filter criteria into `UsageLogsExportJobRecord` and having the executor read them from Redis.

Also applies to: 215-218, 340-360
224-278: ⚠️ Potential issue | 🟠 Major: Status write-back failures must not keep being silently swallowed.

`RedisKVStore.set()` merely returns `false` on failure. The `running`, progress-update, `completed`, and `failed` paths here never check that return value, so the client keeps reading stale state and eventually times out, even when the export has actually continued or finished.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Organization UI
Review profile: CHILL
Plan: Pro
Run ID: c0a9fc2e-e73b-4fa7-8aab-11568d28a433
📒 Files selected for processing (2)

- `src/actions/usage-logs.ts`
- `src/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx`
```ts
  error: undefined,
});
} catch (error) {
  logger.error("Failed to run usage logs export job:", error);
```
[High] [SECURITY-VULNERABILITY] Export job stores/displays raw error.message (non-i18n + potential info disclosure)
Evidence:

- `src/actions/usage-logs.ts:267`: `error: error instanceof Error ? error.message : "导出使用日志失败"`
- `src/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx:226`: `toast.error(statusResult.data.error || t("logs.filters.exportError"));`
- `CLAUDE.md:14`: **i18n Required** - All user-facing strings must use i18n (5 languages supported). Never hardcode display text
Why this is a problem: error.message can include internal details (DB/Redis errors) and isn't localized, but it's surfaced directly to end users via the polling UI.
Suggested fix: store a stable errorCode for UI translation (see src/actions/types.ts:9-19) and keep error generic; log the full error server-side only.
```ts
import { ERROR_CODES } from "@/lib/utils/error-messages";

// UsageLogsExportStatus: add `errorCode?: string` (and optionally `errorParams`)
catch (error) {
  logger.error("生成使用日志导出任务失败:", error);
  await usageLogsExportStatusStore.set(jobId, {
    ...currentJob,
    status: "failed",
    progressPercent: 0,
    error: "导出使用日志失败",
    errorCode: ERROR_CODES.INTERNAL_ERROR,
  });
}
```

```ts
window.URL.revokeObjectURL(url);
if (!startResult.ok) {
  setExportStatus(null);
  toast.error(startResult.error || t("logs.filters.exportError"));
```
[High] [STANDARD-VIOLATION] UI renders server error strings directly instead of i18n
Evidence:

- `src/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx:201`: `toast.error(startResult.error || t("logs.filters.exportError"));`
- `CLAUDE.md:14`: **i18n Required** - All user-facing strings must use i18n (5 languages supported). Never hardcode display text
- `src/actions/types.ts:9`: `errorCode` - Error code for i18n translation (optional, recommended for new code)
Why this is a problem: this path will show non-localized messages (e.g. "未登录", "not logged in") in non-Chinese locales, and it also encourages surfacing raw server messages to end users.
Suggested fix: translate via `errorCode` on the client and fall back to a localized generic message.

```ts
import { getErrorMessage } from "@/lib/utils/error-messages";

const tErrors = useTranslations("errors");
const message = startResult.errorCode
  ? getErrorMessage(tErrors, startResult.errorCode, startResult.errorParams)
  : t("logs.filters.exportError");
toast.error(message);
```

Apply the same pattern for `statusResult` / `downloadResult`, and avoid `statusResult.data.error` unless the status payload also carries an `errorCode`.
Code Review Summary
This PR bundles a backend export-job pipeline (Redis job status + CSV storage), a polling/progress UI, i18n message additions, and tests into a near-1000 line change. The main risk is that the new export flow surfaces raw server error strings (including error.message) directly to end users, which violates the repo i18n rule and can leak internal error details.
PR Size: L
- Lines changed: 998
- Files changed: 9
- Split suggestion: (1) server export job + CSV sanitization, (2) dashboard polling/progress UI, (3) tests + i18n keys
Issues Found
| Category | Critical | High | Medium | Low |
|---|---|---|---|---|
| Logic/Bugs | 0 | 0 | 0 | 0 |
| Security | 1 | 0 | 0 | 0 |
| Error Handling | 1 | 1 | 0 | 0 |
| Types | 0 | 0 | 0 | 0 |
| Comments/Docs | 0 | 0 | 0 | 0 |
| Tests | 0 | 0 | 0 | 0 |
| Simplification | 0 | 0 | 0 | 0 |
Critical Issues (Must Fix)
- [High][95] `src/actions/usage-logs.ts:267`: Export job persists `error.message` into job status and the UI displays it; use `errorCode` + a localized message and keep internal details server-side.
High Priority Issues (Should Fix)
- [High][90] `src/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx:201`: UI toasts prefer server `error` strings over i18n; translate via `errorCode` and fall back to `t("logs.filters.exportError")`.
Review Coverage
- Logic and correctness
- Security (OWASP Top 10)
- Error handling
- Type safety
- Documentation accuracy
- Test coverage
- Code clarity
Automated review by Codex AI
- Identified open PR #966 (fix: harden usage logs csv export), pulled metadata + full diff, and applied the `size/L` label (998 lines changed across 9 files).
- Posted 2 inline review comments on diff lines:
  - `src/actions/usage-logs.ts:267`: [High][SECURITY-VULNERABILITY] export job persists/surfaces raw `error.message` (non-i18n + potential internal info disclosure); suggested `errorCode` + localized/generic user message.
  - `src/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx:201`: [High][STANDARD-VIOLATION] UI toasts render server `error` strings directly; suggested translating via `errorCode` and falling back to i18n keys.
- Submitted the full PR summary review using `gh pr review --comment`.
Align client-side deadline with backend's 15-minute TTL to avoid prematurely aborting large exports that legitimately run longer. Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
🧪 Test Results

Overall result: ✅ All tests passed
```ts
useEffect(() => {
  setLocalFilters(filters);
}, [filters]);
```
`useEffect` sync can silently discard in-progress filter edits

This effect replaces `localFilters` with the applied `filters` prop every time the parent re-renders with a new object reference, even if the filter values haven't changed. If the parent component creates a fresh `filters` object on each render (e.g. due to a context update or unrelated state change), any draft edits the user typed but hasn't yet applied will be silently wiped.
Consider guarding the update with a deep-equality check, or using a ref flag to skip the sync while the user has unsaved edits:

```ts
useEffect(() => {
  setLocalFilters(filters);
}, [filters]);
```

If `filters` is always referentially stable when its values haven't changed (e.g. managed via `useState` in the parent and only updated on `handleApply`), this is safe as-is, but a comment explaining that assumption would help future readers.
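A value-based guard could look like the following sketch; `sameFilters` is a hypothetical helper and assumes flat, primitive-valued filters:

```typescript
type FilterValues = Record<string, string | number | undefined>;

// Shallow comparison over the union of keys; treats a missing key and an
// explicit undefined as equal, which is usually what filter state wants.
function sameFilters(a: FilterValues, b: FilterValues): boolean {
  for (const key of Object.keys({ ...a, ...b })) {
    if (a[key] !== b[key]) return false;
  }
  return true;
}

// Sketch of the guarded sync effect (requires React; shown here as a comment):
// useEffect(() => {
//   setLocalFilters((prev) => (sameFilters(prev, filters) ? prev : filters));
// }, [filters]);
```

The functional update keeps the previous object when the values match, so downstream memoisation keyed on `localFilters` identity is not invalidated.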
```ts
function getUsageLogsExportJob(
  session: UsageLogsSession,
  job: UsageLogsExportJobRecord | null,
  _jobId: string
): UsageLogsExportJobRecord | null {
  if (!job || job.ownerUserId !== session.user.id) {
    return null;
  }
```
Unused `_jobId` parameter in `getUsageLogsExportJob`

The `_jobId` parameter is passed at every call site but is never read inside the function body. If it was intended for future logging or auditing, a short comment would clarify the intent. Otherwise it can be removed to reduce noise.
```suggestion
function getUsageLogsExportJob(
  session: UsageLogsSession,
  job: UsageLogsExportJobRecord | null
): UsageLogsExportJobRecord | null {
```
(Both call sites pass the value positionally, so removing the parameter only requires dropping the third argument at each call.)
Summary
Hardens the usage logs CSV export feature with three key improvements: batched background export for large datasets, stricter CSV formula-injection escaping (closing the leading-whitespace bypass of the `=`, `+`, `-`, `@` prefix check), and exporting with the applied filters.

Problem
1. **Large exports could fail**: The previous implementation fetched all data in a single query with a high `pageSize` limit, which could time out or fail for large datasets.
2. **CSV formula injection bypass**: Values with leading whitespace before formula characters (e.g., `" =1+1"`, `"\t@SUM()"`) were not being escaped, leaving a security vulnerability.
3. **Filter mismatch**: Export used local (potentially unapplied) filter state instead of the currently applied filters shown in the UI.
Solution
- Batched export via `findUsageLogsBatch` (500 rows per batch)
- Job state stored in `RedisKVStore`
- Three new public actions: `startUsageLogsExport`, `getUsageLogsExportStatus`, `downloadUsageLogsExport`
- `escapeCsvField` now trim-starts before checking dangerous prefixes
- Export uses `sanitizeFilters(filters)` (applied filters) instead of `localFilters`

Changes
Core Changes
- `src/actions/usage-logs.ts` (+330/-67): New async export pipeline with Redis-backed job tracking
- `src/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx` (+117/-15): Export progress UI with polling

Supporting Changes
- `messages/*/dashboard.json` (+10/-0): i18n strings for `exportPreparing` and `exportProgress` (5 languages)
- `tests/unit/actions/usage-logs-export-retry-count.test.ts` (+190/-24): Tests for batched export and formula injection fix
- `tests/unit/dashboard-logs-export-progress-ui.test.tsx` (+225/-0): New UI tests for export progress component

Breaking Changes
None. The existing `exportUsageLogs` action is preserved and now delegates to the new batched export internally.

Test Plan
Description enhanced by Claude AI
Greptile Summary
This PR hardens the usage-logs CSV export with three targeted fixes: a Redis-backed batched export pipeline to handle large datasets without timeouts, a
`trimStart()`-based `escapeCsvField` fix that catches the leading-whitespace formula-injection bypass, and a UI polling loop (with a 10-minute deadline) that correctly uses the applied filters instead of unapplied draft state.

Key changes:
- `src/actions/usage-logs.ts`: New async export actions (`startUsageLogsExport`, `getUsageLogsExportStatus`, `downloadUsageLogsExport`) with cursor-based batching (500 rows/batch), Redis TTL-backed job state, and owner-scoped access control. The existing `exportUsageLogs` action is preserved and now delegates to the same `buildUsageLogsExportCsv` helper.
- `escapeCsvField`: Removes `\t`/`\r` from the dangerous-prefix list (they were never formula injection vectors on their own), adds `trimStart()` before the prefix check to catch the whitespace bypass, and adds `\r` to the quoting condition.
- `usage-logs-filters.tsx`: Adds a `runId` ref for cancellation safety, a 10-minute deadline in the polling loop, a new `downloadCsv` helper, and a `useEffect` that syncs `localFilters` with the applied `filters` prop.
- `dashboard.json` files receive the two new i18n keys (`exportPreparing`, `exportProgress`).
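The cursor-based batching mentioned above can be sketched generically. `batchRows` and its cursor shape are hypothetical stand-ins; the real `findUsageLogsBatch` signature may differ:

```typescript
type Batch<T, C> = { rows: T[]; nextCursor: C | null };

// Stream rows in fixed-size batches so peak memory stays at one batch
// while the caller appends to the CSV and reports progress.
async function* batchRows<T, C>(
  fetchBatch: (cursor: C | null, limit: number) => Promise<Batch<T, C>>,
  batchSize = 500
): AsyncGenerator<T[], void, void> {
  let cursor: C | null = null;
  for (;;) {
    const { rows, nextCursor } = await fetchBatch(cursor, batchSize);
    if (rows.length > 0) yield rows;
    if (nextCursor === null) return; // no more pages
    cursor = nextCursor;
  }
}
```

Cursor pagination avoids the growing-offset cost of `page`/`pageSize` scans, which is why batched export scales to large datasets where the single-query approach timed out.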
setTimeoutfire-and-forget is safe only on a persistent Bun/Node server; the added code comment acknowledges this, which is sufficient for the stated deployment target."未登录"login-check errors are still hardcoded Chinese strings rather than i18n keys; the new actions (startUsageLogsExport,getUsageLogsExportStatus,downloadUsageLogsExport) inherit this pattern.Confidence Score: 4/5
src/actions/usage-logs.tsandsrc/app/[locale]/dashboard/logs/_components/usage-logs-filters.tsx.Important Files Changed
Sequence Diagram
sequenceDiagram participant UI as UsageLogsFilters (Client) participant SA as Server Actions participant Redis participant DB UI->>SA: startUsageLogsExport(filters) SA->>Redis: set job {status:"queued"} SA-->>UI: { jobId } SA-)SA: setTimeout → runUsageLogsExportJob Note over SA,DB: Background export job SA->>Redis: set job {status:"running"} loop Batch loop (500 rows/batch) SA->>DB: findUsageLogsBatch(cursor) DB-->>SA: batch rows SA->>Redis: set job {processedRows, progressPercent} end SA->>Redis: set CSV string (15 min TTL) SA->>Redis: set job {status:"completed"} loop Polling (800 ms interval, 10 min deadline) UI->>SA: getUsageLogsExportStatus(jobId) SA->>Redis: get job record Redis-->>SA: job status SA-->>UI: status + progress UI->>UI: update progress bar end UI->>SA: downloadUsageLogsExport(jobId) SA->>Redis: get CSV string Redis-->>SA: csv content SA-->>UI: csv string UI->>UI: trigger file downloadComments Outside Diff (1)
src/actions/usage-logs.ts, line 311-329 (link)exportUsageLogsis now unboundedThe preserved synchronous
exportUsageLogsaction previously capped the query atpageSize: 10000, keeping server action execution time bounded. It now delegates tobuildUsageLogsExportCsv, which iterates all rows via cursor pagination with no upper limit.For any moderately large dataset this action will block in the server action context for an extended and unpredictable amount of time — until all batches are fetched, the full CSV is assembled in memory, and the whole string is returned over the wire. On a platform with a function timeout (e.g. 30 s on Vercel), the call silently times out with no useful feedback; on the self-hosted server it ties up a request handler thread for as long as the export takes.
Because the UI has fully migrated to the async pipeline (
startUsageLogsExport→ polling →downloadUsageLogsExport), consider one of:exportUsageLogsif it has no other callers.pageSize: 10000) so it stays fast and bounded.Prompt To Fix With AI
Prompt To Fix All With AI
Reviews (3): Last reviewed commit: "fix: increase export polling timeout fro..." | Re-trigger Greptile