
refactor(sdk): drop async on ingest verbs that never yield#419

Merged
willwashburn merged 5 commits into main from claude/submit-pr-open-issue-5XKVx
May 12, 2026

Conversation

@willwashburn
Member

Closes #334.

Summary

  • Drops async fn from every ingest verb (ingest_all,
    ingest_claude_projects, ingest_codex_sessions,
    ingest_opencode_sessions, ingest_claude_session,
    reingest_missing_content, LedgerHandle::ingest, free ingest). The
    bodies are sync filesystem walks and rusqlite writes; nothing awaits a
    yield to the runtime, so the async annotation was a tax that forced
    callers to build a runtime just to block_on it.
  • Deletes run_ingest_tick (a one-line wrapper over ingest().await).
  • Removes the dead "belt-and-braces" in_flight.lock().await in
    WatchController::stop — awaiting the spawned task handle already
    covers any in-flight tick because the runner holds the in_flight
    guard for the duration of the run.
  • Updates every caller: CLI presenters
    (commands/{ingest,summary,hotspots,state}.rs) drop their tokio
    runtimes and call the sync verb directly; burn ingest --watch keeps
    its runtime for the watch loop's FS-event driver, but the inner
    IngestFn body no longer .awaits ingest_all. The codex/opencode
    adapters now pass the sync SDK fn pointer directly (no per-tick
    Box::pin).
  • The sdk-node ingest binding wraps the sync verb in
    tokio::task::spawn_blocking so the napi tokio runtime stays
    responsive during long sweeps.

Direction A from the issue. Documented in the top-level lib.rs module
doc so the next reader sees "ingest is sync" up front.
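The before/after shape can be sketched with a std-only stand-in (`count_entries` is a hypothetical placeholder, not an SDK function; the real verbs walk session files and write rusqlite rows):

```rust
use std::fs;
use std::io;

// Hypothetical stand-in for an ingest verb: the body is a plain
// filesystem walk with no await points, so an `async fn` wrapper
// bought nothing except forcing callers to build a runtime.
fn count_entries(dir: &str) -> io::Result<usize> {
    Ok(fs::read_dir(dir)?.count())
}

fn main() -> io::Result<()> {
    // Before: callers had to `Runtime::new()?.block_on(...)` a future
    // that never yielded. After: a direct call.
    let n = count_entries(".")?;
    println!("{n} entries");
    Ok(())
}
```

The signature change is the whole refactor: `async fn f(...) -> T` becomes `fn f(...) -> T`, and the type finally matches the semantics.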

Test plan

  • cargo build --workspace
  • cargo test --workspace (652 sdk unit + 22 cli + integration + sdk-node — all green)
  • cargo clippy --workspace --all-targets (no new warnings)

https://claude.ai/code/session_01RAt76YyVgMYdij9zeqPUFB


Generated by Claude Code

Every ingest verb was `async fn` but the bodies were sync filesystem
walks and rusqlite writes — no `.await` actually reached the runtime.
Drop the `async` annotation so the type matches the semantics; callers
that need to run these from an async context can wrap them in
`tokio::task::spawn_blocking` (the sdk-node binding now does).
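A minimal std-only sketch of that offload pattern (`slow_sweep` is a made-up stand-in for a long sweep; in the real binding `tokio::task::spawn_blocking` plays the role `thread::spawn` plays here, handing the closure to the runtime's blocking pool and returning a handle the async caller `.await`s):

```rust
use std::thread;

// Made-up stand-in for a long synchronous sweep.
fn slow_sweep(n: u64) -> u64 {
    (0..n).sum()
}

fn main() {
    // Hand the sync work to a dedicated thread so the caller's event
    // loop stays responsive while the sweep runs.
    let handle = thread::spawn(|| slow_sweep(1_000));
    let result = handle.join().expect("sweep thread panicked");
    assert_eq!(result, 499_500);
}
```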

Also delete `run_ingest_tick` (a one-line wrapper over `ingest().await`)
and the dead `in_flight.lock().await` belt-and-braces in
`WatchController::stop` — the task handle await already covers any
in-flight tick.

CLI presenters that built a tokio runtime just to `block_on` ingest now
call the sync verb directly; only `burn ingest --watch` keeps its
runtime (the watch loop still spawns async tasks for the FS-event
driver).

https://claude.ai/code/session_01RAt76YyVgMYdij9zeqPUFB
@coderabbitai

coderabbitai Bot commented May 11, 2026

Review Change Stack
No actionable comments were generated in the recent review. 🎉

ℹ️ Recent review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro Plus

Run ID: 2c3f9622-4594-4c41-a867-b0c903cf5b78

📥 Commits

Reviewing files that changed from the base of the PR and between bf8004c and d18e185.

📒 Files selected for processing (1)
  • CHANGELOG.md
🚧 Files skipped from review as they are similar to previous changes (1)
  • CHANGELOG.md

📝 Walkthrough

This PR converts ingest entrypoints and related adapters from async to synchronous functions, updates CLI callers and harness adapters to call the sync APIs directly, adapts the Node N‑API binding to use spawn_blocking, converts tests to synchronous, and updates docs and public exports (removes run_ingest_tick).

Changes

Ingest Async-to-Sync Conversion

  • API Signatures & Contracts (crates/relayburn-sdk/src/ingest/ingest.rs,
    crates/relayburn-sdk/src/ingest/reingest.rs,
    crates/relayburn-cli/src/harnesses/pending_stamp.rs,
    crates/relayburn-sdk/src/ingest_verb.rs): Public ingest entrypoints
    (ingest_all, per-harness verbs, reingest_missing_content,
    LedgerHandle::ingest, free ingest) and SessionIngestor changed from
    async/future-returning to synchronous fn(...) -> anyhow::Result<...>;
    tests updated accordingly.
  • CLI callers & Harnesses (crates/relayburn-cli/src/commands/*,
    crates/relayburn-cli/src/harnesses/*): CLI commands (hotspots, ingest
    one-shot/watch/hook, state reingest, summary) stop building local
    Tokio runtimes and call ingest_all synchronously; Codex/OpenCode
    adapters pass SDK ingest functions directly; Claude fast-path no
    longer .awaits.
  • Tests (crates/relayburn-sdk/src/ingest/*,
    crates/relayburn-sdk/tests/*): Orchestration, gap-warning, reingest,
    and integration tests converted from #[tokio::test] async fn to
    synchronous #[test] fn, with .await removed at callsites.
  • Node Binding (crates/relayburn-sdk-node/src/lib.rs): N-API ingest
    binding now runs sdk::ingest(raw) on Tokio's blocking pool via
    tokio::task::spawn_blocking and converts spawn failures to
    napi::Error.
  • Docs & Exports (crates/relayburn-sdk/src/lib.rs,
    crates/relayburn-sdk/src/ingest.rs, CHANGELOG.md,
    crates/relayburn-sdk/src/ingest/watch_loop.rs): Crate docs now state
    SDK verbs are synchronous and direct async-context callers to
    tokio::task::spawn_blocking; run_ingest_tick removed from exports;
    doc example and changelog updated; watch_loop comment clarified.

Sequence Diagram

sequenceDiagram
  participant CLI as CLI
  participant SDK as relayburn_sdk::ingest_all
  participant Harness as HarnessAdapter
  participant DB as Ledger/DB
  participant Node as Node N-API

  CLI->>SDK: ingest_all(ledger, opts) [sync call]
  SDK->>Harness: ingest_* per-harness (sync)
  Harness->>DB: parse + rusqlite INSERTs (sync)
  Harness-->>SDK: IngestReport (immediate)
  SDK-->>CLI: IngestReport (no await)
  Node->>Node: spawn_blocking(sdk::ingest(raw)) -> result

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes


🐰 I dropped the awaits and hopped to the ground,
Sync calls hum steady without futures around.
Harnesses pass functions, bindings run blocked,
Tests grew simpler — no runtimes to mock.
Cheers from a rabbit, small and profound.

🚥 Pre-merge checks: ✅ 5 passed
  • Title check ✅ Passed: The PR title 'refactor(sdk): drop async on ingest verbs that never yield' clearly and specifically describes the main refactoring: converting async ingest functions to synchronous ones because they don't actually yield to the runtime.
  • Description check ✅ Passed: The PR description comprehensively explains the changes: dropping async from ingest verbs, deleting run_ingest_tick, updating callers, and wrapping in spawn_blocking for sdk-node. It references issue #334 and includes test results.
  • Linked Issues check ✅ Passed: The PR fully addresses issue #334's Direction A requirements: all ingest verbs converted from async fn to fn, run_ingest_tick deleted, callers updated to call sync verbs directly, and spawn_blocking used in sdk-node for async contexts.
  • Out of Scope Changes check ✅ Passed: All changes are directly scoped to issue #334: removing async annotations from ingest verbs, updating their callers, deleting run_ingest_tick, and related documentation updates. No unrelated refactoring or feature additions are present.
  • Docstring Coverage ✅ Passed: Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.



@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 070bd89973


Comment on lines 265 to 267:

    if let Some(handle) = self.handle.lock().await.take() {
        let _ = handle.await;
    }

P1 Badge Wait for manual in-flight tick before returning from stop

stop() now returns immediately once the background loop task exits, but it no longer waits on in_flight. This regresses the documented "await any in-flight tick" behavior when tick() is called concurrently (or just after stop is signaled): tick() can still acquire in_flight and run ingest even after the loop task has ended, so callers that tear down state right after stop().await can race an active write. The previous in_flight.lock().await barrier covered this case.


Member Author


Good catch — you're right. tick() doesn't check stopped, so awaiting the spawned task handle only covers the loop-driven path; a concurrent tick() call can still take in_flight after the background task exits. Restored the in_flight.lock().await barrier with a comment explaining why it isn't redundant (8cbebcf).


Generated by Claude Code

Codex review on #419 flagged that dropping the trailing
`in_flight.lock().await` regresses the "await any in-flight tick"
contract: the public `tick()` path doesn't check the `stopped` flag, so
a concurrent caller can grab `in_flight` and run an ingest after the
loop task has already exited. Awaiting the spawned task handle alone
covers the runner-driven path but not that one — callers that tear down
state right after `stop().await` could race an active write.

Put the trailing `in_flight.lock().await` back and document why it
isn't redundant.
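The restored barrier's guarantee can be sketched with std primitives (illustrative names, not the SDK's): a tick running outside the loop task holds the in_flight mutex, so the loop task's join cannot cover it; taking the lock once more in stop() cannot complete until that tick's write has landed.

```rust
use std::sync::{mpsc, Arc, Mutex};
use std::thread;
use std::time::Duration;

fn main() {
    let in_flight = Arc::new(Mutex::new(0u32));
    let (tx, rx) = mpsc::channel();

    // An "external tick": grabs in_flight and does a slow write. The
    // loop task's JoinHandle (not modeled here) would not cover it.
    let tick_lock = Arc::clone(&in_flight);
    let tick = thread::spawn(move || {
        let mut guard = tick_lock.lock().unwrap();
        tx.send(()).unwrap(); // signal: guard is held
        thread::sleep(Duration::from_millis(20)); // mid-write
        *guard += 1; // the write stop() must not race past
    });

    rx.recv().unwrap(); // the tick definitely holds in_flight now
    // stop()'s belt-and-braces barrier: this lock cannot be acquired
    // until the tick releases its guard, so the write is visible.
    assert_eq!(*in_flight.lock().unwrap(), 1);
    tick.join().unwrap();
}
```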

https://claude.ai/code/session_01RAt76YyVgMYdij9zeqPUFB

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (3)
crates/relayburn-sdk/src/ingest/watch_loop.rs (1)

255-267: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Keep stop() waiting on in_flight.

Awaiting the spawned loop only covers ticks driven by that task. A concurrent public tick().await can still hold in_flight after handle.await finishes, so stop() may return before ingest side effects land.

Suggested fix
     pub async fn stop(&self) {
         self.inner.stopped.store(true, Ordering::SeqCst);
         self.inner.stop_signal.notify_waiters();
         if let Some(handle) = self.handle.lock().await.take() {
             let _ = handle.await;
         }
+        let _guard = self.inner.in_flight.lock().await;
     }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@crates/relayburn-sdk/src/ingest/watch_loop.rs` around lines 255 - 267, The
stop() implementation must wait for any concurrently started tick() to finish by
waiting for the in_flight guard to be released; after taking and awaiting the
spawned task handle (the existing let _ = handle.await), add a wait that ensures
self.inner.in_flight is zero before returning. Implement this by polling or
awaiting the appropriate mechanism for in_flight (e.g. if in_flight is an Atomic
counter, loop checking self.inner.in_flight.load(Ordering::SeqCst) and await
tokio::task::yield_now() or a short tokio::time::sleep until it reaches 0; if
in_flight exposes a Notify/semaphore, call its async wait/permit acquisition
instead). Ensure you reference the same symbols: stop(), self.handle.await, and
self.inner.in_flight so stop() does not return until in_flight is cleared.
crates/relayburn-cli/src/harnesses/pending_stamp.rs (1)

159-176: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Offload per-tick ingest from the Tokio worker thread.

The sync ingestor(...) function runs inline inside the async closure without using spawn_blocking. Since this closure is called on every watch-loop tick and in after_exit, and it performs blocking operations (Ledger::open does SQLite I/O and DDL, plus the ingestor does filesystem walks and writes), a long tick will block the Tokio worker thread and delay timers and other scheduled tasks.

Move the sync ingest onto tokio::task::spawn_blocking:

Suggested fix
     let ingest_sessions: IngestSessionsFn = Arc::new(move |ledger_home| {
         Box::pin(async move {
-            // Per-tick ledger open mirrors the TS sibling's
-            // `withLock('ledger', …)` pattern. SQLite WAL keeps the open
-            // cheap (no DDL after first open). Use the typed ledger home
-            // so explicit `--ledger-path` runs keep manifest writes and
-            // resolution scoped to the same home the writer used.
-            let ledger_opts = match ledger_home.as_deref() {
-                Some(home) => LedgerOpenOptions::with_home(home),
-                None => LedgerOpenOptions::default(),
-            };
-            let mut handle = Ledger::open(ledger_opts)?;
-            let opts = RawIngestOptions {
-                ledger_home,
-                ..RawIngestOptions::default()
-            };
-            ingestor(handle.raw_mut(), &opts)
+            tokio::task::spawn_blocking(move || {
+                let ledger_opts = match ledger_home.as_deref() {
+                    Some(home) => LedgerOpenOptions::with_home(home),
+                    None => LedgerOpenOptions::default(),
+                };
+                let mut handle = Ledger::open(ledger_opts)?;
+                let opts = RawIngestOptions {
+                    ledger_home,
+                    ..RawIngestOptions::default()
+                };
+                ingestor(handle.raw_mut(), &opts)
+            })
+            .await
+            .map_err(|err| anyhow::anyhow!("pending-stamp ingest task failed: {err}"))?
         })
     });
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@crates/relayburn-cli/src/harnesses/pending_stamp.rs` around lines 159 - 176,
The ingest closure in the IngestSessionsFn (the Arc closure assigned to
ingest_sessions) runs blocking work inline — specifically Ledger::open and the
synchronous ingestor call — which can block the Tokio worker; wrap the blocking
portion in tokio::task::spawn_blocking and await its JoinHandle inside the
Box::pin(async move { ... }) so Ledger::open and ingestor(...) execute on a
blocking thread pool; preserve passing ledger_home into RawIngestOptions and
return the ingestor result from the spawned task, propagating any errors back
from the awaited spawn_blocking result.
crates/relayburn-cli/src/harnesses/claude.rs (1)

137-149: ⚠️ Potential issue | 🟠 Major | ⚡ Quick win

Run the Claude post-exit ingest on the blocking pool.

after_exit() is async, but ingest_claude_session is synchronous and performs substantial filesystem and SQLite work. Calling it directly will block the Tokio runtime thread. The SDK's own documentation notes this pattern, and the Node binding already uses tokio::task::spawn_blocking for the same reason.

Suggested fix
     async fn after_exit(&self, ctx: &PlanCtx, plan: &SpawnPlan) -> anyhow::Result<IngestReport> {
         let session_id = plan
             .session_id
             .as_ref()
             .ok_or_else(|| anyhow::anyhow!("claude adapter: plan must include sessionId"))?;
-        // Open a ledger handle scoped to the resolved RELAYBURN_HOME and
-        // run the per-session fast-path. The SDK encodes cwd → flattened
-        // dir name internally and persists a cursor at EOF so the next
-        // sweep skips the file.
-        let mut handle = Ledger::open(LedgerOpenOptions::default())?;
         let cwd_str = ctx.cwd.to_string_lossy().into_owned();
-        let opts = RawIngestOptions::default();
-        ingest_claude_session(handle.raw_mut(), &cwd_str, session_id, &opts)
+        let session_id = session_id.clone();
+        tokio::task::spawn_blocking(move || {
+            let mut handle = Ledger::open(LedgerOpenOptions::default())?;
+            let opts = RawIngestOptions::default();
+            ingest_claude_session(handle.raw_mut(), &cwd_str, &session_id, &opts)
+        })
+        .await
+        .map_err(|err| anyhow::anyhow!("claude ingest task failed: {err}"))?
     }
🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@crates/relayburn-cli/src/harnesses/claude.rs` around lines 137 - 149, The
synchronous ingest_claude_session call is blocking and must run on the blocking
pool; change after_exit() so it spawns a blocking task that does the sync work
(open the Ledger and call ingest_claude_session) instead of calling
ingest_claude_session directly on the async runtime. Concretely: after you
extract session_id and build cwd_str and opts, replace the direct call
ingest_claude_session(handle.raw_mut(), &cwd_str, session_id, &opts) with a
tokio::task::spawn_blocking move || { let mut handle =
Ledger::open(LedgerOpenOptions::default())?;
ingest_claude_session(handle.raw_mut(), &cwd_str, session_id, &opts) } and await
the JoinHandle, propagating any error into the anyhow::Result returned by
after_exit(); ensure the Ledger is opened inside the blocking closure (not on
the async thread) to avoid blocking and Send issues.
🧹 Nitpick comments (1)
CHANGELOG.md (1)

9-14: ⚡ Quick win

Shorten this changelog entry to the shipped impact.

This reads more like implementation backstory than release notes. A shorter bullet such as “relayburn-sdk: ingest APIs are now synchronous; async callers should run them in spawn_blocking” gets the user-visible change across without listing internals.

As per coding guidelines, "Changelog entries should be concise and impact-first... Drop issue/PR links, internal review notes, implementation backstory..."

🤖 Prompt for AI Agents
Verify each finding against current code. Fix only still-valid issues, skip the
rest with a brief reason, keep changes minimal, and validate.

In `@CHANGELOG.md` around lines 9 - 14, Shorten the CHANGELOG entry to a single
concise, impact-first bullet: state that the relayburn-sdk ingest APIs (e.g.,
ingest_all, ingest_claude_session, LedgerHandle::ingest, etc.) are now
synchronous and advise async callers to run them via
tokio::task::spawn_blocking; remove implementation details about filesystem
walks, rusqlite, and internal backstory so the note only conveys the
user-visible change and migration guidance.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Organization UI

Review profile: CHILL

Plan: Pro Plus

Run ID: e0a3989c-3cd8-458f-9968-9060107bb4b5

📥 Commits

Reviewing files that changed from the base of the PR and between 632a0b3 and 070bd89.

📒 Files selected for processing (19)
  • CHANGELOG.md
  • crates/relayburn-cli/src/commands/hotspots.rs
  • crates/relayburn-cli/src/commands/ingest.rs
  • crates/relayburn-cli/src/commands/state.rs
  • crates/relayburn-cli/src/commands/summary.rs
  • crates/relayburn-cli/src/harnesses/claude.rs
  • crates/relayburn-cli/src/harnesses/codex.rs
  • crates/relayburn-cli/src/harnesses/opencode.rs
  • crates/relayburn-cli/src/harnesses/pending_stamp.rs
  • crates/relayburn-sdk-node/src/lib.rs
  • crates/relayburn-sdk/src/ingest.rs
  • crates/relayburn-sdk/src/ingest/gap_warning_tests.rs
  • crates/relayburn-sdk/src/ingest/ingest.rs
  • crates/relayburn-sdk/src/ingest/orchestration_tests.rs
  • crates/relayburn-sdk/src/ingest/reingest.rs
  • crates/relayburn-sdk/src/ingest/watch_loop.rs
  • crates/relayburn-sdk/src/ingest_verb.rs
  • crates/relayburn-sdk/src/lib.rs
  • crates/relayburn-sdk/tests/integration.rs

Contributor

@devin-ai-integration devin-ai-integration Bot left a comment


Devin Review found 1 potential issue.

View 5 additional findings in Devin Review.


Comment on lines +262 to 268:

    /// task has exited. Waiting on the guard here guarantees no tick is
    /// mid-write when `stop` returns, so callers can tear down state
    /// safely.
    pub async fn stop(&self) {
        self.inner.stopped.store(true, Ordering::SeqCst);
        self.inner.stop_signal.notify_waiters();
        if let Some(handle) = self.handle.lock().await.take() {
Contributor


🟡 WatchController::stop() no longer drains external tick() calls after removing belt-and-braces lock

The removed let _ = self.inner.in_flight.lock().await; line at the end of stop() was the only mechanism that guaranteed external tick() calls (i.e., those not initiated by the spawned periodic task) had completed before stop() returned. The new doc comment claims "Awaiting the spawned task handle covers any in-flight tick" but this is only true for ticks started by the spawned task's periodic loop — WatchController::tick() runs run_tick_or_join directly in the caller's task context, independent of the JoinHandle.

Race scenario: concurrent tick() + stop()
  1. External caller does ctrl.tick() → acquires in_flight lock → runs sync ingest
  2. Meanwhile ctrl.stop() is called → sets stopped, notifies stop_signal
  3. The spawned periodic loop breaks (it wasn't holding the lock)
  4. handle.await resolves (spawned task exited)
  5. stop() returns
  6. But the tick() from step 1 is still in-flight

The old code's let _ = self.inner.in_flight.lock().await would wait at step 5 until the tick completed. No current caller exercises this path (CLI and harness driver only call stop(), never concurrent tick()), but the public API contract ("stop the periodic task and await any in-flight tick") is weakened.

Suggested change (restore the trailing barrier):

    pub async fn stop(&self) {
        self.inner.stopped.store(true, Ordering::SeqCst);
        self.inner.stop_signal.notify_waiters();
        if let Some(handle) = self.handle.lock().await.take() {
            let _ = handle.await;
        }
        // Belt-and-braces: even if the handle was already taken (idempotent
        // calls), make sure no tick is mid-flight before returning.
        let _ = self.inner.in_flight.lock().await;
    }

CodeRabbit review feedback on #419: per the project's CLAUDE.md
guidelines, changelog entries should be concise and impact-first.

https://claude.ai/code/session_01RAt76YyVgMYdij9zeqPUFB
Member Author

Thanks for the review. Round-up on the four threads:

  • watch_loop.rs stop() waiting on in_flight — agreed, restored in 8cbebcf with a comment explaining the public-tick() race it covers.
  • pending_stamp.rs / claude.rs spawn_blocking for per-tick ingest — skipping. Issue #334 (Rust async hygiene: ingest verbs are async fn but never await actual yields) explicitly chose direction A and called out that direction B (yielding) is only worthwhile "if the watch loop ever needs to be cancellable mid-tick — it isn't today". The harness watch loops had the same worker-blocking shape under the prior async fn wrapping (the bodies didn't .await anything either), so this isn't a regression. Happy to revisit as a follow-up if mid-tick responsiveness becomes a goal.
  • CHANGELOG verbosity — agreed, trimmed in 5948798.

Devin's comment on the same stop() line targets the pre-fix commit and proposes exactly what 8cbebcf restored, so no separate action there.


Generated by Claude Code

@willwashburn
Member Author

@copilot resolve the merge conflicts in this pull request

Contributor

Copilot AI commented May 11, 2026

@copilot resolve the merge conflicts in this pull request

Resolved: I merged origin/main into this PR branch and fixed the conflict in CHANGELOG.md. The conflict-resolution commit on this branch is bf8004c.

@willwashburn
Member Author

@copilot resolve the merge conflicts in this pull request

Contributor

Copilot AI commented May 12, 2026

@copilot resolve the merge conflicts in this pull request

Resolved. I merged the latest origin/main into this PR branch and fixed the CHANGELOG.md conflict; the conflict-resolution commit is 4d057bc.

@willwashburn willwashburn merged commit d7c9f61 into main May 12, 2026
11 checks passed
@willwashburn willwashburn deleted the claude/submit-pr-open-issue-5XKVx branch May 12, 2026 01:46
willwashburn pushed a commit that referenced this pull request May 12, 2026
Resolves conflict with #419 (sync ingest verbs). The new fast-path is
now synchronous like the rest of the ingest surface; the CLI hook
path matches main's non-blocking call sites. Tests converted from
#[tokio::test] back to #[test].

https://claude.ai/code/session_011ubB69Zxijqb1BsYVYL9iQ
willwashburn pushed a commit that referenced this pull request May 12, 2026
Main released 2.8.5 (#419 + perf/parser refactors) which moved its
prior [Unreleased] block into the release section. Reattach this
branch's items (the new ingest_claude_transcript_path verb and the
CLI hook/--quiet behavior) under [Unreleased] above 2.8.5.

https://claude.ai/code/session_011ubB69Zxijqb1BsYVYL9iQ


Development

Successfully merging this pull request may close these issues.

Rust async hygiene: ingest verbs are async fn but never await actual yields
