Add webhook receiver, ingestion workers, and server wiring#5
Conversation
Set up continuous integration with four parallel jobs:
- Lint (ESLint + Prettier format check)
- Type check (TypeScript strict)
- Build (full Turborepo build)
- Test (with PostgreSQL pgvector/pg17 and Redis service containers)

Uses pnpm 9, Node.js 22, and concurrency groups with cancel-in-progress to avoid redundant runs on rapid pushes.
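The concurrency-group behavior can be sketched in workflow YAML; the group key, triggers, and layout below are illustrative assumptions, not the repository's actual workflow file:

```yaml
# Illustrative sketch only; the real workflow's job layout and group key may differ.
name: CI
on:
  pull_request:
  merge_group:

# Cancel an in-flight run for the same ref when a newer push arrives,
# avoiding redundant CI on rapid successive pushes.
concurrency:
  group: ci-${{ github.ref }}
  cancel-in-progress: true
```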
Complete Drizzle ORM setup with pgvector integration:
- Enable pgvector extension and add vector(3072) embedding columns to pull_requests, issues, and vision_chunks tables
- Add HNSW indexes on all embedding vectors for ANN search
- Add connection pooling with postgres-js (max 10, idle timeout 20s)
- Add programmatic migration runner (tsx src/migrate.ts)
- Add drizzle-kit config for schema generation and studio
- Add partial index on pull_requests for open PR staleness queries
- Update db package exports to include connection utilities
- Fix core package.json exports (add default entry)
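As a rough illustration of two of the migration pieces mentioned above (enabling the extension, and the partial index for open-PR staleness queries); column names such as `updated_at` and `state` are assumptions, not the actual drizzle-kit output:

```sql
-- Sketch only: the real migration is generated by drizzle-kit; names assumed.
CREATE EXTENSION IF NOT EXISTS vector;

-- Partial index covering only open PRs, so staleness queries skip closed rows.
CREATE INDEX IF NOT EXISTS pull_requests_open_staleness_idx
  ON pull_requests (updated_at)
  WHERE state = 'open';
```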
Implement Octokit-based GitHub App client with rate limit optimization: - GitHub App authentication via app ID + private key - Installation-scoped access token management - ETag-based conditional request caching (returns cached data on 304) - Paginate helper for fetching all pages of paginated endpoints - Singleton pattern for GitHub App instance
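A minimal sketch of ETag-based conditional caching with bounded LRU eviction; the names (`EtagCache`, `fetchWithEtag`) and the injected fetch shape are illustrative, not the PR's actual client API:

```typescript
interface CachedResponse {
  etag: string;
  data: unknown;
}

type ConditionalFetch = (
  url: string,
  headers: Record<string, string>,
) => Promise<{ status: number; etag?: string; data?: unknown }>;

class EtagCache {
  private store = new Map<string, CachedResponse>();

  constructor(private maxEntries = 1000) {}

  get(url: string): CachedResponse | undefined {
    const hit = this.store.get(url);
    if (hit) {
      // Refresh recency: Map preserves insertion order, so re-inserting
      // moves the entry to the back of the eviction queue.
      this.store.delete(url);
      this.store.set(url, hit);
    }
    return hit;
  }

  set(url: string, entry: CachedResponse): void {
    this.store.delete(url);
    this.store.set(url, entry);
    if (this.store.size > this.maxEntries) {
      // Evict the least recently used entry (first key in insertion order).
      const oldest = this.store.keys().next().value as string;
      this.store.delete(oldest);
    }
  }
}

async function fetchWithEtag(
  fetchFn: ConditionalFetch,
  cache: EtagCache,
  url: string,
): Promise<unknown> {
  const cached = cache.get(url);
  const headers = cached ? { "If-None-Match": cached.etag } : {};
  const res = await fetchFn(url, headers);
  if (res.status === 304 && cached) {
    // GitHub does not count 304 conditional responses against the rate limit.
    return cached.data;
  }
  if (res.etag !== undefined) {
    cache.set(url, { etag: res.etag, data: res.data });
  }
  return res.data;
}
```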
Set up BullMQ-based async job processing:
- Redis connection management with singleton pattern
- Four named queues: webhook-events, pr-ingestion, issue-ingestion, batch-sync
- Default job options: 3 retries, exponential backoff, 1h completed retention
- Batch sync queue: 5 retries, 5s initial backoff, 24h completed retention
- Failed job retention: 7 days across all queues
- Graceful queue cleanup on shutdown
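As a rough sketch of the delay schedule these retry options produce, assuming a plain doubling exponential backoff (`initialDelay * 2^(attempt - 1)`; BullMQ's built-in strategy may round or compute slightly differently):

```typescript
// Illustrative helper: computes the delay before each retry attempt under a
// doubling exponential backoff. Not BullMQ code; the formula is an assumption.
function backoffSchedule(initialDelayMs: number, attempts: number): number[] {
  const delays: number[] = [];
  for (let attempt = 1; attempt <= attempts; attempt++) {
    delays.push(initialDelayMs * 2 ** (attempt - 1));
  }
  return delays;
}

// Batch sync queue (5 retries, 5s initial backoff) would retry roughly at:
// 5s, 10s, 20s, 40s, 80s after each successive failure.
```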
Complete the PR/Issue ingestion pipeline:

Webhook Receiver:
- POST /api/webhooks/github with HMAC-SHA256 signature validation
- Event routing: pull_request/review/comment → pr-ingestion queue, issues → issue-ingestion queue, installation.created → batch-sync
- Raw body parsing via fastify-raw-body for signature verification
- Timing-safe comparison to prevent timing attacks

Ingestion Workers:
- PR worker: upserts repository + pull request, fetches file list and diff stats via GitHub API, sets analyzedAt=null for re-analysis
- Issue worker: upserts repository + issue, extracts labels, skips PRs received as issue events
- Batch sync worker: on new installation, paginates all repos, enqueues all open PRs and issues for individual processing

Server Setup:
- Fastify with CORS, raw body parsing, health check endpoint
- Worker lifecycle management (start on boot, stop on shutdown)
- Graceful shutdown: SIGTERM/SIGINT → stop workers → close queues → close Redis → close server
- Update API package dependencies (add octokit, fastify-raw-body, etc.)
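The HMAC-SHA256 validation with timing-safe comparison can be sketched with Node's crypto primitives; the function name and shape are illustrative, assuming the raw body string is available (e.g. via fastify-raw-body):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Sketch of GitHub webhook signature verification. GitHub sends the signature
// in the x-hub-signature-256 header as "sha256=<hex digest>".
function verifySignature(
  secret: string,
  rawBody: string,
  signatureHeader: string | undefined,
): boolean {
  if (!signatureHeader) return false;
  const expected =
    "sha256=" + createHmac("sha256", secret).update(rawBody).digest("hex");
  const a = Buffer.from(expected);
  const b = Buffer.from(signatureHeader);
  // timingSafeEqual throws on length mismatch, so check length first;
  // the length of a valid signature is not secret, so this early return
  // does not leak useful timing information.
  if (a.length !== b.length) return false;
  return timingSafeEqual(a, b);
}
```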
Pull request overview
This PR implements the Phase 1 infrastructure for the PReview GitHub App, establishing webhook ingestion, job queue processing, and server lifecycle management. It provides the foundation for ingesting pull requests and issues from GitHub, with proper signature verification, event routing, and batch synchronization capabilities.
Changes:
- Added webhook receiver with HMAC-SHA256 signature verification and event-to-queue routing
- Implemented three BullMQ workers: PR ingestion (with GitHub API file fetching), issue ingestion (with PR filtering), and batch sync (with pagination for new installations)
- Set up Fastify server with CORS, raw body parsing, health endpoint, worker lifecycle management, and graceful shutdown handling
Reviewed changes
Copilot reviewed 9 out of 11 changed files in this pull request and generated 18 comments.
| File | Description |
|---|---|
| pnpm-lock.yaml | Added dependencies for @octokit/auth-app, fastify-raw-body, and tsx, plus their transitive dependencies |
| packages/api/package.json | Updated dependencies to include @octokit/auth-app, fastify-raw-body, and @octokit/webhooks; added tsx as devDependency |
| packages/api/src/routes/webhooks.ts | Webhook receiver endpoint with signature validation, event routing, and job enqueueing |
| packages/api/src/routes/health.ts | Simple health check endpoint returning service status |
| packages/api/src/workers/pr-ingestion.worker.ts | PR ingestion worker that upserts repos/PRs and fetches file lists from GitHub API |
| packages/api/src/workers/issue-ingestion.worker.ts | Issue ingestion worker that upserts repos/issues and filters out PRs |
| packages/api/src/workers/batch-sync.worker.ts | Batch sync worker that paginates all repos/PRs/issues for new installations |
| packages/api/src/workers/index.ts | Worker lifecycle management with startup, event handling, and shutdown |
| packages/api/src/server.ts | Fastify server setup with plugins, routes, workers, and graceful shutdown handlers |
| packages/dashboard/tsconfig.json | Formatting changes and Next.js-specific configuration additions |
| packages/dashboard/next-env.d.ts | Auto-generated Next.js TypeScript definitions file |
Files not reviewed (1)
- pnpm-lock.yaml: Language not supported
```ts
let filesChanged: string[] = [];
let diffStats = { additions: 0, deletions: 0, changedFiles: 0 };

const [owner = "", repoName = ""] = ghRepo.full_name.split("/");
```
Missing validation for empty repository full_name. If ghRepo.full_name doesn't contain a slash or is malformed, the split will result in an array without the expected structure, potentially causing issues with the GitHub API calls. Consider validating the format or using a more robust parsing approach with error handling.
Suggested change:
```diff
- const [owner = "", repoName = ""] = ghRepo.full_name.split("/");
+ const fullName = ghRepo.full_name?.trim();
+ if (!fullName || !fullName.includes("/")) {
+   job.log(`Invalid repository full_name format: "${ghRepo.full_name}"`);
+   return;
+ }
+ const [owner, repoName] = fullName.split("/", 2);
```
The webhook payload is signed and verified with HMAC-SHA256 before reaching this code — the data is trusted from GitHub. GitHub always provides full_name in owner/repo format; this field has a well-defined schema.
If it were somehow malformed, the subsequent GitHub API call would fail with a 404 and the job would retry (3 attempts with exponential backoff). The current defaults (owner = "", repoName = "") provide an implicit safety net. Adding explicit validation for every field GitHub sends would be excessive defensive coding at an internal boundary.
```ts
}

function extractLabels(labels: GitHubIssue["labels"]): string[] {
  return labels.map((l) => (typeof l === "string" ? l : l.name));
```
The labels array contains either string or object with name property, but the extractLabels function assumes all elements conform to this. Consider adding runtime validation to handle unexpected label formats gracefully, as webhook payloads can vary or be modified by GitHub over time.
Suggested change:
```diff
- return labels.map((l) => (typeof l === "string" ? l : l.name));
+ if (!Array.isArray(labels)) {
+   return [];
+ }
+ return labels
+   .map((l) => {
+     if (typeof l === "string") {
+       return l;
+     }
+     if (l && typeof l === "object" && typeof (l as { name?: unknown }).name === "string") {
+       return (l as { name: string }).name;
+     }
+     return null;
+   })
+   .filter((label): label is string => typeof label === "string");
```
The TypeScript type already correctly models GitHub's API response — labels are either {name: string} objects or strings. The function handles both cases. Adding null-checking and filtering for formats GitHub doesn't produce adds complexity at an internal boundary where the data source is trusted (signed webhook payload from GitHub).
```ts
await server.register(cors, {
  origin: process.env["CORS_ORIGIN"] ?? true,
```
The CORS origin is set to process.env["CORS_ORIGIN"] ?? true. When true, CORS allows all origins which could be a security risk in production. Consider defaulting to a more restrictive value or at least documenting that CORS_ORIGIN should be explicitly configured in production environments.
Suggested change:
```diff
- await server.register(cors, {
-   origin: process.env["CORS_ORIGIN"] ?? true,
+ const corsOrigin = process.env["CORS_ORIGIN"];
+ await server.register(cors, {
+   origin: corsOrigin !== undefined ? corsOrigin : false,
```
The CORS origin is controlled via the CORS_ORIGIN env var — set it to the dashboard URL in production. The true default is intentional for local development where the dashboard runs on a different port.
The webhook endpoint (POST /api/webhooks/github) is server-to-server (GitHub → our API) and isn't affected by CORS at all — browsers don't make these requests. CORS only matters for the dashboard's browser-based API calls, and permissive defaults in dev are the standard Fastify pattern.
```ts
for (const worker of workers) {
  worker.on("completed", (job) => {
    console.log(`Job ${job.id} completed on ${worker.name}`);
  });

  worker.on("failed", (job, err) => {
    console.error(
      `Job ${job?.id} failed on ${worker.name}: ${err.message}`,
    );
  });
}

console.log(`Started ${workers.length} workers`);
```
The worker event handlers use console.log and console.error directly instead of using the Fastify logger or a consistent logging strategy. Since the server has a configured logger, consider passing a logger instance to the workers or using a centralized logging utility for consistency across the application.
BullMQ workers run outside the Fastify request lifecycle — they don't have access to the Fastify logger instance. console.log/console.error is the standard pattern for BullMQ worker event handlers. Introducing a shared logger abstraction is a valid Phase 2+ improvement but would be premature at this point.
```ts
const job = await queue.add(`${event}.${action ?? "unknown"}`, {
  event,
  action,
  deliveryId,
```
The deliveryId header is used without validation that it's a string. If the header is missing or is an array, it will be assigned as undefined or an array. Consider adding validation to ensure deliveryId is a string, or handle the undefined case explicitly in the job data.
GitHub always sends the x-github-delivery header with every webhook. Even if it were undefined, deliveryId is only used as metadata in the job payload for tracing/debugging — it's not used for any logic, routing, or uniqueness constraints. No functional impact.
```ts
let totalIssues = 0;

for (const ghRepo of installationRepos) {
  const [owner, repo] = ghRepo.full_name.split("/") as [string, string];
```
The same issue exists here with ghRepo.full_name.split("/"): if the full_name is malformed or doesn't contain a slash, the destructuring will result in potentially empty strings for owner or repo. This could lead to invalid GitHub API calls throughout the batch sync process. Consider adding validation or error handling for malformed repository names.
Suggested change:
```diff
- const [owner, repo] = ghRepo.full_name.split("/") as [string, string];
+ const parts = ghRepo.full_name.split("/");
+ if (parts.length !== 2 || !parts[0] || !parts[1]) {
+   job.log(`Skipping repository with invalid full_name: "${ghRepo.full_name}"`);
+   continue;
+ }
+ const [owner, repo] = parts as [string, string];
```
Same reasoning as the PR ingestion worker — full_name comes from the GitHub API (fetched via GET /installation/repositories), not from user input. GitHub always returns it in owner/repo format.
```ts
for (const ghRepo of installationRepos) {
  const [owner, repo] = ghRepo.full_name.split("/") as [string, string];

  // Upsert repository
  await db
    .insert(repositories)
    .values({
      githubId: ghRepo.id,
      fullName: ghRepo.full_name,
    })
    .onConflictDoUpdate({
      target: repositories.githubId,
      set: { fullName: ghRepo.full_name, updatedAt: new Date() },
    });

  // Fetch all open PRs
  job.log(`Syncing open PRs for ${ghRepo.full_name}...`);
  const openPRs = await paginateAll<GitHubPR>(
    octokit,
    "GET /repos/{owner}/{repo}/pulls",
    { owner, repo, state: "open" },
  );

  for (const pr of openPRs) {
    await prQueue.add(`batch-sync.pr.${ghRepo.full_name}#${pr.number}`, {
      event: "pull_request",
      action: "opened",
      deliveryId: `batch-sync-${job.id}`,
      payload: {
        pull_request: pr,
        repository: { id: ghRepo.id, full_name: ghRepo.full_name },
        installation: { id: installation.id },
      },
      receivedAt: new Date().toISOString(),
    });
  }

  totalPRs += openPRs.length;
  job.log(`Enqueued ${openPRs.length} PRs for ${ghRepo.full_name}`);

  // Fetch all open issues (excluding PRs)
  job.log(`Syncing open issues for ${ghRepo.full_name}...`);
  const openIssues = await paginateAll<GitHubIssue>(
    octokit,
    "GET /repos/{owner}/{repo}/issues",
    { owner, repo, state: "open" },
  );

  const issuesOnly = openIssues.filter((i) => !i.pull_request);

  for (const issue of issuesOnly) {
    await issueQueue.add(
      `batch-sync.issue.${ghRepo.full_name}#${issue.number}`,
      {
        event: "issues",
        action: "opened",
        deliveryId: `batch-sync-${job.id}`,
        payload: {
          issue,
          repository: { id: ghRepo.id, full_name: ghRepo.full_name },
          installation: { id: installation.id },
        },
        receivedAt: new Date().toISOString(),
      },
    );
  }

  totalIssues += issuesOnly.length;
  job.log(`Enqueued ${issuesOnly.length} issues for ${ghRepo.full_name}`);

  await job.updateProgress({
    repos: installationRepos.length,
    currentRepo: ghRepo.full_name,
    totalPRs,
    totalIssues,
  });
}
```
If an error occurs while processing one repository in the batch sync, the entire job will fail and potentially stop processing the remaining repositories. Consider wrapping the repository processing loop in error handling to log failures for individual repos while continuing to process others, ensuring more resilient batch syncs.
Valid — one failing repo would abort the entire sync across all repositories. Wrapping the per-repo loop body in try-catch so failures are logged and skipped without blocking other repos.
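A minimal sketch of that per-repo isolation, with the sync step and logger injected; the names (`syncAllRepos`, `RepoRef`) are illustrative, not the worker's actual code:

```typescript
interface RepoRef {
  full_name: string;
}

// One repo's failure is logged and skipped rather than aborting the batch.
async function syncAllRepos(
  repos: RepoRef[],
  syncRepo: (repo: RepoRef) => Promise<void>,
  log: (msg: string) => void,
): Promise<{ synced: number; failed: string[] }> {
  const failed: string[] = [];
  let synced = 0;
  for (const repo of repos) {
    try {
      await syncRepo(repo);
      synced++;
    } catch (err) {
      failed.push(repo.full_name);
      log(`Sync failed for ${repo.full_name}: ${(err as Error).message}`);
    }
  }
  return { synced, failed };
}
```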
```ts
  return reply.code(401).send({ error: "Invalid signature" });
}

const payload = request.body as Record<string, unknown>;
```
The payload is cast to Record<string, unknown> after signature verification, but request.body is used. This assumes that Fastify has already parsed the JSON body. While this works with the current setup, it's inconsistent with using rawBody for signature verification. Consider documenting this behavior or parsing the body explicitly from bodyString to ensure consistency.
Suggested change:
```diff
- const payload = request.body as Record<string, unknown>;
+ let payload: Record<string, unknown>;
+ try {
+   const parsed = JSON.parse(bodyString);
+   if (!parsed || typeof parsed !== "object" || Array.isArray(parsed)) {
+     return reply.code(400).send({ error: "Invalid JSON payload" });
+   }
+   payload = parsed as Record<string, unknown>;
+ } catch {
+   return reply.code(400).send({ error: "Invalid JSON payload" });
+ }
```
Using Fastify's auto-parsed `request.body` alongside `request.rawBody` for signature verification is the documented fastify-raw-body pattern. The plugin captures the raw body via a preParsing hook before Fastify parses the JSON.
- `rawBody`: unparsed string for HMAC signature verification
- `request.body`: parsed JSON for application logic
Manually parsing `bodyString` would be redundant since Fastify already did the work. This is the standard webhook verification approach in Fastify (same pattern used for Stripe, Slack, etc.).
packages/api/src/server.ts (Outdated)
```ts
process.on("SIGTERM", () => void shutdown());
process.on("SIGINT", () => void shutdown());
```
The shutdown handlers use void operator to suppress the returned promise, but this means errors during shutdown won't be caught or logged. Consider adding error handling to the shutdown function calls or using .catch() to log any shutdown errors for better debugging of graceful shutdown failures.
Suggested change:
```diff
- process.on("SIGTERM", () => void shutdown());
- process.on("SIGINT", () => void shutdown());
+ process.on("SIGTERM", () => {
+   shutdown().catch((err) => {
+     server.log.error({ err }, "Error during shutdown on SIGTERM");
+   });
+ });
+ process.on("SIGINT", () => {
+   shutdown().catch((err) => {
+     server.log.error({ err }, "Error during shutdown on SIGINT");
+   });
+ });
```
Agreed — void shutdown() swallows errors silently. Adding .catch() with error logging so shutdown failures are visible.
```ts
for (const pr of openPRs) {
  await prQueue.add(`batch-sync.pr.${ghRepo.full_name}#${pr.number}`, {
    event: "pull_request",
    action: "opened",
    deliveryId: `batch-sync-${job.id}`,
    payload: {
      pull_request: pr,
      repository: { id: ghRepo.id, full_name: ghRepo.full_name },
      installation: { id: installation.id },
    },
    receivedAt: new Date().toISOString(),
  });
}

totalPRs += openPRs.length;
job.log(`Enqueued ${openPRs.length} PRs for ${ghRepo.full_name}`);

// Fetch all open issues (excluding PRs)
job.log(`Syncing open issues for ${ghRepo.full_name}...`);
const openIssues = await paginateAll<GitHubIssue>(
  octokit,
  "GET /repos/{owner}/{repo}/issues",
  { owner, repo, state: "open" },
);

const issuesOnly = openIssues.filter((i) => !i.pull_request);

for (const issue of issuesOnly) {
  await issueQueue.add(
    `batch-sync.issue.${ghRepo.full_name}#${issue.number}`,
    {
      event: "issues",
      action: "opened",
      deliveryId: `batch-sync-${job.id}`,
      payload: {
        issue,
        repository: { id: ghRepo.id, full_name: ghRepo.full_name },
        installation: { id: installation.id },
      },
      receivedAt: new Date().toISOString(),
    },
  );
}

totalIssues += issuesOnly.length;
job.log(`Enqueued ${issuesOnly.length} issues for ${ghRepo.full_name}`);
```
The batch sync worker enqueues jobs for all open PRs and issues without any rate limiting or batching control. For installations with hundreds or thousands of open items, this could overwhelm Redis and the worker queues. Consider implementing batching with delays between batches, or using BullMQ's bulk add functionality with flow controls to manage the load more gracefully.
Jobs are added one at a time with `await` (not `addBulk`), and the loop is naturally throttled by the GitHub API pagination response time between repos. Each `queue.add()` is a single Redis command.
The downstream workers have concurrency limits (5 for PR/issue ingestion, 1 for batch sync), so jobs are processed at whatever rate the workers can sustain — BullMQ handles backpressure natively. Redis can handle thousands of queue entries without issue.
For Phase 2+ optimization on very large repos (5000+ PRs), we may consider chunked `addBulk` for throughput, but the current serial approach is correct for Phase 1.
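A sketch of the chunked `addBulk` idea, should it be needed later; the chunk size and helper name are illustrative:

```typescript
// Split a large job list into fixed-size chunks so each Redis round trip
// stays bounded. Pure helper; the addBulk usage below is a sketch.
function chunk<T>(items: T[], size: number): T[][] {
  const chunks: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    chunks.push(items.slice(i, i + size));
  }
  return chunks;
}

// Usage sketch (queue.addBulk is BullMQ's bulk-enqueue API; 100 is an
// arbitrary illustrative batch size):
// for (const batch of chunk(jobs, 100)) {
//   await prQueue.addBulk(batch);
// }
```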
- Remove unused @octokit/auth-app dependency (transitively included via octokit)
- Use paginateAll for PR files instead of single-page fetch (handles 100+ files)
- Add closeDb() to shutdown sequence to drain database connection pool
- Fix stopWorkers race condition: copy array before clearing
- Add per-repo try-catch in batch sync so one failing repo doesn't block others
- Replace void shutdown() with .catch() for visible error logging on shutdown
@Sauhard74 just make sure PRs focus on a single aspect and don't end up with scope creep
- Tests for getGitHubApp singleton and env validation
- Tests for fetchWithEtag caching, 304 handling, error propagation
- Tests for paginateAll pagination
- Add 85% coverage thresholds to api vitest config
- Exclude scaffold server.ts from coverage (rewritten in PR #5)
Force-pushed b344769 to 0c947cd
- Add 85% coverage thresholds (lines/functions/branches/statements) to all vitest configs
- Add tests for core/config.ts: normalizeRankingWeights, validateStalenessConfig
- Add tests for core/constants.ts: value assertions, weight sum invariant, staleness ordering
- Exclude scaffold server.ts from api coverage (rewritten in PR #5)
…ntinuing with empty data
* Add CI/CD pipeline with GitHub Actions: Set up continuous integration with four parallel jobs (Lint: ESLint + Prettier format check; Type check: TypeScript strict; Build: full Turborepo build; Test: with PostgreSQL pgvector/pg17 and Redis service containers). Uses pnpm 9, Node.js 22, and concurrency groups with cancel-in-progress to avoid redundant runs on rapid pushes.
* Remove redundant build job and add CI stages: Lint and typecheck run in parallel first. Test (which already runs pnpm build internally) only starts if both pass, avoiding wasted compute on code that fails basic checks.
* Replace push trigger with merge_group: Merge queue is now enabled on main via rulesets. CI runs on pull_request and merge_group events. The push trigger is no longer needed since merge queue tests the merge result before merging to main.
* Finalize CI/CD pipeline with coverage, diff checks, and Docker CD: CI adds 85% test coverage thresholds via vitest configs, diff coverage check on changed lines using diff-cover, merge_group trigger for merge queue, staged jobs with needs. CD builds and pushes Docker images (api + dashboard) to GHCR on merge to main, with multi-stage Dockerfiles using pnpm deploy for api and Next.js standalone output for dashboard.
* Upload coverage reports as artifacts: Pipes diff-cover output to a report file and uploads all per-package coverage directories, merged lcov, and diff-cover report as downloadable artifacts. Retained for 30 days.
* Fix pnpm version conflict in CI: pnpm/action-setup@v4 conflicts when both the version key in the workflow and the packageManager field in package.json are set. Removed the explicit version so it reads pnpm@9.15.4 from packageManager automatically.
* Bump GitHub Actions to v6: checkout v4 -> v6, setup-node v4 -> v6, upload-artifact v4 -> v6.
* Pre-configure Next.js tsconfig values to fix CI lint: next lint auto-adds allowJs and .next/types/**/*.ts to tsconfig.json, which then causes format:check to fail on the modified file. Adding these values upfront prevents the runtime modification.
* Add passWithNoTests to vitest configs: Coverage thresholds will be added per-package alongside their test files in subsequent PRs. passWithNoTests prevents vitest from failing when no test files exist yet.
* Add database schema, migrations, and connection pooling: Complete Drizzle ORM setup with pgvector integration, including the pgvector extension and vector(3072) embedding columns on pull_requests, issues, and vision_chunks; HNSW indexes on all embedding vectors for ANN search; connection pooling with postgres-js (max 10, idle timeout 20s); a programmatic migration runner (tsx src/migrate.ts); drizzle-kit config for schema generation and studio; a partial index on pull_requests for open PR staleness queries; updated db package exports with connection utilities; and a fixed core package.json exports default entry.
* Add tests for core and db packages with 85% coverage thresholds: core tests for normalizeRankingWeights, validateStalenessConfig, constants values, and weight invariants; db tests for connection singleton, env validation, and close behavior; 85% coverage thresholds in core and db vitest configs; migrate.ts excluded from db coverage (auto-executing script).
* Add 85% coverage thresholds and core package tests: 85% coverage thresholds (lines/functions/branches/statements) in all vitest configs; tests for core/config.ts (normalizeRankingWeights, validateStalenessConfig) and core/constants.ts (value assertions, weight sum invariant, staleness ordering); scaffold server.ts excluded from api coverage (rewritten in PR #5).
* Remove coverage thresholds until Phase 2 tests are in place
* Make database pool config env-configurable with sensible defaults
* Update lockfile after database-schema branch merge
* Fix formatting in merged database-schema files
* Remove .cursor from tracking and add to .gitignore
* Trigger CI
* Add rate-limited GitHub App client with ETag caching: Octokit-based GitHub App client with rate limit optimization, including GitHub App authentication via app ID + private key, installation-scoped access token management, ETag-based conditional request caching (returns cached data on 304), a paginate helper for fetching all pages of paginated endpoints, and a singleton pattern for the GitHub App instance.
* Add GitHub client tests with 85% coverage threshold: tests for the getGitHubApp singleton and env validation, fetchWithEtag caching, 304 handling, error propagation, and paginateAll pagination; 85% coverage thresholds in the api vitest config; scaffold server.ts excluded from coverage (rewritten in PR #5).
* Bound ETag cache to 1000 entries with LRU eviction
Summary

Webhook Receiver (`POST /api/webhooks/github`)
- `pull_request`, `pull_request_review`, `issue_comment` → PR ingestion; `issues` → issue ingestion; `installation.created` → batch sync

Ingestion Workers
- Resets `analyzedAt` on updates

Server Setup
- Health check endpoint (`GET /health`)

TRD Phase 1 Deliverables
- GitHub App scaffold with webhook receiver (Fastify, signature validation)
- PR/Issue ingestion pipeline with batch sync worker

Test plan
- `GET /health` returns `{ status: "ok" }`