Merged
Changes from all commits
28 commits
0c7f808
perf: add build-time precompression for hashed static assets
NathanDrake2406 Mar 22, 2026
91dc43c
perf: add startup metadata cache for zero-syscall static serving
NathanDrake2406 Mar 22, 2026
0b83607
perf: refactor tryServeStatic to async + cache + precompressed serving
NathanDrake2406 Mar 22, 2026
53f19d2
docs: fix stale comments in precompress (mention .zst)
NathanDrake2406 Mar 22, 2026
078746d
fix: deduplicate buffer reads for HTML alias entries, fix stale JSDoc
NathanDrake2406 Mar 22, 2026
bd0ea02
docs: fix stale comment in cli.ts (mention zstd)
NathanDrake2406 Mar 22, 2026
320466e
fix: address code review findings
NathanDrake2406 Mar 23, 2026
6b4b1e1
fix: address remaining review findings in prod-server
NathanDrake2406 Mar 24, 2026
a623e58
perf: move precompression to Vite plugin with edge auto-detection
NathanDrake2406 Mar 24, 2026
5b02a94
docs: fix inaccurate comments in precompress and static-file-cache
NathanDrake2406 Mar 24, 2026
c614d1b
fix: address code review findings
NathanDrake2406 Mar 29, 2026
f8fa3ab
perf: tune compression levels and restore auto mode
NathanDrake2406 Mar 29, 2026
626ed6f
fix: use filename hash for ETag on hashed assets
NathanDrake2406 Mar 29, 2026
d5d4774
fix: make precompress opt-in per maintainer feedback
NathanDrake2406 Mar 30, 2026
4d741ae
fix: address code review findings
NathanDrake2406 Mar 30, 2026
e75b62b
chore: use type instead of interface per lint rule
NathanDrake2406 Mar 30, 2026
9c84455
fix final precompression review comments
NathanDrake2406 Mar 31, 2026
43c7703
perf: optimize precompress build path
NathanDrake2406 Mar 31, 2026
3ade536
fix: address final review feedback
NathanDrake2406 Apr 1, 2026
0e33529
fix: address final bonk review nits
NathanDrake2406 Apr 1, 2026
f212d6a
address bonk review comments: counters post-write, Math.floor, Conten…
james-elicx Apr 1, 2026
6e74b39
address bonk: slow-path ETag+304 parity, visible precompress error log
james-elicx Apr 1, 2026
e47501a
address bonk: slow-path 304 Vary header, HAS_ZSTD comment
james-elicx Apr 1, 2026
3523c75
address bonk: clarify Vary/compress=false intent in slow-path 304 com…
james-elicx Apr 1, 2026
c309af2
address bonk: readFile-before-stat comment, document VINEXT_PRECOMPRE…
james-elicx Apr 1, 2026
52f4828
address bonk: filename-hash ETag on slow path, entry-level dedup in b…
james-elicx Apr 1, 2026
54f7e51
address bonk: fix etagFromFilenameHash arg, explicit re-run safety co…
james-elicx Apr 1, 2026
43d0d4d
address bonk: add slow-path 304 test coverage
james-elicx Apr 1, 2026
160 changes: 160 additions & 0 deletions packages/vinext/src/build/precompress.ts
@@ -0,0 +1,160 @@
/**
* Build-time precompression for hashed static assets.
*
* Generates .br (brotli q5), .gz (gzip l8), and .zst (zstd l8) files
* alongside compressible assets in dist/client/assets/. Served directly by
* the production server — no per-request compression needed for immutable
* build output.
*
* Only targets assets/ (hashed, immutable) — public directory files use
* on-the-fly compression since they may change between deploys.
*/
import fsp from "node:fs/promises";
import os from "node:os";
import path from "node:path";
import zlib from "node:zlib";
import { promisify } from "node:util";

const brotliCompress = promisify(zlib.brotliCompress);
const gzip = promisify(zlib.gzip);
const zstdCompress = typeof zlib.zstdCompress === "function" ? promisify(zlib.zstdCompress) : null;

/** File extensions worth compressing (text-based, not already compressed). */
const COMPRESSIBLE_EXTENSIONS = new Set([
".js",
".mjs",
".css",
".html",
".json",
".xml",
".svg",
".txt",
".map",
".wasm",
]);

/** Below this size, compression overhead exceeds savings. */
const MIN_SIZE = 1024;

/**
* Past ~8 parallel files, mixed-size asset sets spend more time queueing zlib
* work than making forward progress. Keep the batch size bounded even on
* higher-core machines.
*/
const CONCURRENCY = Math.min(os.availableParallelism(), 8);
Contributor

Minor: os.availableParallelism() was added in Node 19.4 / backported to 18.14. Since the repo requires engines.node >= 22, this is fine — just noting for awareness. If vinext ever widens engine support, this would need a fallback to os.cpus().length.
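If engine support were ever widened, the fallback described above might be sketched like this (illustrative only — the repo's Node >= 22 floor makes it unnecessary today):

```typescript
import os from "node:os";

// Hypothetical fallback for engines older than Node 18.14, where
// os.availableParallelism() does not exist. Sketch only, not the PR's code.
const parallelism: number =
  typeof os.availableParallelism === "function"
    ? os.availableParallelism()
    : os.cpus().length;

// Same 8-way cap as the PR: past that, zlib work mostly queues.
const CONCURRENCY = Math.min(parallelism, 8);
```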


export type PrecompressResult = {
filesCompressed: number;
totalOriginalBytes: number;
/** Sum of brotli-compressed sizes (used for compression ratio reporting). */
Contributor

Nit: JSDoc on totalBrotliBytes is good — it clearly says "brotli-compressed sizes" now. This addresses the maintainer's earlier concern about the generic field name. The log line in index.ts:3402 correctly says "smaller with brotli" to match.

totalBrotliBytes: number;
};

/**
* Walk a directory recursively, yielding relative paths for regular files.
*/
async function* walkFiles(dir: string, base: string = dir): AsyncGenerator<string> {
let entries;
try {
entries = await fsp.readdir(dir, { withFileTypes: true });
} catch {
return; // directory doesn't exist
}
for (const entry of entries) {
const fullPath = path.join(dir, entry.name);
if (entry.isDirectory()) {
yield* walkFiles(fullPath, base);
} else if (entry.isFile()) {
yield path.relative(base, fullPath);
}
}
}

/**
* Precompress all compressible hashed assets under `clientDir/assets/`.
*
* Writes `.br`, `.gz`, and `.zst` files alongside each original.
* Safe to re-run — overwrites existing compressed variants with identical
* output, and never compresses `.br`, `.gz`, or `.zst` files themselves.
*/
export async function precompressAssets(
clientDir: string,
onProgress?: (completed: number, total: number, file: string) => void,
): Promise<PrecompressResult> {
const assetsDir = path.join(clientDir, "assets");
const result: PrecompressResult = {
filesCompressed: 0,
totalOriginalBytes: 0,
totalBrotliBytes: 0,
};

// Collect compressible file paths, then read + compress in bounded chunks
// to keep peak memory at O(CONCURRENCY * max_file_size) instead of
// O(total_assets).
const filePaths: string[] = [];

for await (const relativePath of walkFiles(assetsDir)) {
Contributor

Robustness: walkFiles yields .br/.gz/.zst files too, and only the COMPRESSIBLE_EXTENSIONS filter keeps them from being re-compressed on re-runs.

The JSDoc on line 78 says "never compresses .br, .gz, or .zst files themselves" — which is true because .br etc. aren't in COMPRESSIBLE_EXTENSIONS. But this depends on an implicit invariant rather than an explicit guard. If someone later adds .br to COMPRESSIBLE_EXTENSIONS (unlikely but not impossible), the safe-to-rerun claim breaks.

The current approach works, just noting it's worth a brief inline comment on the filter to make the invariant explicit:

// .br/.gz/.zst are not in COMPRESSIBLE_EXTENSIONS, so precompressed
// variants are never re-compressed on re-runs.
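A sketch of the explicit guard suggested here, with an illustrative helper name (`isPrecompressedVariant` is not in the PR):

```typescript
import path from "node:path";

// Hypothetical explicit guard: skip precompressed variants regardless of
// what COMPRESSIBLE_EXTENSIONS contains, making re-runs safe by construction
// rather than by an implicit invariant of the extension set.
const PRECOMPRESSED_EXTENSIONS = new Set([".br", ".gz", ".zst"]);

function isPrecompressedVariant(relativePath: string): boolean {
  return PRECOMPRESSED_EXTENSIONS.has(path.extname(relativePath).toLowerCase());
}
```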

const ext = path.extname(relativePath).toLowerCase();

// .br/.gz/.zst are intentionally absent from COMPRESSIBLE_EXTENSIONS, so
// precompressed variants generated by a previous run are never re-compressed.
if (!COMPRESSIBLE_EXTENSIONS.has(ext)) continue;

filePaths.push(path.join(assetsDir, relativePath));
}

let processed = 0;
for (let i = 0; i < filePaths.length; i += CONCURRENCY) {
const chunk = filePaths.slice(i, i + CONCURRENCY);
await Promise.all(
chunk.map(async (fullPath) => {
const content = await fsp.readFile(fullPath);
// readFile already done before this check — stat()-first would save
// the read for tiny files but costs an extra syscall per file;
// sub-1KB hashed assets are rare enough that read-first is cheaper.
if (content.length < MIN_SIZE) return;
Contributor

Nit: files below MIN_SIZE are fully read before being skipped. When content.length < MIN_SIZE, the readFile has already completed, but the early return skips the result counters. The chunk-level processed += chunk.length at line 150 still tracks progress correctly; the readFile for tiny files is simply wasted I/O.

For a micro-optimization, you could check file size via stat before readFile — but this adds a syscall per file, so the current approach (read first, skip if small) is likely cheaper in aggregate. Just noting the tradeoff.

Contributor

Robustness: readFile is wasted for files below MIN_SIZE (1KB).

Every file gets fully read into memory before the size check. For a project with many tiny assets (e.g., icon SVGs under 1KB), this does unnecessary I/O.

A micro-optimization would be to stat() first and skip the read entirely for small files. However, the extra syscall per file may not be worth it — readFile for sub-1KB files is nearly instantaneous, and stat() + readFile is two syscalls vs one. The current approach is likely cheaper in aggregate.

No change needed — just documenting the tradeoff for future readers.

Contributor

Should fix: readFile is unconditional but the size check happens after.

Every file in the chunk gets fully read into memory before the content.length < MIN_SIZE check. For files below 1KB, the read is pure waste. More importantly, the early return here means the progress counter for this file is only tracked at the chunk level (line 150), not at the file level — so onProgress can't report which specific file was skipped.

This is fine for the current use case (most hashed assets are well above 1KB), but if you want to tighten it up later, you could stat() first and skip the read for tiny files. The tradeoff is one extra syscall per file vs one wasted read per tiny file — probably not worth changing now, but worth a brief comment:

Suggested change
if (content.length < MIN_SIZE) return;
if (content.length < MIN_SIZE) return; // Below threshold — skip (readFile already done; stat() would add a syscall)
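For reference, the stat-first alternative weighed in these comments could be sketched as follows (helper names are illustrative, not part of the PR):

```typescript
import fsp from "node:fs/promises";

const MIN_SIZE = 1024;

// Size gate shared by both strategies: below this, compression overhead
// exceeds the savings.
function worthCompressing(size: number): boolean {
  return size >= MIN_SIZE;
}

// Hypothetical stat-first variant: one extra syscall per file, but no
// wasted read for sub-threshold files. The PR's read-first approach wins
// when tiny assets are rare; this one wins when they are common.
async function readIfCompressible(fullPath: string): Promise<Buffer | null> {
  const { size } = await fsp.stat(fullPath);
  if (!worthCompressing(size)) return null; // skip without reading
  return fsp.readFile(fullPath);
}
```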


// Compress all variants concurrently within each file
const compressions: Promise<Buffer>[] = [
brotliCompress(content, {
params: { [zlib.constants.BROTLI_PARAM_QUALITY]: 5 },
}),
gzip(content, { level: 8 }),
];
if (zstdCompress) {
compressions.push(
zstdCompress(content, {
params: { [zlib.constants.ZSTD_c_compressionLevel]: 8 },
}),
);
}

const results = await Promise.all(compressions);
const [brContent, gzContent, zstdContent] = results;

const writes = [
fsp.writeFile(fullPath + ".br", brContent),
fsp.writeFile(fullPath + ".gz", gzContent),
];
if (zstdContent) {
writes.push(fsp.writeFile(fullPath + ".zst", zstdContent));
}
await Promise.all(writes);

// Increment counters only after all writes succeed, so partial
// failures (e.g. ENOSPC mid-write) don't inflate the reported totals.
result.filesCompressed++;
result.totalOriginalBytes += content.length;
result.totalBrotliBytes += brContent.length;
}),
);
// Report progress once per chunk to avoid non-deterministic ordering
// within Promise.all (smaller files complete before larger ones).
// Progress tracks all files (including skipped ones below MIN_SIZE),
// which differs from filesCompressed (only files actually compressed).
processed += chunk.length;
Contributor

Progress reports after-the-fact for the last chunk. The processed += chunk.length and onProgress call happen after Promise.all completes for each chunk. This means the progress bar jumps in steps of CONCURRENCY rather than updating per-file. This is actually fine for deterministic progress (avoids the ordering issue within a chunk that earlier reviews flagged), but it means the progress bar sits at 0% until the first chunk finishes, then jumps to e.g. 12%.

Not blocking — the current approach is the right tradeoff over per-file reporting within a Promise.all chunk. Just noting the visual behavior.

Contributor

Robustness: progress reporting is chunk-granular, not per-file. This is the right tradeoff — it avoids the non-deterministic ordering issue within Promise.all that earlier reviews flagged (where smaller files resolve before larger ones, causing the progress bar to jump backward). The visual effect is that the progress bar advances in steps of CONCURRENCY rather than per-file, but since CONCURRENCY is capped at 8, the steps are small enough.

The comment at lines 146-149 clearly explains both the decision and the semantic difference between processed and filesCompressed. Good.

Contributor

Nit: path.basename(chunk[chunk.length - 1]) in the progress callback reports the last file in the chunk, not the last file to finish processing. Since Promise.all resolves when all items complete, the progress message always shows the last filename alphabetically (or by insertion order), regardless of which file took longest. This is fine — it's cosmetic — but could briefly show a different filename than what's actually compressing if someone reads the progress bar closely.

Contributor

Should fix (minor): onProgress is called with processed counting all files including skipped ones, but the final log message at index.ts:3398 says "Precompressed N assets" using filesCompressed — these two counters can diverge.

If a project has 100 compressible files but 20 are below MIN_SIZE, the progress bar reaches 100/100, but the summary says "Precompressed 80 assets". This is slightly confusing but not a bug. The comment on lines 151-154 documents this well.

The actual minor bug: if all files in a chunk are below MIN_SIZE, the path.basename(chunk[chunk.length - 1]) in the progress callback reports the name of a file that was skipped, not one that was compressed. Again, cosmetic — but worth noting.

Contributor

Progress is chunk-granular rather than per-file, which avoids the non-deterministic ordering issue within Promise.all. Good tradeoff — with CONCURRENCY capped at 8, the visual steps are small enough.

One cosmetic note: path.basename(chunk[chunk.length - 1]) always shows the last filename in the array, not the last to finish compressing. This is fine since it converges to 100% either way.

onProgress?.(processed, filePaths.length, path.basename(chunk[chunk.length - 1]));
Contributor

Progress bar reports inflated count for skipped files. processed += chunk.length includes files that were skipped due to content.length < MIN_SIZE (the early return at line 110). This means:

  • onProgress reports more files as "compressed" than actually were
  • result.filesCompressed only counts files that passed the size threshold, but processed counts all files in the chunk

This is fine for the progress bar (it's tracking processing not compressing), but worth noting the semantic difference between processed and filesCompressed. A comment here would clarify the intent:

Suggested change
onProgress?.(processed, filePaths.length, path.basename(chunk[chunk.length - 1]));
// Progress tracks all files (including skipped ones below MIN_SIZE)
processed += chunk.length;
onProgress?.(processed, filePaths.length, path.basename(chunk[chunk.length - 1]));

Contributor

Nit: path.basename(chunk[chunk.length - 1]) always reports the last file in the array, not the last file to finish compressing. Within a Promise.all chunk, a 200KB JS file might still be compressing while a 2KB CSS file finished long ago. The progress message will show the JS filename regardless. This is fine — the progress bar is informational and converges to 100% either way.
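If true completion-order reporting were ever wanted, it could be sketched like this (names are illustrative, not the PR's code):

```typescript
// Hypothetical per-file completion tracking: the onDone callback fires as
// each file actually finishes, so a progress line could show the real last
// completer instead of the last element of the chunk array.
async function compressChunk(
  chunk: string[],
  compressOne: (file: string) => Promise<void>,
  onDone: (file: string) => void,
): Promise<void> {
  await Promise.all(
    chunk.map(async (file) => {
      await compressOne(file);
      onDone(file); // fires in true completion order
    }),
  );
}
```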

}

return result;
}
8 changes: 8 additions & 0 deletions packages/vinext/src/cli.ts
@@ -101,6 +101,7 @@ type ParsedArgs = {
turbopack?: boolean; // accepted for compat, always ignored
experimental?: boolean; // accepted for compat, always ignored
prerenderAll?: boolean;
precompress?: boolean;
};

function parseArgs(args: string[]): ParsedArgs {
@@ -117,6 +118,9 @@ function parseArgs(args: string[]): ParsedArgs {
result.experimental = true; // no-op
} else if (arg === "--prerender-all") {
Contributor

Setting process.env.VINEXT_PRECOMPRESS = "1" in the CLI parser to communicate with the Vite plugin is a pragmatic approach. However, it couples the CLI to the plugin via an env var side-channel. A cleaner alternative would be to pass it through the Vite config's define or through the plugin options. Not blocking — the env var approach works and is common in build tooling.

result.prerenderAll = true;
} else if (arg === "--precompress") {
result.precompress = true;
process.env.VINEXT_PRECOMPRESS = "1";
} else if (arg === "--port" || arg === "-p") {
result.port = parseInt(args[++i], 10);
} else if (arg.startsWith("--port=")) {
@@ -509,6 +513,9 @@ async function buildApp() {
prerenderResult = await runPrerender({ root: process.cwd() });
}

// Precompression runs as a Vite plugin writeBundle hook (vinext:precompress).
// Opt-in via --precompress CLI flag or `precompress: true` in plugin options.

process.stdout.write("\x1b[0m");
await printBuildReport({
root: process.cwd(),
@@ -678,6 +685,7 @@ function printHelp(cmd?: string) {
--verbose Show full Vite/Rollup build output (suppressed by default)
--prerender-all Pre-render discovered routes after building (future releases
will serve these files in vinext start)
--precompress Precompress static assets at build time (.br, .gz, .zst)
-h, --help Show this help
`);
return;
106 changes: 106 additions & 0 deletions packages/vinext/src/index.ts
@@ -40,6 +40,7 @@ import {
runInstrumentation,
} from "./server/instrumentation.js";
import { PHASE_PRODUCTION_BUILD, PHASE_DEVELOPMENT_SERVER } from "./shims/constants.js";
import { precompressAssets } from "./build/precompress.js";
import { validateDevRequest } from "./server/dev-origin-check.js";
import {
isExternalUrl,
@@ -817,6 +818,22 @@ export type VinextOptions = {
* @default true
*/
react?: VitePluginReactOptions | boolean;
/**
* Enable build-time precompression of static assets (.br, .gz, .zst).
*
* When enabled, hashed assets in the client build are precompressed at
* build time so the production server can serve them without on-the-fly
* compression overhead.
*
* Disabled by default. Not useful when deploying to edge platforms
* (Cloudflare Workers, Nitro) that handle compression at the CDN layer.
*
* Can also be enabled via the `--precompress` CLI flag or by setting the
* `VINEXT_PRECOMPRESS=1` environment variable (useful for CI pipelines
* that need to enable precompression without modifying vite.config.ts).
* @default false
*/
precompress?: boolean;
/**
* Experimental vinext-only feature flags.
*/
@@ -3325,6 +3342,95 @@ export default function vinext(options: VinextOptions = {}): PluginOption[] {
},
},
},
// Build-time precompression: generate .br, .gz, .zst for hashed assets.
// Runs after the client bundle is written so compressed variants are
// available for the production server's static file cache.
// Opt-in via `precompress: true` in plugin options or `--precompress`
// CLI flag. Not useful for edge platforms (Cloudflare Workers, Nitro)
// that handle compression at the CDN layer.
(() => {
let pendingPrecompress: Promise<void> | null = null;
let pendingPrecompressError: unknown = null;

return {
name: "vinext:precompress",
apply: "build" as const,
enforce: "post" as const,
writeBundle: {
sequential: true,
order: "post" as const,
handler(outputOptions: { dir?: string }) {
Contributor

Nit: writeBundle.handler is not marked async — it starts precompression as a fire-and-forget promise. This is intentional (the comment at line 3378-3381 explains the overlapping strategy), but it means errors from the async IIFE are caught by .catch() at line 3407 rather than propagating through Vite's plugin pipeline. The closeBundle handler picks up the error and rethrows it, which is correct.

One subtle edge: if the client writeBundle hook fires but the build is aborted before the SSR closeBundle fires (e.g., process.exit() in another plugin), the .catch() handler logs the error but the process may exit before the log is flushed. This is a rare edge case and the current approach is pragmatic — just documenting the limitation.

Contributor

Non-blocking: fire-and-forget pattern has a subtle edge case on process.exit().

The writeBundle handler starts precompression as an untracked promise (the pendingPrecompress reference is the only way to await it). If the build process is killed between writeBundle completing and closeBundle firing (e.g., process.exit() in another plugin, or SIGTERM), the precompression work is abandoned mid-write. This could leave partial .br/.gz/.zst files on disk.

Since precompressAssets overwrites existing files on re-run, this is self-healing on the next build. Not a blocker — just documenting the edge case.
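The beforeExit guard floated in the review could be sketched like this (the module-level flag and function name are illustrative, not the PR's code):

```typescript
// Hypothetical beforeExit guard: if the fire-and-forget precompression
// promise never settles before the process winds down, say so loudly
// instead of exiting silently with partial .br/.gz/.zst files on disk.
let precompressSettled = false;

function trackPrecompress(task: Promise<void>): Promise<void> {
  const tracked = task.finally(() => {
    precompressSettled = true;
  });
  process.once("beforeExit", () => {
    if (!precompressSettled) {
      console.error("[vinext] precompression did not complete before exit");
    }
  });
  return tracked;
}
```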

if (this.environment?.name !== "client") return;

if (!options.precompress && process.env.VINEXT_PRECOMPRESS !== "1") return;
Contributor

Opt-in gate is correct. This checks both options.precompress (vite config) and process.env.VINEXT_PRECOMPRESS (CLI flag), and returns early if neither is set. This matches the maintainer's request for opt-in semantics.

Contributor

Non-blocking: the opt-in gate correctly checks both options.precompress and process.env.VINEXT_PRECOMPRESS.

This addresses the maintainer's request. One thing to note: the env var approach means VINEXT_PRECOMPRESS=1 set externally (not via CLI) also enables precompression. This is probably fine (gives CI pipelines a way to enable it without touching vite config), but worth documenting in the JSDoc on the precompress option:

Suggested change
if (!options.precompress && process.env.VINEXT_PRECOMPRESS !== "1") return;
if (!options.precompress && process.env.VINEXT_PRECOMPRESS !== "1") return;

(No code change — just noting the env var should be mentioned in the VinextOptions.precompress JSDoc above.)

Contributor

The env var VINEXT_PRECOMPRESS is documented in the JSDoc on the precompress option (line 832), which is good. One subtle behavior worth noting: this means VINEXT_PRECOMPRESS=1 vinext build enables precompression even without --precompress on the CLI or precompress: true in config. This is a useful escape hatch for CI pipelines — just make sure it's mentioned in user-facing docs when those are written.


const outDir = outputOptions.dir;
if (!outDir) return;

// Only precompress hashed assets — public directory files use
// on-the-fly compression since they may change between deploys.
const assetsDir = path.join(outDir, "assets");
if (!fs.existsSync(assetsDir)) return;
Contributor

This fs.existsSync call is synchronous and runs during the build. The writeBundle hook is async-capable, so this could use fsp.access() or fsp.stat() instead. However, since this runs once during build (not per-request), the sync call is acceptable. Non-blocking.
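The async alternative mentioned here might look like this sketch (helper name is illustrative):

```typescript
import fsp from "node:fs/promises";

// Async equivalent of the existsSync check discussed above; only worth it
// on hot paths, since this hook runs once per build.
async function dirExists(dir: string): Promise<boolean> {
  try {
    const stats = await fsp.stat(dir);
    return stats.isDirectory();
  } catch {
    return false; // missing or inaccessible
  }
}
```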


const isTTY = process.stderr.isTTY;
let lastLineLen = 0;

// Start precompression as soon as the client bundle is written, but
// defer awaiting it until the SSR environment finishes. This overlaps
// the extra asset work with the final build phase instead of putting
// the full precompression cost on the critical path of step 4/5.
pendingPrecompressError = null;
pendingPrecompress = (async () => {
const result = await precompressAssets(outDir, (completed, total, file) => {
if (!isTTY) return;
const pct = total > 0 ? Math.floor((completed / total) * 100) : 0;
const bar = `[${"█".repeat(Math.floor(pct / 5))}${" ".repeat(20 - Math.floor(pct / 5))}]`;
const maxFile = 30;
const fileLabel = file.length > maxFile ? "…" + file.slice(-(maxFile - 1)) : file;
const line = `Compressing assets... ${bar} ${String(completed).padStart(String(total).length)}/${total} ${fileLabel}`;
const padded = line.padEnd(lastLineLen);
lastLineLen = line.length;
process.stderr.write(`\r${padded}`);
});
if (isTTY) {
process.stderr.write(`\r${" ".repeat(lastLineLen)}\r`);
}
if (result.filesCompressed > 0) {
const ratio = (
(1 - result.totalBrotliBytes / result.totalOriginalBytes) *
Contributor

Nit (resolved): the log line says "smaller with brotli" and totalBrotliBytes tracks only brotli-compressed sizes, so the two are consistent. The maintainer flagged the generic field name earlier — PrecompressResult now uses totalBrotliBytes with an accurate JSDoc, and this log line correctly references brotli specifically. No change needed.

100
).toFixed(1);
console.log(
` Precompressed ${result.filesCompressed} assets (${ratio}% smaller with brotli)`,
);
}
})().catch((error) => {
Contributor

Nit: error handling is now complete. The .catch() block at line 3405 logs immediately via console.error, and closeBundle at line 3416 rethrows if the SSR environment fires. This addresses the earlier concern about errors being invisible if closeBundle doesn't run. Good.

pendingPrecompressError = error;
// Log immediately so the error isn't invisible if closeBundle
// never fires (e.g. a crash in a later SSR build plugin).
console.error("[vinext] Precompression failed:", error);
});
Comment on lines +3378 to +3412
Contributor

Potential issue: pendingPrecompressError swallowed if SSR closeBundle never fires.

The pattern here starts precompression in writeBundle (client env) and defers error handling to closeBundle (SSR env). If the SSR build step somehow doesn't fire closeBundle (e.g., a crash in an earlier SSR plugin, or an environment ordering change), the precompression error is silently swallowed via the .catch() handler and never rethrown.

In practice this is unlikely since the SSR environment always runs after client in the build pipeline, but consider adding a process.on('beforeExit') guard or at minimum a console.error in the .catch() block so errors aren't completely invisible:

Suggested change
// Start precompression as soon as the client bundle is written, but
// defer awaiting it until the SSR environment finishes. This overlaps
// the extra asset work with the final build phase instead of putting
// the full precompression cost on the critical path of step 4/5.
pendingPrecompressError = null;
pendingPrecompress = (async () => {
const result = await precompressAssets(outDir, (completed, total, file) => {
if (!isTTY) return;
const pct = total > 0 ? Math.floor((completed / total) * 100) : 0;
const bar = `[${"█".repeat(Math.floor(pct / 5))}${" ".repeat(20 - Math.floor(pct / 5))}]`;
const maxFile = 30;
const fileLabel = file.length > maxFile ? "…" + file.slice(-(maxFile - 1)) : file;
const line = `Compressing assets... ${bar} ${String(completed).padStart(String(total).length)}/${total} ${fileLabel}`;
const padded = line.padEnd(lastLineLen);
lastLineLen = line.length;
process.stderr.write(`\r${padded}`);
});
if (isTTY) {
process.stderr.write(`\r${" ".repeat(lastLineLen)}\r`);
}
if (result.filesCompressed > 0) {
const ratio = (
(1 - result.totalBrotliBytes / result.totalOriginalBytes) *
100
).toFixed(1);
console.log(
` Precompressed ${result.filesCompressed} assets (${ratio}% smaller with brotli)`,
);
}
})().catch((error) => {
pendingPrecompressError = error;
});
})().catch((error) => {
pendingPrecompressError = error;
console.error('[vinext] Precompression failed:', error);
});

This way operators see the error even if closeBundle doesn't run.

},
},
closeBundle: {
sequential: true,
order: "post" as const,
async handler() {
if (this.environment?.name !== "ssr") return;
if (!pendingPrecompress) return;

const task = pendingPrecompress;
pendingPrecompress = null;
await task;
if (pendingPrecompressError) {
const error = pendingPrecompressError;
pendingPrecompressError = null;
throw error;
}
},
},
};
})(),
// Cloudflare Workers production build integration:
// After all environments are built, compute lazy chunks from the client
// build manifest and inject globals into the worker entry.