
squashfs: fix inode block references when metadata is compressed (#360)#377

Merged
deitch merged 2 commits into diskfs:master from luthermonson:squashfs-360-fix
Apr 16, 2026

Conversation

@luthermonson
Contributor

updateInodeLocations pre-computed each inode's metadata-block byte offset as logicalBlock * (metadataBlockSize + 2) = logicalBlock * 8194, which assumes every compressed metadata block is exactly 8194 bytes on disk. That's only true when compression is effectively off. With real gzip compression metadata blocks are typically much smaller, so directory entries and the superblock root-inode ref pointed past the real block positions. Readers then failed with "could not read directory entries" or "failed to read inode X:Y" for every file whose inode was not in block 0.

The bug reproduces without any "large" file — it just needs enough inodes to span a metadata block boundary AND effective compression. A single large file triggers it too because its extendedFile inode carries one block-size entry per data block (a 1 GB file at 128 KB blocksize needs ~32 KB of inode, spanning 4 metadata blocks by itself).

Fix: store a logical block index in updateInodeLocations, then have writeInodes track the actual on-disk byte offset of each metadata block as it writes them. After writeInodes returns, translateInodeLocations rewrites every inodeLocation.block and every directory entry's startBlock from logical index to real byte offset, before the directory table (which embeds those references) is written.

There's a matching TODO at finalize.go:274 noting "blockPosition calculations appear to be off" — this fixes that.

The reproducer test TestFinalizeInodesAcrossMetadataBlocks was added as a standalone failing test in a preceding PR; it passes after this fix.

…kfs#360)

- Use fmt.Fprintf instead of fh.Write([]byte(fmt.Sprintf(...)))
- Combine consecutive same-type parameters in translateInodeLocations
@luthermonson
Contributor Author

Fixes #360

@deitch merged commit fb782ca into diskfs:master Apr 16, 2026
20 checks passed
