Conversation
Signed-off-by: Nicholas Gates <[email protected]>
Merging this PR will degrade performance by 30.46%
| | Mode | Benchmark | BASE | HEAD | Efficiency |
|---|---|---|---|---|---|
| ❌ | WallTime | u16_FoR[10M] | 8 µs | 9.6 µs | -16.37% |
| ❌ | WallTime | 1M_50pct[500000] | 57.1 µs | 82.1 µs | -30.46% |
| ❌ | WallTime | 1M_10pct[100000] | 21 µs | 29 µs | -27.44% |
| ⚡ | WallTime | u64_FoR[10M] | 388.1 µs | 351.3 µs | +10.47% |
| ⚡ | Simulation | chunked_bool_into_canonical[(1000, 10)] | 64.1 µs | 43.2 µs | +48.57% |
Comparing ngates/normalize (9b78ebe) with develop (1dd2d66)
Footnotes
1. 1323 benchmarks were skipped, so the baseline results were used instead. If they have been deleted from the codebase, they can be archived to remove them from the performance reports.
```rust
///
/// This operation performs a recursive traversal of the array. Any non-allowed encoding is
/// normalized per the configured operation.
pub fn normalize(&self, options: &mut NormalizeOptions) -> VortexResult<ArrayRef> {
```
When would we use this?
I am concerned that this is more of a bandaid than a fix. It will make it very easy to compress an array (including an accidental filter) and then undo it just before we serialise. I think we actually want to throw an error instead and stop the compressor from adding the unwanted array in the first place.
based on #6213 Signed-off-by: Joe Isaacs <[email protected]>
This will be used prior to serialization to remove any unwanted arrays from the tree.
Should it actually just be fused with serialization?