Optimized LS-EEND API #526

Open

SGD2718 wants to merge 11 commits into main from optimized-ls-eend

Conversation


@SGD2718 SGD2718 commented Apr 15, 2026

  • Removed unnecessary copies of the prediction matrix
  • Removed extra Mel Spectrogram allocation
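As a hedged illustration of the kind of copy elimination described above (the names `PredictionBuffer`, `numSpeakers`, and `append(chunk:)` are hypothetical, not the actual FluidAudio API): instead of materializing a fresh prediction matrix per chunk, appends can go into one pre-reserved flat buffer so repeated growth does not reallocate and copy.

```swift
// Hypothetical sketch of copy elimination: one flat, frame-major buffer
// that is reserved once and appended to in place.
struct PredictionBuffer {
    private(set) var values: [Float] = []
    let numSpeakers: Int

    init(numSpeakers: Int, expectedFrames: Int) {
        self.numSpeakers = numSpeakers
        // Reserve once so repeated appends do not reallocate and copy.
        values.reserveCapacity(expectedFrames * numSpeakers)
    }

    // Append one chunk of frame-major predictions without an intermediate copy.
    mutating func append(chunk: [Float]) {
        precondition(chunk.count % numSpeakers == 0, "ragged chunk")
        values.append(contentsOf: chunk)
    }

    var frameCount: Int { values.count / numSpeakers }
}
```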


Copilot AI review requested due to automatic review settings April 15, 2026 19:25
@SGD2718 SGD2718 self-assigned this Apr 15, 2026
@SGD2718 SGD2718 requested a review from Alex-Wengg April 15, 2026 19:29
@SGD2718 SGD2718 added the speaker-diarization Issues related to speaker diarization label Apr 15, 2026

This comment was marked as resolved.


github-actions Bot commented Apr 15, 2026

PocketTTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | |
| Model download | |
| Model load | |
| Synthesis pipeline | |
| Output WAV | ✅ (168.8 KB) |

Runtime: 0m42s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality and performance may differ from Apple Silicon.

devin-ai-integration[bot]

This comment was marked as resolved.


github-actions Bot commented Apr 15, 2026

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
| --- | --- | --- |
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 5.84x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 77.6s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
| --- | --- | --- |
| Avg Chunk Time | 0.078s | Average chunk processing time |
| Max Chunk Time | 0.155s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 1m26s • 04/22/2026, 02:13 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O


github-actions Bot commented Apr 15, 2026

Kokoro TTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | |
| Model download | |
| Model load | |
| Synthesis pipeline | |
| Output WAV | ✅ (634.8 KB) |

Runtime: 0m27s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.


github-actions Bot commented Apr 15, 2026

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | |
| Model download | |
| Model load | |
| Transcription pipeline | |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Performance Metrics

| Metric | CI Value | Expected on Apple Silicon |
| --- | --- | --- |
| Median RTFx | 0.04x | ~2.5x |
| Overall RTFx | 0.04x | ~2.5x |

Runtime: 4m24s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.


github-actions Bot commented Apr 15, 2026

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
| --- | --- | --- | --- |
| DER | 30.3% | <35% | |
| Miss Rate | 28.2% | - | - |
| False Alarm | 0.9% | - | - |
| Speaker Error | 1.2% | - | - |
| RTFx | 7.6x | >1.0x | |
| Speakers | 4/4 | - | - |
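As a quick sanity check on the table above: DER decomposes into miss, false alarm, and speaker confusion, each expressed as a fraction of scored speech, and the three reported components sum to the reported DER.

```swift
// DER = miss + false alarm + speaker confusion (values from the table above).
let missRate = 28.2
let falseAlarm = 0.9
let speakerError = 1.2
let der = missRate + falseAlarm + speakerError  // ≈ 30.3, matching the reported DER
```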

Sortformer High-Latency • ES2004a • Runtime: 3m 18s • 2026-04-22T18:14:52.353Z


github-actions Bot commented Apr 15, 2026

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 15.1% | <30% | | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | | Jaccard Error Rate |
| RTFx | 17.58x | >1.0x | | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 10.202 | 17.1 | Fetching diarization models |
| Model Compile | 4.372 | 7.3 | CoreML compilation |
| Audio Load | 0.072 | 0.1 | Loading audio file |
| Segmentation | 17.891 | 30.0 | Detecting speech regions |
| Embedding | 29.819 | 50.0 | Extracting speaker voices |
| Clustering | 11.928 | 20.0 | Grouping same speakers |
| Total | 59.687 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
| --- | --- | --- |
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at roughly 150x real-time (RTFx ≈ 150)
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 59.6s diarization time • Test runtime: 2m 23s • 04/22/2026, 02:16 PM EST


github-actions Bot commented Apr 15, 2026

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
| --- | --- | --- | --- | --- | --- | --- |
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 282.0x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 311.1x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%


github-actions Bot commented Apr 15, 2026

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 10.4% | <20% | | Diarization Error Rate (lower is better) |
| RTFx | 10.51x | >1.0x | | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 13.314 | 13.3 | Fetching diarization models |
| Model Compile | 5.706 | 5.7 | CoreML compilation |
| Audio Load | 0.083 | 0.1 | Loading audio file |
| Segmentation | 26.139 | 26.2 | VAD + speech detection |
| Embedding | 99.574 | 99.7 | Speaker embedding extraction |
| Clustering (VBx) | 0.107 | 0.1 | Hungarian algorithm + VBx clustering |
| Total | 99.855 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
| --- | --- | --- | --- |
| FluidAudio (Offline) | 10.4% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping
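To make "optimal speaker-to-cluster assignment" concrete, here is an illustrative sketch (not the FluidAudio implementation): for the small speaker counts in AMI meetings, the optimal assignment can be found by brute force over permutations of a cost matrix; the Hungarian algorithm solves the same problem in O(n³) when n is larger.

```swift
// Illustrative: find the speaker-to-cluster permutation minimizing total
// cost. cost[i][j] = mismatch between reference speaker i and cluster j.
func optimalAssignment(cost: [[Double]]) -> (perm: [Int], total: Double) {
    let n = cost.count
    var best: (perm: [Int], total: Double) = ([], .infinity)
    func permute(_ remaining: [Int], _ chosen: [Int]) {
        if remaining.isEmpty {
            // Total cost of assigning speaker k to cluster chosen[k].
            let total = chosen.enumerated().reduce(0.0) { $0 + cost[$1.offset][$1.element] }
            if total < best.total { best = (chosen, total) }
            return
        }
        for (i, c) in remaining.enumerated() {
            var rest = remaining
            rest.remove(at: i)
            permute(rest, chosen + [c])
        }
    }
    permute(Array(0..<n), [])
    return best
}
```

For example, with `cost = [[0.1, 0.9], [0.8, 0.2]]` the identity mapping wins with total cost 0.3.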

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 125.8s processing • Test runtime: 2m 10s • 04/22/2026, 02:07 PM EST


github-actions Bot commented Apr 15, 2026

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.57% | 0.00% | 4.08x | |
| test-other | 1.75% | 0.00% | 2.22x | |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.80% | 0.00% | 4.38x | |
| test-other | 1.00% | 0.00% | 2.99x | |

Streaming (v3)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.56x | Streaming real-time factor |
| Avg Chunk Time | 1.644s | Average time to process each chunk |
| Max Chunk Time | 2.201s | Maximum chunk processing time |
| First Token | 1.940s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.43x | Streaming real-time factor |
| Avg Chunk Time | 1.971s | Average time to process each chunk |
| Max Chunk Time | 2.827s | Maximum chunk processing time |
| First Token | 1.943s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 6m53s • 04/22/2026, 02:10 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)
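The formula above can be sketched in one line (function name is illustrative):

```swift
// RTFx as defined above: total audio duration divided by total processing time.
func rtfx(audioSeconds: Double, processingSeconds: Double) -> Double {
    audioSeconds / processingSeconds
}

// The example from the text: 10 s of audio processed in 5 s.
let factor = rtfx(audioSeconds: 10, processingSeconds: 5)  // 2.0
```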

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard


SGD2718 commented Apr 16, 2026

Do not push this. I have an even more optimized version of LS-EEND coming up but without preview frames.

devin-ai-integration[bot]

This comment was marked as resolved.


@devin-ai-integration devin-ai-integration Bot left a comment


Devin Review found 4 new potential issues.

View 12 additional findings in Devin Review.


Comment thread: Sources/FluidAudio/Diarizer/DiarizerTimeline.swift (Outdated)
Comment on lines +670 to +674

```swift
public var finalizedPredictions: [Float] = []

/// Tentative predictions.
/// Flat array of shape [numTentative, numSpeakers].
public var tentativePredictions: [Float] {
    queue.sync { _tentativePredictions }
}
public var tentativePredictions: [Float] = []
```
Contributor


🔴 finalizedPredictions and tentativePredictions exposed as unsynchronized public vars, creating data races

These two properties were previously behind synchronized computed properties (queue.sync { _finalizedPredictions }) but are now public var with no locking. Internal methods (_addChunkUnlocked, _finalizeUnlocked, _resetUnlocked, rebuild) still mutate them under self.lock, so any external concurrent reader (e.g., LSEENDBenchmark.swift:589 reading timeline.finalizedPredictions, or LSEENDCommand.swift:212) races against the lock-protected writes. This violates the repository's AGENTS.md rule: "implement proper thread safety with actors/MainActor" and removes thread safety that the old code provided.

Prompt for agents
The properties `finalizedPredictions` and `tentativePredictions` on `DiarizerTimeline` (lines 670-674) are now `public var` but are mutated internally under `self.lock` (e.g. in `_addChunkUnlocked`, `_finalizeUnlocked`, `rebuild`). External callers read them without any synchronization, creating a data race. The old code used computed properties that acquired the dispatch queue before returning the backing store. To fix: either make these properties private and expose them through lock-protected computed properties (matching the old pattern), or ensure all reads and writes go through the lock. Affected callers include `LSEENDBenchmark.swift:589` and `LSEENDCommand.swift:212`.
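A minimal sketch of the pattern the review asks to restore: keep the backing storage private and expose it through an accessor that takes the lock, so readers synchronize with lock-protected writers. The class and member names here are illustrative, not the actual `DiarizerTimeline` members.

```swift
import Foundation

// Illustrative: lock-protected computed property over private storage.
final class Timeline {
    private let lock = NSLock()
    private var _finalizedPredictions: [Float] = []

    // Readers acquire the same lock the writers hold, avoiding the data race.
    public var finalizedPredictions: [Float] {
        lock.lock()
        defer { lock.unlock() }
        return _finalizedPredictions
    }

    public func append(predictions: [Float]) {
        lock.lock()
        defer { lock.unlock() }
        _finalizedPredictions.append(contentsOf: predictions)
    }
}
```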


Comment on lines +225 to +234

```swift
public var name: String?

/// Diarizer output slot
public var index: Int

/// Finalized speech segments
public var finalizedSegments: [DiarizerSegment] = []

/// Preview speech segments
public var tentativeSegments: [DiarizerSegment] = []
```
Contributor


🟡 DiarizerSpeaker mutable properties exposed without synchronization, breaking prior thread safety

The refactoring changed DiarizerSpeaker.name, index, finalizedSegments, and tentativeSegments from private properties with queue-synchronized public accessors to plain public var. The class has a private NSLock and locking methods like rename(to:), but external code can bypass the lock by accessing properties directly. For example, LSEENDDiarizer.swift:352 writes enrolledSpeaker.name = name directly instead of using rename(to:). This violates the AGENTS.md rule requiring proper thread safety.

Prompt for agents
DiarizerSpeaker now exposes `name`, `index`, `finalizedSegments`, and `tentativeSegments` as `public var` (lines 225-234) while also having a private NSLock and lock-protected methods (`rename`, `reassign`, `append`, `clearTentative`, etc.). This means any external caller can read/write these properties without acquiring the lock, creating data races when accessed concurrently. The old code kept these properties private with queue-synchronized accessors. Fix options: (1) Make these properties private again and use computed properties that acquire the lock. (2) At minimum, change direct property writes like `enrolledSpeaker.name = name` (LSEENDDiarizer.swift:352) to use `enrolledSpeaker.rename(to: name)`.
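A sketch of option (2) from the review: keep the stored property private and force writers through a lock-protected method such as `rename(to:)`. The real `DiarizerSpeaker` has more members; this is illustrative only.

```swift
import Foundation

// Illustrative: all reads and writes of `name` go through the same NSLock,
// so a direct unsynchronized write like `speaker.name = ...` is impossible.
final class Speaker {
    private let lock = NSLock()
    private var _name: String?

    public var name: String? {
        lock.lock()
        defer { lock.unlock() }
        return _name
    }

    public func rename(to newName: String) {
        lock.lock()
        defer { lock.unlock() }
        _name = newName
    }
}
```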


Comment thread: Sources/FluidAudio/Diarizer/DiarizerTimeline.swift
Co-authored-by: devin-ai-integration[bot] <158243242+devin-ai-integration[bot]@users.noreply.github.com>
Contributor

@devin-ai-integration devin-ai-integration Bot left a comment


Devin Review found 1 new potential issue.

View 15 additional findings in Devin Review.


```swift
}
newPreds.append(contentsOf: try model.predict(from: input))
processed += 1
progressCallback?(processed, totalChunks, 1)
```
Contributor


🟡 Progress callback receives chunk counts instead of sample counts, violating documented API contract

The Diarizer protocol documents progressCallback as (processedSamples, totalSamples, chunksProcessed), but the new LSEENDDiarizer.flush implementation at Sources/FluidAudio/Diarizer/LS-EEND/LSEENDDiarizer.swift:249 passes (processed, totalChunks, 1) where processed is the number of chunks processed (not samples), totalChunks is the initial ready-chunk count (not total samples), and the third parameter is hardcoded to 1 (not the accumulated chunk count). Any caller that interprets the callback per the documented contract (e.g., computing a percent-complete from processedSamples / totalSamples) will get nonsensical values.

Callback site in flush()

Line 249: progressCallback?(processed, totalChunks, 1)processed increments per chunk, totalChunks is a snapshot from before the drain loop, and 1 is always literal 1.

The protocol comment at Sources/FluidAudio/Diarizer/DiarizerProtocol.swift:60 says: progressCallback: Optional callback receiving (processedSamples, totalSamples, chunksProcessed).

Prompt for agents
The flush() method at LSEENDDiarizer.swift:225-258 passes chunk-level counts to the progressCallback, but the Diarizer protocol documents this callback as (processedSamples, totalSamples, chunksProcessed). Either update the callback invocation to pass actual sample counts (tracking cumulative samples processed and total audio samples enqueued), or update the protocol documentation to reflect the new semantics. The SortformerDiarizer still uses the sample-count convention, so consistency across implementations matters.
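A sketch of the documented contract, under the assumption that the drain loop can track sample counts per chunk (the function and parameter names here are illustrative, not the actual `flush()` signature): report `(processedSamples, totalSamples, chunksProcessed)` rather than chunk indices and a literal `1`.

```swift
// Illustrative: progress reporting that matches the documented
// (processedSamples, totalSamples, chunksProcessed) contract.
func drainChunks(
    chunks: [[Float]],
    progressCallback: ((Int, Int, Int) -> Void)?
) {
    let totalSamples = chunks.reduce(0) { $0 + $1.count }
    var processedSamples = 0
    var chunksProcessed = 0
    for chunk in chunks {
        // ... run model inference on the chunk here ...
        processedSamples += chunk.count
        chunksProcessed += 1
        progressCallback?(processedSamples, totalSamples, chunksProcessed)
    }
}
```

With this shape, a caller computing `processedSamples / totalSamples` gets a meaningful percent-complete, consistent with the SortformerDiarizer convention the review mentions.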

