
feat(tts/magpie): add NVIDIA Magpie TTS Multilingual 357M Swift port #541

Open

Alex-Wengg wants to merge 1 commit into main from tts/magpie-swift-port

Conversation

@Alex-Wengg (Member) commented Apr 25, 2026

Summary

Ports the NVIDIA Magpie TTS Multilingual 357M autoregressive TTS model from Python (mobius #24) to Swift. Closes #49.

  • Languages (8/9): English, Spanish, German, French, Italian, Vietnamese, Mandarin, Hindi. Japanese deferred pending OpenJTalk XCFramework integration.
  • 5 built-in speakers (.john, .sofia, .aria, .jason, .leo) with 110-token (768d fp16) context embeddings.
  • Inline IPA override ("Hello | ˈ n ɛ m o ʊ | world") routes |…| segments directly to the tokenizer for pronunciation control — first-class feature as requested.
  • Output: 22.05 kHz mono WAV via 8-codebook NanoCodec decoder, max 11.89 s per synthesis (256 nanocodec frames).

HF assets — live

FluidInference/magpie-tts-multilingual-357m-coreml is uploaded and ready (1.4 GB). It ships:

  • text_encoder.{mlmodelc,mlpackage} — both compiled and portable
  • decoder_step.{mlmodelc,mlpackage} — stateful 12-layer KV cache
  • decoder_prefill.{mlmodelc,mlpackage} — fast prefill path (110-token batched)
  • nanocodec_decoder.{mlmodelc,mlpackage} — 8-codebook → 22 kHz PCM
  • constants/constants.json, speaker_info.json, 8 audio-codebook embeddings, 5 speaker contexts, local-transformer weights
  • tokenizer/ — per-language phoneme/jieba/pypinyin lookups (lazy-downloaded)
  • manifest.json — machine-readable index (sha256, file sizes, npy shapes, model IO specs) consumed by MagpieResourceDownloader
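
As an illustration, the manifest can be decoded with a small Codable model. The field names below are assumptions for the sketch, not the actual schema:

```swift
import Foundation

// Hypothetical manifest shape (assumed field names), of the sort
// MagpieResourceDownloader could decode to verify downloaded assets.
struct ManifestEntry: Codable {
    let path: String      // e.g. "constants/constants.json"
    let sha256: String    // integrity check after download
    let sizeBytes: Int
    let npyShape: [Int]?  // present only for .npy weight files
}

struct Manifest: Codable {
    let files: [ManifestEntry]
}
```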

Architecture

| Stage | Implementation |
| --- | --- |
| Text encoder | text_encoder.mlmodelc (CoreML, cpuAndNeuralEngine) |
| Prefill | Optional decoder_prefill.mlmodelc fast path, else 110 × decoder_step |
| AR loop | decoder_step.mlmodelc with stateful 12-layer KV cache [2, 1, 512, 12, 64] |
| Local transformer | Pure Swift, 1-layer (256d), Accelerate (cblas_sgemm) + BNNS (GELU) |
| Sampling | top-k (80) + temperature (0.6), audio-EOS mask during minFrames, forbidden-token mask [2016, 2018-2023] |
| Vocoder | nanocodec_decoder.mlmodelc — 8×N codes → float PCM → peak-normalize |

Assets fetched lazily via DownloadUtils; only the languages requested in downloadAndCreate(languages:) are materialized.
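
For intuition, the decode loop has roughly this shape (a simplified sketch with hypothetical names; the real MagpieSynthesizer also handles CFG, token masking, and MLState plumbing):

```swift
// Minimal sketch of the AR loop. `step` stands in for one decoder_step
// call plus local-transformer sampling.
struct Frame {
    let codebooks: [Int32]  // 8 NanoCodec codes for this frame
    let isEOS: Bool         // audio-EOS sampled (masked during minFrames)
}

func decodeFrames(
    maxFrames: Int = 256,   // ~11.89 s of audio at 22.05 kHz
    step: (Frame?) throws -> Frame
) rethrows -> [[Int32]] {
    var frames: [Frame] = []
    for _ in 0..<maxFrames {
        let next = try step(frames.last)
        if next.isEOS { break }
        frames.append(next)
    }
    return frames.map(\.codebooks)  // 8×N codes for nanocodec_decoder
}
```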

Public API

let manager = try await MagpieTtsManager.downloadAndCreate(
    languages: [.english, .spanish]
)
let result = try await manager.synthesize(
    text: "Hello | ˈ n ɛ m o ʊ | from FluidAudio.",
    speaker: .john,
    language: .english
)
let wav = AudioWAV.data(from: result.samples, sampleRate: result.sampleRate)
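
Assuming `wav` above is a Foundation `Data` holding the complete WAV container, it can be written straight to disk:

```swift
try wav.write(to: URL(fileURLWithPath: "hello.wav"))
```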

CLI

fluidaudiocli magpie download --languages en,es
fluidaudiocli magpie text --text "Bonjour." --speaker 0 --language fr --output out.wav
fluidaudiocli magpie parity --fixture fixture_en.npz
fluidaudiocli magpie tokenizer-parity --fixture fixture_en.json

Inline IPA — verified working

The |…| passthrough is native NeMo IpaG2p behavior (not added by us): segments inside pipes are looked up directly in token2id.json as whitespace-separated phonemes, bypassing G2P.

input:  "Hello | n ɛ m o ʊ | from FluidAudio."
G2P:    həˈloʊ nɛmoʊ frʌm fluɪdaːdɪoʊ.   ← injected IPA visible mid-stream

Validated end-to-end with the live HF assets (Python reference): 30 tokens → 43 frames → 2.00 s @ 3.97x RTF.
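
For intuition, segmentation amounts to splitting on pipes and treating the odd-indexed pieces as raw phonemes. This is a self-contained sketch with hypothetical types, not the PR's MagpieTokenizer code:

```swift
import Foundation

// Hypothetical helper: even-indexed pieces go through G2P, odd-indexed
// pieces are whitespace-split phonemes looked up in token2id.json.
enum TextSegment {
    case plain(String)
    case ipa([String])
}

func segmentForIpaOverride(_ text: String) -> [TextSegment] {
    text.split(separator: "|", omittingEmptySubsequences: false)
        .enumerated()
        .map { (index, piece) -> TextSegment in
            let trimmed = piece.trimmingCharacters(in: .whitespaces)
            return index.isMultiple(of: 2)
                ? .plain(trimmed)
                : .ipa(trimmed.split(separator: " ").map(String.init))
        }
}

// segmentForIpaOverride("Hello | n ɛ m o ʊ | from FluidAudio.")
// → [.plain("Hello"), .ipa(["n","ɛ","m","o","ʊ"]), .plain("from FluidAudio.")]
```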

Guardrails followed

  • No @unchecked Sendable; MagpieTtsManager, MagpieModelStore, MagpieTokenizer, MagpieSynthesizer are all actors.
  • No dummy models / synthetic data.
  • AppLogger(category: "Magpie*") throughout, no print().
  • MagpieError: Error, LocalizedError for all error paths.

Test plan

  • swift build — clean on macOS 14 / Swift 6 (only pre-existing cblas_sgemm deprecation warnings from Accelerate).
  • swift test --filter "Magpie|NpyReader" — 17 / 17 pass:
    • MagpieConstantsTests (4) — forbidden-token mask, shape relations, NeMo tokenizer-name parity, per-language file coverage
    • MagpieIpaOverrideTests (7) — |…| segmentation edge cases
    • MagpieKvCacheTests (3) — cache shape, addInputs key count, static output keys
    • NpyReaderTests (3) — fp32 parse, fp16→fp32 upcast, bad-magic rejection
  • HF assets uploaded, Python inference parity confirmed (4.60 s plain English, 2.00 s + 11.05 s with inline IPA).
  • End-to-end Swift validation: magpie download followed by magpie text --text "Hello world." --speaker 0 --language en produces an audible 22 kHz WAV.
  • Parity run (magpie parity --fixture …) against fixtures emitted by FluidInference/mobius#44; target: MAE < 1e-3 on encoder output, SNR > 40 dB on audio.
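
For reference, the two parity metrics reduce to a few lines each (my formulation; the CLI's implementation may differ):

```swift
import Foundation

// Mean absolute error between reference and Swift encoder outputs.
func mae(_ reference: [Float], _ test: [Float]) -> Float {
    precondition(reference.count == test.count && !reference.isEmpty)
    return zip(reference, test).reduce(0) { $0 + abs($1.0 - $1.1) }
        / Float(reference.count)
}

// SNR in dB: reference signal power over error power; > 40 dB means
// the Swift audio tracks the Python reference closely.
func snrDecibels(_ reference: [Float], _ test: [Float]) -> Float {
    let signal = reference.reduce(0) { $0 + $1 * $1 }
    let noise = zip(reference, test).reduce(0) {
        $0 + ($1.0 - $1.1) * ($1.0 - $1.1)
    }
    return 10 * log10(signal / max(noise, .leastNormalMagnitude))
}
```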

Companion PR

Conversion pipeline + parity-fixture emitter + manifest generator: FluidInference/mobius#44.

Out of scope (follow-ups)

  • Japanese support (OpenJTalk + MeCab dict).
  • Streaming NanoCodec via MLState conv-cache (current export is fixed-window batch; chunked-overlap fallback yields <15 dB SNR — unviable without proper state caching).
  • CI workflow magpie-benchmark.yml.
  • CFG perf optimization (currently doubles per-step decoder cost).

Ports the Magpie TTS Multilingual 357M autoregressive TTS from Python
(mobius PR #24) to Swift, covering 8 languages (EN, ES, DE, FR, IT, VI,
ZH, HI). Japanese is deferred pending OpenJTalk integration.

Highlights:
- Encoder-decoder transformer + NanoCodec vocoder, 22 kHz output.
- 5 built-in speakers; `|...|` inline-IPA override routes phoneme tokens
  directly to the tokenizer for fine-grained pronunciation control.
- 1-layer local transformer (256d) runs on CPU via Accelerate/BNNS with
  top-k + temperature sampling and audio-EOS / forbidden-token masking.
- 12-layer decoder KV cache rolled statefully across `decoder_step`
  calls; optional `decoder_prefill` fast path for the speaker context.
- Assets (4 CoreML models + constants/ + tokenizer/) auto-fetch from
  `FluidInference/magpie-tts-multilingual-357m-coreml` on first use.
- New CLI: `fluidaudiocli magpie {download,text,parity,tokenizer-parity}`.
- Public API: `MagpieTtsManager.downloadAndCreate(languages:)` actor.
- Unit tests: IPA override segmentation, KV-cache shape, NeMo tokenizer
  parity, and NPY v1 fp16/fp32 reader (17 tests, all passing).
@github-actions

Parakeet EOU Benchmark Results ✅

Status: Benchmark passed
Chunk Size: 320ms
Files Tested: 100/100

Performance Metrics

| Metric | Value | Description |
| --- | --- | --- |
| WER (Avg) | 7.03% | Average Word Error Rate |
| WER (Med) | 4.17% | Median Word Error Rate |
| RTFx | 12.39x | Real-time factor (higher = faster) |
| Total Audio | 470.6s | Total audio duration processed |
| Total Time | 39.1s | Total processing time |

Streaming Metrics

| Metric | Value | Description |
| --- | --- | --- |
| Avg Chunk Time | 0.039s | Average chunk processing time |
| Max Chunk Time | 0.078s | Maximum chunk processing time |
| EOU Detections | 0 | Total End-of-Utterance detections |

Test runtime: 0m59s • 04/24/2026, 11:01 PM EST

RTFx = Real-Time Factor (higher is better) • Processing includes: Model inference, audio preprocessing, state management, and file I/O

@github-actions

Kokoro TTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (634.8 KB) |

Runtime: 0m37s

Note: Kokoro TTS uses CoreML flow matching + Vocos vocoder. CI VM lacks physical ANE — performance may differ from Apple Silicon.

@devin-ai-integration (Bot) left a comment

Devin Review found 2 potential issues.

View 8 additional findings in Devin Review.


Comment on lines +120 to +124
case "--no-ipa-override":
allowIpa = false
default:
if text == nil { text = arg }
}
🔴 CLI parser missing --text flag causes README-documented syntax to synthesize wrong text

The README documents the Magpie text subcommand with --text as a named flag (e.g. swift run fluidaudiocli magpie text --text "Hello | ˈ n ɛ m o ʊ |." --speaker 0), but the MagpieCommand.runText parser at Sources/FluidAudioCLI/Commands/MagpieCommand.swift:84-124 has no case "--text": handler. The default: branch at line 123 captures "--text" as the text content, and the actual text argument is silently dropped. Any user following the README examples at README.md:625 or README.md:629 will synthesize the literal string "--text" as speech instead of the intended text.

Suggested change

 case "--no-ipa-override":
     allowIpa = false
+case "--text":
+    if i + 1 < arguments.count {
+        text = arguments[i + 1]
+        i += 1
+    }
 default:
     if text == nil { text = arg }
 }

Comment on lines +122 to +131
    // Find kth-largest threshold via partial sort.
    var indexed = truncated.enumerated().map { ($0.offset, $0.element) }
    indexed.sort { $0.1 > $1.1 }
    let threshold = indexed[topK - 1].1
    for i in 0..<truncated.count {
        if truncated[i] < threshold {
            truncated[i] = -.infinity
        }
    }
}

🟡 Top-K sampling keeps more than K tokens when logit values are tied at the threshold

In sampleTopK, the threshold is set to the K-th largest logit and values strictly below it are masked to -inf. When multiple logits share the same value as the threshold, all of them survive, potentially keeping significantly more than K candidates. This diverges from the Python reference which uses torch.topk to select exactly K values. For example, if many logits cluster around the same value near the K boundary, the effective sampling set grows, diluting the probability distribution. In extreme cases (e.g., all logits equal), no tokens would be masked at all despite topK=80 and vocab=2024.

Prompt for agents
The sampleTopK function in MagpieSampler.swift uses a threshold-based approach to top-K filtering that keeps all values >= the K-th largest value. This means ties at the threshold boundary cause more than K tokens to survive. The Python reference uses torch.topk which returns exactly K values (arbitrary tie-breaking). To match the reference behavior, after sorting the indexed array (line 123-124), only keep the first topK entries by index. For example, collect the indices of the top-K entries from the sorted indexed array into a Set, then mask all indices NOT in that set to -.infinity. This ensures exactly K tokens survive regardless of ties.
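
A minimal sketch of that fix (my rendering of the prompt above, not Devin's code):

```swift
// Keep exactly topK survivors (arbitrary tie-breaking, matching
// torch.topk) by masking every index outside the top-K set.
func maskToExactTopK(_ logits: inout [Float], topK: Int) {
    guard topK < logits.count else { return }
    let kept = Set(
        logits.indices
            .sorted { logits[$0] > logits[$1] }
            .prefix(topK)
    )
    for i in logits.indices where !kept.contains(i) {
        logits[i] = -.infinity
    }
}
```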

@github-actions

Qwen3-ASR int8 Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Transcription pipeline | ✅ |
| Decoder size | 571 MB (vs 1.1 GB f32) |

Performance Metrics

| Metric | CI Value | Expected on Apple Silicon |
| --- | --- | --- |
| Median RTFx | 0.06x | ~2.5x |
| Overall RTFx | 0.06x | ~2.5x |

Runtime: 3m12s

Note: CI VM lacks physical GPU — CoreML MLState (macOS 15) KV cache produces degraded results on virtualized runners. On Apple Silicon: ~1.3% WER / 2.5x RTFx.

@github-actions

Sortformer High-Latency Benchmark Results

ES2004a Performance (30.4s latency config)

| Metric | Value | Target | Status |
| --- | --- | --- | --- |
| DER | 33.4% | <35% | ✅ |
| Miss Rate | 24.4% | - | - |
| False Alarm | 0.2% | - | - |
| Speaker Error | 8.8% | - | - |
| RTFx | 9.7x | >1.0x | ✅ |
| Speakers | 4/4 | - | - |

Sortformer High-Latency • ES2004a • Runtime: 3m 12s • 2026-04-25T03:04:58.885Z

@github-actions

ASR Benchmark Results ✅

Status: All benchmarks passed

Parakeet v3 (multilingual)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.57% | 0.00% | 5.38x | ✅ |
| test-other | 1.96% | 0.00% | 3.49x | ✅ |

Parakeet v2 (English-optimized)

| Dataset | WER Avg | WER Med | RTFx | Status |
| --- | --- | --- | --- | --- |
| test-clean | 0.80% | 0.00% | 5.11x | ✅ |
| test-other | 1.00% | 0.00% | 3.37x | ✅ |

Streaming (v3)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.63x | Streaming real-time factor |
| Avg Chunk Time | 1.453s | Average time to process each chunk |
| Max Chunk Time | 1.566s | Maximum chunk processing time |
| First Token | 1.729s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming (v2)

| Metric | Value | Description |
| --- | --- | --- |
| WER | 0.00% | Word Error Rate in streaming mode |
| RTFx | 0.61x | Streaming real-time factor |
| Avg Chunk Time | 1.470s | Average time to process each chunk |
| Max Chunk Time | 1.839s | Maximum chunk processing time |
| First Token | 1.535s | Latency to first transcription token |
| Total Chunks | 31 | Number of chunks processed |

Streaming tests use 5 files with 0.5s chunks to simulate real-time audio streaming

25 files per dataset • Test runtime: 6m16s • 04/24/2026, 11:07 PM EST

RTFx = Real-Time Factor (higher is better) • Calculated as: Total audio duration ÷ Total processing time
Processing time includes: Model inference on Apple Neural Engine, audio preprocessing, state resets between files, token-to-text conversion, and file I/O
Example: RTFx of 2.0x means 10 seconds of audio processed in 5 seconds (2x faster than real-time)

Expected RTFx Performance on Physical M1 Hardware:

• M1 Mac: ~28x (clean), ~25x (other)
• CI shows ~0.5-3x due to virtualization limitations

Testing methodology follows HuggingFace Open ASR Leaderboard

@github-actions

PocketTTS Smoke Test ✅

| Check | Result |
| --- | --- |
| Build | ✅ |
| Model download | ✅ |
| Model load | ✅ |
| Synthesis pipeline | ✅ |
| Output WAV | ✅ (168.8 KB) |

Runtime: 0m49s

Note: PocketTTS uses CoreML MLState (macOS 15) KV cache + Mimi streaming state. CI VM lacks physical GPU — audio quality and performance may differ from Apple Silicon.

@github-actions

VAD Benchmark Results

Performance Comparison

| Dataset | Accuracy | Precision | Recall | F1-Score | RTFx | Files |
| --- | --- | --- | --- | --- | --- | --- |
| MUSAN | 92.0% | 86.2% | 100.0% | 92.6% | 642.4x faster | 50 |
| VOiCES | 92.0% | 86.2% | 100.0% | 92.6% | 737.7x faster | 50 |

Dataset Details

  • MUSAN: Music, Speech, and Noise dataset - standard VAD evaluation
  • VOiCES: Voices Obscured in Complex Environmental Settings - tests robustness in real-world conditions

✅: Average F1-Score above 70%

@github-actions

Offline VBx Pipeline Results

Speaker Diarization Performance (VBx Batch Mode)

Optimal clustering with Hungarian algorithm for maximum accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 10.4% | <20% | ✅ | Diarization Error Rate (lower is better) |
| RTFx | 11.70x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Offline VBx Pipeline Timing Breakdown

Time spent in each stage of batch diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 12.519 | 14.0 | Fetching diarization models |
| Model Compile | 5.365 | 6.0 | CoreML compilation |
| Audio Load | 0.093 | 0.1 | Loading audio file |
| Segmentation | 22.379 | 25.0 | VAD + speech detection |
| Embedding | 89.307 | 99.6 | Speaker embedding extraction |
| Clustering (VBx) | 0.153 | 0.2 | Hungarian algorithm + VBx clustering |
| Total | 89.667 | 100 | Full VBx pipeline |

Speaker Diarization Research Comparison

Offline VBx achieves competitive accuracy with batch processing

| Method | DER | Mode | Description |
| --- | --- | --- | --- |
| FluidAudio (Offline) | 10.4% | VBx Batch | On-device CoreML with optimal clustering |
| FluidAudio (Streaming) | 17.7% | Chunk-based | First-occurrence speaker mapping |
| Research baseline | 18-30% | Various | Standard dataset performance |

Pipeline Details:

  • Mode: Offline VBx with Hungarian algorithm for optimal speaker-to-cluster assignment
  • Segmentation: VAD-based voice activity detection
  • Embeddings: WeSpeaker-compatible speaker embeddings
  • Clustering: PowerSet with VBx refinement
  • Accuracy: Higher than streaming due to optimal post-hoc mapping

🎯 Offline VBx Test • AMI Corpus ES2004a • 1049.0s meeting audio • 111.8s processing • Test runtime: 1m 55s • 04/24/2026, 11:11 PM EST

@github-actions

Speaker Diarization Benchmark Results

Speaker Diarization Performance

Evaluating "who spoke when" detection accuracy

| Metric | Value | Target | Status | Description |
| --- | --- | --- | --- | --- |
| DER | 15.1% | <30% | ✅ | Diarization Error Rate (lower is better) |
| JER | 24.9% | <25% | ✅ | Jaccard Error Rate |
| RTFx | 25.81x | >1.0x | ✅ | Real-Time Factor (higher is faster) |

Diarization Pipeline Timing Breakdown

Time spent in each stage of speaker diarization

| Stage | Time (s) | % | Description |
| --- | --- | --- | --- |
| Model Download | 10.462 | 25.7 | Fetching diarization models |
| Model Compile | 4.484 | 11.0 | CoreML compilation |
| Audio Load | 0.069 | 0.2 | Loading audio file |
| Segmentation | 12.192 | 30.0 | Detecting speech regions |
| Embedding | 20.320 | 50.0 | Extracting speaker voices |
| Clustering | 8.128 | 20.0 | Grouping same speakers |
| Total | 40.658 | 100 | Full pipeline |

Speaker Diarization Research Comparison

Research baselines typically achieve 18-30% DER on standard datasets

| Method | DER | Notes |
| --- | --- | --- |
| FluidAudio | 15.1% | On-device CoreML |
| Research baseline | 18-30% | Standard dataset performance |

Note: RTFx shown above is from GitHub Actions runner. On Apple Silicon with ANE:

  • M2 MacBook Air (2022): runs at ~150x RTFx
  • Performance scales with Apple Neural Engine capabilities

🎯 Speaker Diarization Test • AMI Corpus ES2004a • 1049.0s meeting audio • 40.6s diarization time • Test runtime: 2m 22s • 04/24/2026, 11:14 PM EST
