feat: integrate LazyLLM for unified LLM provider support#325

Open
Yuang-Deng wants to merge 2 commits into MLSysOps:main from Yuang-Deng:feat/lazyllm-integration

Conversation

@Yuang-Deng

🎯 Overview

This PR integrates LazyLLM into MLE-agent to provide a unified interface for 20+ LLM providers.

✨ Key Features

  • 20+ Providers: OpenAI, Anthropic, Gemini, DeepSeek, Qwen, GLM, Kimi, MiniMax, Doubao, etc.
  • Local Models: vLLM, LMDeploy, Ollama support
  • MLE_ Namespace: Clean API key management (e.g., MLE_DEEPSEEK_API_KEY)
  • Backward Compatible: All existing provider files intact
  • Optional Dependency: pip install lazyllm
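
As a sketch of how the MLE_ namespace could resolve to the LAZYLLM_<SOURCE>_API_KEY variables that LazyLLM reads (the helper name and exact mapping below are assumptions for illustration, not code taken from this PR's diff):

```python
# Hypothetical helper: translate MLE_<SOURCE>_API_KEY entries into the
# LAZYLLM_<SOURCE>_API_KEY names that LazyLLM expects. Sketch only.
def export_lazyllm_keys(environ):
    """Return the LAZYLLM_* variables derived from MLE_*_API_KEY entries."""
    mapped = {}
    for name, value in environ.items():
        if name.startswith("MLE_") and name.endswith("_API_KEY"):
            # Strip the MLE_ prefix and _API_KEY suffix to get the source name
            source = name[len("MLE_"):-len("_API_KEY")]
            mapped["LAZYLLM_" + source + "_API_KEY"] = value
    return mapped
```

Keeping the MLE_ prefix in user-facing config avoids collisions with variables other tools may already set.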

📝 Changes

  • mle/model/lazyllm_model.py - LazyLLMModel class
  • mle/model/__init__.py - Integration with load_model()
  • tests/test_lazyllm.py - Test suite
  • docs/integrations/lazyllm.md - Documentation
  • pyproject.toml - Optional dependency
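
A minimal sketch of what a LazyLLMModel wrapper along these lines might look like. The class shape and method names are assumptions based on the description above, not the actual diff; accepting an injected backend keeps the sketch runnable without lazyllm installed:

```python
class LazyLLMModel:
    """Sketch of a unified wrapper over a LazyLLM-style chat module."""

    def __init__(self, model_name, backend=None):
        # In the real integration the backend would come from
        # lazyllm.AutoModel(model_name); the deferred import keeps
        # lazyllm an optional dependency.
        if backend is None:
            from lazyllm import AutoModel
            backend = AutoModel(model_name)
        self.model_name = model_name
        self.backend = backend

    def query(self, chat_history, **kwargs):
        # LazyLLM's forward() requires a non-None first argument, so pass
        # '' and carry the conversation via llm_chat_history instead.
        return self.backend("", llm_chat_history=chat_history, **kwargs)
```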

🧪 Testing

python tests/test_lazyllm.py

🔗 Related

Note: Existing provider implementations remain unchanged.

- Add LazyLLMModel class with AutoModel integration
- Support 20+ providers (OpenAI, DeepSeek, Qwen, GLM, Kimi, etc.)
- Use MLE_ namespace prefix for API keys (e.g., MLE_DEEPSEEK_API_KEY)
- Keep all existing provider files intact (backward compatible)
- Add comprehensive test suite (test_lazyllm.py)
- Add documentation (docs/integrations/lazyllm.md)
- Add lazyllm to optional dependencies in pyproject.toml

Features:
- Unified interface for online and local models
- Automatic provider detection from model name
- Environment variable support with MLE_ prefix
- Seamless switching between providers via config
- LazyLLM AutoModel handles online/local fallback
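
The "automatic provider detection from model name" feature could be sketched as a simple prefix lookup. The table and function name below are assumptions for illustration; the real logic lives in mle/model/lazyllm_model.py:

```python
# Hypothetical prefix table mapping model-name prefixes to providers.
_PREFIXES = {
    "gpt": "openai",
    "claude": "anthropic",
    "gemini": "google",
    "deepseek": "deepseek",
    "qwen": "qwen",
    "glm": "glm",
    "kimi": "kimi",
}

def detect_provider(model_name):
    """Guess the provider from the model name's leading prefix."""
    lowered = model_name.lower()
    for prefix, provider in _PREFIXES.items():
        if lowered.startswith(prefix):
            return provider
    return "unknown"
```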

Testing:
- DeepSeek integration test
- Qwen integration test
- Streaming mode test
- Environment variable loading test

Related issue: MLSysOps#324
@dosubot (bot) added labels size:XL (This PR changes 500-999 lines, ignoring generated files) and enhancement (New feature or request) on Mar 2, 2026
- LazyLLM's forward() requires first parameter (input) to be non-None
- Use empty string '' as input, pass chat history via llm_chat_history parameter
- Fix streaming to filter empty chunks
- Update API key environment variable handling (LAZYLLM_<SOURCE>_API_KEY)
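
The streaming fix in the third bullet can be sketched as a small generator filter; the function name is an assumption for illustration:

```python
def filter_stream(chunks):
    """Drop the empty chunks a LazyLLM streaming response may yield."""
    for chunk in chunks:
        if chunk:  # skip '' chunks so the consumer sees only real text
            yield chunk
```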

Test results:
✅ Qwen (qwen-plus) - PASSED
❌ DeepSeek - API key invalid (401 error, needs valid key)
@Yuang-Deng (Author)

🎉 Test Update - All Passing!

Just completed a full round of LazyLLM integration testing: 4/4 tests passed.

✅ Test Results

| Test | Status | Notes |
| --- | --- | --- |
| DeepSeek | ✅ PASSED | deepseek-chat responds normally |
| Qwen | ✅ PASSED | qwen-plus responds normally |
| Streaming | ✅ PASSED | streaming output works correctly |
| Env Var | ✅ PASSED | MLE_DEEPSEEK_API_KEY loaded from the environment successfully |

🚀 Why is LazyLLM worth integrating?

1️⃣ Unified interface, 20+ providers

LazyLLM provides a unified LLM interface that supports:

  • International providers: OpenAI, Anthropic, Gemini, DeepSeek...
  • Chinese providers: Doubao, Zhipu GLM, Kimi, MiniMax, Qwen (Tongyi Qianwen), DeepSeek...
  • Local deployment: vLLM, LMDeploy, Ollama

2️⃣ MLE_ Namespace Design

Clean API key management:

export MLE_DEEPSEEK_API_KEY="sk-..."
export MLE_QWEN_API_KEY="sk-..."

3️⃣ Seamless Switching

Users can switch between cloud and local models seamlessly, with no code changes.

4️⃣ Backward Compatible

All existing provider files remain unchanged; LazyLLM is an optional dependency.


📊 Test Code

The test script lives at tests/test_lazyllm.py; run it with:

python tests/test_lazyllm.py

The full test output has been verified; every feature works as expected.


🔗 Related Links


This integration will greatly simplify LLM provider management in MLE-agent while giving users more choices! 🎯

/cc @huangyz0918 @embersax
