issue/349 - Support GLM4 model #370
Merged
Conversation
wooway777
reviewed
May 12, 2026
Collaborator
|
Suggest changing the Chinese comments to concise English comments, since some platforms do not display Chinese characters fully. |
pengcheng888
approved these changes
May 13, 2026
wooway777
requested changes
May 13, 2026
Author
Collaborator
Collaborator
|
Add the --ccl=y option |
Refactored according to the review comments in PR InfiniTensor#352 and the committer's review comments on this PR. Suggested points of change for reference:
1. Adding a new model should not modify existing model code; do not change the code under llama_legacy.
2. Drop the changes to config_factory.cpp and rank_worker.cpp.
3. Follow the existing code (outside the llama_legacy folder); the mlp, model, and causal_lm parts should be able to reuse the existing modules.
4. Add the following files under the glm4 folder: glm4_decoder_layer.cpp/hpp and glm4_for_causal_lm.cpp/hpp.
5. In csrc/models/glm4/glm4_for_causal_lm.cpp, define a dedicated Glm4ForCausalLM class; do not use infinilm::models::llama::LlamaForCausalLM.
6. RoPE type handling: extend the get_rope function in csrc/layers/rotary_embedding/rotary_embedding.cpp to handle the GPT_J type and the "partial_rotary_factor" hyperparameter.
7. Design the weights remap with a dictionary pattern.
8. Support both attention_static and attention_paged.
9. Change Chinese comments to English comments.
10. Reuse the existing module via `using Glm4ForCausalLM = infinilm::layers::causal_lm_templates::TextCausalLM;`.
11. Delete the reset_cache overload.
12. Verify tp parallelism.
Author
wooway777
approved these changes
May 13, 2026
Collaborator
|
Thank you, Mr. Hua! |
1. Screenshot of the test_infer.py model test:
Command: python examples/test_infer.py --device nvidia --model=/data/rubik/models/GLM-4-9B-0414/
2. Starting the inference server
Command: python python/infinilm/server/inference_server.py --device nvidia --model=/data/rubik/models/GLM-4-9B-0414/
Client command: python scripts/test_perf.py --verbose
Partial screenshot of the client output:
Also, regarding the modified code in csrc/layers/rotary_embedding/rotary_embedding.cpp: the algo parameter defaults to infinicore::nn::RoPE::Algo algo = infinicore::nn::RoPE::Algo::GPT_NEOX, so the logic stays the same as before.


Ran two previously working models to verify:
Verified again after modifying per the review comments, with enable_paged added

Test command: python examples/test_infer.py --device nvidia --model=/data/rubik/models/GLM-4-9B-0414/ --enable-paged-attn
Output:
Model server test: python python/infinilm/server/inference_server.py --device nvidia --model=/data/rubik/models/GLM-4-9B-0414/ --enable-paged-attn

Partial screenshot of the client output:

tp parallelism test, command: CUDA_VISIBLE_DEVICES=0,1 python examples/test_infer.py --device nvidia --model=/data/rubik/models/GLM-4-9B-0414/ --enable-paged-attn --tp=2 --prompt="山东最高的山是?"


Output:
Command: CUDA_VISIBLE_DEVICES=0,1 python examples/test_infer.py --device nvidia --model=/data/rubik/models/GLM-4-9B-0414/ --enable-paged-attn --tp=1 --prompt="山东最高的山是?"
Output:
tp parallel inference server start command: CUDA_VISIBLE_DEVICES=0,1 python python/infinilm/server/inference_server.py --device nvidia --model=/data/rubik/models/GLM-4-9B-0414/ --enable-paged-attn --tp=2

