- May 6, 2025: 🎉 Code released!
Coming soon!
This repository supplies only the software‑module code; the hardware components are not available for remote testing.
The quick start uses Llama2-7B as an example; to use another model, change the base model path in the scripts.
```shell
pip install -r requirement.txt
cd Generator
bash quick_start.sh
cd ../Pruner
bash quick_start.sh
```
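The quick-start scripts above default to Llama2-7B. A minimal sketch of pointing them at a different base model — note that the variable name `BASE_MODEL` and the Hugging Face model id are assumptions; inspect `quick_start.sh` for the actual path variable it reads:

```shell
# Hypothetical override: BASE_MODEL is an assumed variable name; check
# quick_start.sh for the real base-model path setting before relying on it.
BASE_MODEL="${BASE_MODEL:-meta-llama/Llama-2-7b-hf}"
echo "Using base model: ${BASE_MODEL}"
```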
You can run the following commands to finetune the model on Alpaca.

```shell
cd ../Pruner
bash fintune.sh
```
You can run the following commands to evaluate Llama2-7B on BBH (zero-shot), MMLU (3-shot), PPL, and Commonsense (zero-shot). You first need to download LLaMA-Factory-main and place it in the Pruner folder.

```shell
cd ../Pruner
bash eval.sh
```
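Since `eval.sh` expects LLaMA-Factory-main inside the Pruner folder, a small sanity check like the following may help before launching the evaluation (the folder name follows the text above; obtain the code from the upstream LLaMA-Factory repository yourself):

```shell
# Sanity check: eval.sh expects LLaMA-Factory-main under Pruner/
# (folder name per the README; download LLaMA-Factory separately).
if [ -d "Pruner/LLaMA-Factory-main" ]; then
  echo "LLaMA-Factory-main found"
else
  echo "Download LLaMA-Factory-main into Pruner/ first" >&2
fi
```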
- LLM evaluation: lm-evaluation-harness
- LLaMA: https://github.com/facebookresearch/llama
- Vicuna: https://github.com/lm-sys/FastChat
- Peft: https://github.com/huggingface/peft
- Alpaca-lora: https://github.com/tloen/alpaca-lora
Coming soon!
