Releases · InternLM/xtuner
XTuner Release V0.1.6
What's Changed
Full Changelog: v0.1.5...v0.1.6
XTuner Release V0.1.5
What's Changed
- [Fix] Rename internlm-chat-20b by @LZHgrla in #131
- [Fix] Fix CPU OOM during the merge step by @LZHgrla in #133
- [Fix] Add `--offload-folder` for merge and chat by @LZHgrla in #140
- [Feature] Support to remove history for chat script by @LZHgrla in #144
- [Docs] add conda env create by @KevinNuNu in #147
- [Fix] Fix activation checkpointing bug by @LZHgrla in #159
- [Refactor] Refactor the preprocess of dataset by @LZHgrla in #163
- [Feature] Support deepspeed for HF trainer by @LZHgrla in #164
- [Feature] Support the fine-tuning of MSAgent dataset by @LZHgrla in #156
- [Fix] Fix bugs on `traverse_dict` by @LZHgrla in #141
- [Doc] Update `chat.md` by @LZHgrla in #168
- bump version to 0.1.5 by @LZHgrla in #171
New Contributors
- @KevinNuNu made their first contribution in #147
Full Changelog: v0.1.4...v0.1.5
XTuner Release V0.1.4
XTuner Release V0.1.3
What's Changed
- [Feature] Add Baichuan2 7B-chat, 13B-base, 13B-chat by @LZHgrla in #103
- [Fix] Use `token_id` instead of `token` for `encode_fn` & set eval mode before generate by @LZHgrla in #107
- [Feature] Support log processed dataset & fix doc by @HIT-cwh in #101
- [Fix] move toy data by @HIT-cwh in #108
- bump version to 0.1.3 by @HIT-cwh in #109
Full Changelog: v0.1.2...v0.1.3
XTuner Release V0.1.2
What's Changed
- [Doc] Fix dataset docs by @HIT-cwh in #87
- [Doc] Fix readme by @HIT-cwh in #92
- [Improve] Add ZeRO2-offload configs by @LZHgrla in #94
- [Improve] Redesign convert tools by @LZHgrla in #96
- [Fix] fix generation config by @HIT-cwh in #98
- [Feature] Support Baichuan2 models by @LZHgrla in #102
- bump version to 0.1.2 by @LZHgrla in #100
Full Changelog: v0.1.1...v0.1.2
XTuner Release V0.1.1
What's Changed
- [Doc] Update WeChat image by @LZHgrla in #74
- [Doc] Modify install commands for DeepSpeed integration by @LZHgrla in #75
- Add bot: Create .owners.yml by @del-zhenwu in #81
- [Improve] Add several InternLM-7B full parameters fine-tuning configs by @LZHgrla in #84
- [Feature] Add starcoder example by @HIT-cwh in #83
- [Doc] Add data_prepare.md docs by @LZHgrla in #82
- bump version to 0.1.1 by @HIT-cwh in #85
New Contributors
- @del-zhenwu made their first contribution in #81
Full Changelog: v0.1.0...v0.1.1
XTuner Release V0.1.0
Changelog
v0.1.0 (2023.08.30)
XTuner is released! 🔥🔥🔥
Highlights
- XTuner supports LLM fine-tuning on consumer-grade GPUs. The minimum GPU memory required for 7B LLM fine-tuning is only 8GB.
- XTuner supports various LLMs, datasets, algorithms and training pipelines.
- Several fine-tuned adapters are released simultaneously, covering a variety of applications such as the colorist LLM, the plugins-based LLM, and many more. For further details, please visit XTuner on HuggingFace!