Support mtp fp8 #4572

Merged

lvhan028 merged 2 commits into InternLM:main from RunningLeon:mtp-fp8 on May 11, 2026

Conversation

RunningLeon (Collaborator) commented on May 8, 2026

Motivation

| dataset | version | metric | mode | Qwen3.5-35B-A3B-pt-mtp4-fp8-bs256 |
| --- | --- | --- | --- | --- |
| General Reasoning | - | - | - | - |
| GPQA_diamond | ae378a | accuracy (8 runs average) | gen | 84.66 |
| GPQA_diamond | ae378a | G-Pass@8_0.0 | gen | 93.94 |
| aime2025_repeat_32_CompassAcademic | 1f81cc | accuracy (32 runs average) | gen | 92.71 |

Modification

Please briefly describe what modification is made in this PR.

BC-breaking (Optional)

Does the modification introduce changes that break the backward-compatibility of the downstream repositories?
If so, please describe how it breaks the compatibility and how the downstream projects should modify their code to keep compatibility with this PR.

Use cases (Optional)

If this PR introduces a new feature, it is better to list some use cases here, and update the documentation.

Checklist

  1. Pre-commit or other linting tools are used to fix the potential lint issues.
  2. The modification is covered by complete unit tests. If not, please add more unit tests to ensure the correctness.
  3. If the modification has a dependency on downstream projects of a newer version, this PR should be tested with all supported versions of downstream projects.
  4. The documentation has been modified accordingly, like docstring or example tutorials.

Copilot AI review requested due to automatic review settings on May 8, 2026 11:00
Copilot AI (Contributor) left a comment

Pull request overview

This PR updates the PyTorch speculative-decoding (MTP draft) path to better support FP8-weight models by propagating model_format into spec-decode draft model construction and aligning Qwen3.5 MTP module behavior with FP8 checkpoint expectations.

Changes:

  • Propagate engine_config.model_format into SpecDecodeConfig.from_config() so draft/MTP models can be built with the same weight-quantization format.
  • Adjust Qwen3.5 MTP module prefixing and disable quantization for the MTP fc projection to match FP8 model configs.
  • Ensure spec-decode cache config inherits quant_policy from the target cache config.
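A minimal sketch of the first and third points, assuming hypothetical field names and a simplified `SpecDecodeConfig.from_config` signature (the real classes in lmdeploy/pytorch/config.py differ in detail):

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class CacheConfig:
    """Stand-in for lmdeploy's cache config; the fields here are assumptions."""
    block_size: int = 64
    quant_policy: int = 0  # 0 = no KV-cache quant; 4/8 = int4/int8 KV cache


@dataclass
class SpecDecodeConfig:
    """Hypothetical shape of the spec-decode draft-model config."""
    num_speculative_tokens: int = 4
    model_format: Optional[str] = None  # e.g. 'fp8', matching the target model
    cache_config: Optional[CacheConfig] = None

    @classmethod
    def from_config(cls, engine_config, target_cache_config: CacheConfig) -> 'SpecDecodeConfig':
        # 1) Build the draft/MTP model with the same weight format as the
        #    target model, instead of dropping engine_config.model_format.
        # 2) The draft KV cache inherits the target cache's quant_policy.
        return cls(
            model_format=engine_config.model_format,
            cache_config=CacheConfig(
                block_size=target_cache_config.block_size,
                quant_policy=target_cache_config.quant_policy,
            ),
        )
```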

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 1 comment.

| File | Description |
| --- | --- |
| lmdeploy/pytorch/models/qwen3_5_mtp.py | Adjust MTP module prefixing and MTP fc quantization behavior for FP8 compatibility. |
| lmdeploy/pytorch/engine/config_builder.py | Forward model_format into spec-decode draft model config building. |
| lmdeploy/pytorch/config.py | Thread model_format through SpecDecodeConfig.from_config and carry quant_policy into spec-decode cache config. |
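At the call site in config_builder.py, the forwarding then plausibly reduces to passing the engine config straight through; a usage sketch against the hypothetical classes above (not the builder's actual code):

```python
from types import SimpleNamespace

engine_config = SimpleNamespace(model_format='fp8')  # stand-in engine config
target_cache_config = CacheConfig(block_size=64, quant_policy=8)

spec_config = SpecDecodeConfig.from_config(engine_config, target_cache_config)
assert spec_config.model_format == 'fp8'             # weight format propagated
assert spec_config.cache_config.quant_policy == 8    # KV quant policy inherited
```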

Comment on lines +103 to 107:

```python
# do not quant fc as in https://huggingface.co/Qwen/Qwen3.5-27B-FP8/blob/main/config.json#L403
# and https://huggingface.co/Qwen/Qwen3.5-35B-A3B-FP8/blob/main/config.json#L409
self.fc = build_colwise_linear(
    config.hidden_size * 2,
    config.hidden_size,
```
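Given the in-code comment, "disable quantization for the MTP fc projection" presumably means not handing the model's quant config to the builder. A hedged completion of the truncated call (only the lines shown in the diff above are from the PR; the trailing keyword arguments are assumptions about build_colwise_linear's parameters):

```python
self.fc = build_colwise_linear(
    config.hidden_size * 2,   # concat of previous hidden state and token embedding
    config.hidden_size,
    bias=False,               # assumption: the MTP fc projection carries no bias
    dtype=dtype,
    device=device,
    quant_config=None,        # assumption: None builds a plain (unquantized) linear
)
```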
RunningLeon changed the title from [WIP]: Support mtp fp8 to Support mtp fp8 on May 11, 2026
lvhan028 self-requested a review on May 11, 2026 06:36
lvhan028 merged commit f5a9860 into InternLM:main on May 11, 2026
5 checks passed