Implemented a temporary patch for MiniMax-M2, which broke for some quants in builds after 2025-12-10.
@@ -80,6 +80,9 @@ RUN python3 use_existing_torch.py && \
COPY fastsafetensors.patch .
RUN patch -p1 < fastsafetensors.patch
# TEMPORARY PATCH for broken MiniMax M2 - tracking https://github.com/vllm-project/vllm/issues/30445 and https://github.com/vllm-project/vllm/pull/30389
RUN curl -L https://patch-diff.githubusercontent.com/raw/vllm-project/vllm/pull/30389.diff | git apply
# Final Build
# Uses --no-build-isolation to respect the pre-installed Torch/FlashInfer
RUN pip install --no-build-isolation . -v
@@ -12,6 +12,11 @@ The Dockerfile builds from the main branch of VLLM, so depending on when you run
## CHANGELOG
### 2025-12-11
Applied a patch to fix MiniMax-M2, broken for some quants after [this commit](https://github.com/vllm-project/vllm/commit/d017bceb08eaac7bae2c499124ece737fb4fb22b), until [this PR](https://github.com/vllm-project/vllm/pull/30389) is merged.
See [this issue](https://github.com/vllm-project/vllm/issues/30445) for details.
### 2025-12-05
Added `build-and-copy.sh` for convenience.
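For reference, the temporary-patch step in the Dockerfile (download a PR's `.diff` and pipe it to `git apply`) can be exercised locally. This is a hedged sketch: the throwaway repo, `model.py` file, and inline diff are stand-ins for the real vLLM checkout and PR 30389's diff, and the `--check` dry run is added so a stale patch fails loudly rather than silently half-applying.

```shell
set -euo pipefail

# Stand-in for the vLLM source tree inside the image: a throwaway repo
# with one file, so the apply flow can be run end to end offline.
repo=$(mktemp -d)
cd "$repo"
git init -q .
echo "broken" > model.py

# Stand-in for `curl -L .../pull/30389.diff` - a real build fetches the
# diff from patch-diff.githubusercontent.com instead of writing it inline.
cat > fix.diff <<'EOF'
diff --git a/model.py b/model.py
--- a/model.py
+++ b/model.py
@@ -1 +1 @@
-broken
+fixed
EOF

git apply --check fix.diff   # dry run: non-zero exit if the patch no longer applies
git apply fix.diff           # patch the working tree
cat model.py                 # prints "fixed"
```

Once the upstream PR is merged, the `git apply --check` dry run is what would make the Docker build fail early instead of shipping a doubly-patched tree.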