feat: Add recipe-based one-click model deployment system

Introduces a YAML recipe system for simplified model deployment:

- run-recipe.py: Main script handling build, download, and launch
- run-recipe.sh: Bash wrapper for dependency management
- recipes/: Pre-configured recipes for common models
  - glm-4.7-flash-awq.yaml: GLM-4.7-Flash with AWQ quantization
  - glm-4.7-nvfp4.yaml: GLM-4.7 with NVFP4 (cluster-only)
  - minimax-m2-awq.yaml: MiniMax M2 with AWQ
  - openai-gpt-oss-120b.yaml: OpenAI GPT-OSS 120B with MXFP4

Key features:
- --discover auto-discovers cluster nodes and saves them to .env
- Nodes are loaded from .env automatically on subsequent runs
- cluster_only flag for models requiring a multi-node setup (sketched below)
- build_args field for Dockerfile selection (--pre-tf, --exp-mxfp4)
- Solo mode auto-strips --distributed-executor-backend ray
- --setup flag for the full build + download + run workflow
- --dry-run to preview execution without running
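
A minimal recipe sketch showing cluster_only and build_args in context
(the model name, container, and values below are illustrative, not a
shipped recipe):

  recipe_version: "1"
  name: Example Cluster Model
  model: org/example-model
  container: vllm-node
  cluster_only: true        # requires nodes discovered via --discover
  build_args:
    - --pre-tf              # Dockerfile variant passed to build-and-copy.sh
  defaults:
    tensor_parallel: 4
  command: |
    vllm serve org/example-model \
      --tensor-parallel-size {tensor_parallel} \
      --distributed-executor-backend ray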

Usage:
  ./run-recipe.sh --discover           # Find and save cluster nodes
  ./run-recipe.sh glm-4.7-flash-awq --solo --setup
  ./run-recipe.sh glm-4.7-nvfp4 --setup  # Uses nodes from .env
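  ./run-recipe.sh openai-gpt-oss-120b --dry-run  # Preview without executing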
Author: Raphael Amorim
Date:   2026-02-03 15:32:28 -05:00
Parent: 751bc5a47a
Commit: 30f16f1d4e
6 changed files with 1587 additions and 0 deletions

recipes/openai-gpt-oss-120b.yaml

@@ -0,0 +1,52 @@
# Recipe: OpenAI GPT-OSS 120B
# OpenAI's open source 120B MoE model with MXFP4 quantization support
recipe_version: "1"
name: OpenAI GPT-OSS 120B
description: vLLM serving openai/gpt-oss-120b with MXFP4 quantization and FlashInfer
# HuggingFace model to download (optional, for --download-model)
model: openai/gpt-oss-120b
# Container image to use
container: vllm-node-mxfp4
# Build arguments for build-and-copy.sh
build_args:
- --exp-mxfp4
# No mods required for this model
mods: []
# Default settings (can be overridden via CLI)
defaults:
port: 8888
host: 0.0.0.0
  tensor_parallel: 2          # number of GPUs the model is sharded across
gpu_memory_utilization: 0.70
max_num_batched_tokens: 8192
# Environment variables to set in the container
env:
VLLM_USE_FLASHINFER_MOE_MXFP4_MXFP8: "1"
# The vLLM serve command template
# Uses MXFP4 quantization for memory efficiency
command: |
vllm serve openai/gpt-oss-120b \
--tool-call-parser openai \
--reasoning-parser openai_gptoss \
--enable-auto-tool-choice \
--tensor-parallel-size {tensor_parallel} \
--distributed-executor-backend ray \
--gpu-memory-utilization {gpu_memory_utilization} \
--enable-prefix-caching \
--load-format fastsafetensors \
--quantization mxfp4 \
--mxfp4-backend CUTLASS \
--mxfp4-layers moe,qkv,o,lm_head \
--attention-backend FLASHINFER \
--kv-cache-dtype fp8 \
--max-num-batched-tokens {max_num_batched_tokens} \
--host {host} \
--port {port}
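
For reference, a minimal sketch of how a recipe's command template could be
rendered, assuming plain str.format() substitution of the {placeholder}
fields; the function and variable names are illustrative, not run-recipe.py's
actual internals:

  import yaml

  def render_command(recipe_path, solo=False, **overrides):
      """Expand a recipe's command template with defaults and CLI overrides."""
      with open(recipe_path) as f:
          recipe = yaml.safe_load(f)
      params = {**recipe.get("defaults", {}), **overrides}  # CLI overrides win
      cmd = recipe["command"].format(**params)
      if solo:
          # Solo mode drops the multi-node Ray executor flag; every remaining
          # line still ends in "\", so the shell continuation stays valid.
          cmd = "\n".join(line for line in cmd.splitlines()
                          if "--distributed-executor-backend" not in line)
      return cmd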