feat: Add recipe-based one-click model deployment system

Introduces a YAML recipe system for simplified model deployment:

- run-recipe.py: Main script handling build, download, and launch
- run-recipe.sh: Bash wrapper for dependency management
- recipes/: Pre-configured recipes for common models
  - glm-4.7-flash-awq.yaml: GLM-4.7-Flash with AWQ quantization
  - glm-4.7-nvfp4.yaml: GLM-4.7 with NVFP4 (cluster-only)
  - minimax-m2-awq.yaml: MiniMax M2 with AWQ
  - openai-gpt-oss-120b.yaml: OpenAI GPT-OSS 120B with MXFP4

Key features:
- Auto-discover cluster nodes with --discover and save them to .env
- Load nodes from .env automatically on subsequent runs
- cluster_only flag for models requiring multi-node setup
- build_args field for Dockerfile selection (--pre-tf, --exp-mxfp4)
- Solo mode auto-strips --distributed-executor-backend ray
- --setup flag for full build + download + run workflow
- --dry-run to preview execution without running

Usage:
  ./run-recipe.sh --discover           # Find and save cluster nodes
  ./run-recipe.sh glm-4.7-flash-awq --solo --setup
  ./run-recipe.sh glm-4.7-nvfp4 --setup  # Uses nodes from .env
Author: Raphael Amorim
Date: 2026-02-03 15:32:28 -05:00
Commit: 30f16f1d4e (parent 751bc5a47a)
6 changed files with 1587 additions and 0 deletions

recipes/README.md
@@ -0,0 +1,266 @@
# Recipes
Recipes provide a **one-click solution** for deploying models with pre-configured settings. Each recipe is a YAML file that specifies:
- HuggingFace model to download
- Container image and build arguments
- Required mods/patches
- Default parameters (port, host, tensor parallelism, etc.)
- Environment variables
- The vLLM serve command
## Quick Start
```bash
# List available recipes
./run-recipe.sh --list
# Run a recipe in solo mode (single node)
./run-recipe.sh glm-4.7-flash-awq --solo
# Full setup: build container + download model + run
./run-recipe.sh glm-4.7-flash-awq --solo --setup
# Run with overrides
./run-recipe.sh glm-4.7-flash-awq --solo --port 9000 --gpu-mem 0.8
# Cluster deployment
./run-recipe.sh glm-4.7-nvfp4 -n 192.168.1.10,192.168.1.11 --setup
```
## Cluster Node Discovery
The recipe runner can automatically discover cluster nodes:
```bash
# Auto-discover nodes and save to .env
./run-recipe.sh --discover
# Show current .env configuration
./run-recipe.sh --show-env
# Run recipe (uses nodes from .env automatically)
./run-recipe.sh glm-4.7-nvfp4 --setup
```
When you run `--discover`, it:
1. Scans the network for nodes with SSH access
2. Prompts you to select which nodes to include
3. Saves the configuration to `.env`
Future recipe runs will automatically use nodes from `.env` unless you specify `-n` or `--solo`.
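The sketch below illustrates that `.env` round trip in Python, the language of `run-recipe.py`. Note that `NODES` is a hypothetical key name; the actual variable names `run-recipe.py` writes are not documented here.
```python
# Sketch only: NODES is a hypothetical key name, not necessarily
# what run-recipe.py actually writes to .env.
from pathlib import Path

ENV_FILE = Path(".env")

def save_nodes(ips: list[str]) -> None:
    """Persist the node IPs selected during --discover."""
    ENV_FILE.write_text(f"NODES={','.join(ips)}\n")

def load_nodes() -> list[str]:
    """Return node IPs from .env, or [] so the runner falls back to solo."""
    if not ENV_FILE.exists():
        return []
    for line in ENV_FILE.read_text().splitlines():
        if line.startswith("NODES="):
            value = line.split("=", 1)[1]
            return [ip.strip() for ip in value.split(",") if ip.strip()]
    return []
```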
## Workflow Modes
### Solo Mode (Single Node)
```bash
# Explicitly run in solo mode
./run-recipe.sh glm-4.7-flash-awq --solo
# If no nodes configured, defaults to solo
./run-recipe.sh minimax-m2-awq
```
### Cluster Mode (Multiple Nodes)
```bash
# Specify nodes directly (first IP is head node)
./run-recipe.sh glm-4.7-nvfp4 -n 192.168.1.10,192.168.1.11 --setup
# Or use auto-discovered nodes from .env
./run-recipe.sh --discover # First time only
./run-recipe.sh glm-4.7-nvfp4 --setup
```
When using cluster mode with `--setup`:
- Container is built locally and copied to all worker nodes
- Model is downloaded locally and copied to all worker nodes
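A minimal sketch of what that copy step could look like, assuming the image is streamed with `docker save | docker load` over SSH and the model directory is mirrored with `rsync`; the actual mechanisms inside `build-and-copy.sh` and `hf-download.sh` may differ.
```python
# Assumed transport: docker save | ssh docker load for the image,
# rsync for the model files. The real scripts may work differently.
import subprocess

def copy_image(image: str, worker: str) -> None:
    # Stream the image to the worker without a registry.
    subprocess.run(f"docker save {image} | ssh {worker} docker load",
                   shell=True, check=True)

def copy_model(model_dir: str, worker: str) -> None:
    # Mirror the local model directory to the same path on the worker.
    subprocess.run(["rsync", "-az", f"{model_dir}/", f"{worker}:{model_dir}/"],
                   check=True)

def copy_to_workers(image: str, model_dir: str, workers: list[str]) -> None:
    for w in workers:  # workers = all nodes except the head
        copy_image(image, w)
        copy_model(model_dir, w)
```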
### Cluster-Only Recipes
Some models are too large to run on a single node. These recipes have `cluster_only: true` and will fail with a helpful error if you try to run them in solo mode:
```bash
$ ./run-recipe.sh glm-4.7-nvfp4 --solo
Error: Recipe 'GLM-4.7-NVFP4' requires cluster mode.
This model is too large to run on a single node.
Options:
  1. Specify nodes directly: ./run-recipe.sh glm-4.7-nvfp4 -n node1,node2
  2. Auto-discover and save: ./run-recipe.sh --discover
     Then run: ./run-recipe.sh glm-4.7-nvfp4
```
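A minimal sketch of this guard, assuming `recipe` is the dict parsed from the YAML file; the error text shown above is the authoritative output.
```python
import sys

def check_cluster_only(recipe: dict, nodes: list[str], solo: bool) -> None:
    # Refuse to launch a cluster-only recipe without worker nodes.
    if recipe.get("cluster_only") and (solo or len(nodes) < 2):
        sys.exit(f"Error: Recipe '{recipe['name']}' requires cluster mode.")
```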
## Setup Options
| Flag | Description |
|------|-------------|
| `--setup` | Full setup: build (if missing) + download (if missing) + run |
| `--build-only` | Only build/copy the container, don't run |
| `--download-only` | Only download/copy the model, don't run |
| `--force-build` | Rebuild even if container exists |
| `--force-download` | Re-download even if model exists |
| `--dry-run` | Show what would happen without executing |
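Roughly, these flags combine as in the sketch below. The exact arguments passed to `build-and-copy.sh` and `hf-download.sh` are assumptions; only the forwarding of `build_args` to `build-and-copy.sh` is documented in this README.
```python
# Sketch of the flag logic; the script invocations are assumed, not
# copied from run-recipe.py.
import subprocess

def image_exists(image: str) -> bool:
    out = subprocess.run(["docker", "images", "-q", image],
                         capture_output=True, text=True)
    return bool(out.stdout.strip())

def setup(recipe: dict, force_build: bool = False,
          build_only: bool = False, download_only: bool = False) -> bool:
    """Return True when the caller should continue on to launch."""
    if force_build or not image_exists(recipe["container"]):
        subprocess.run(["./build-and-copy.sh",
                        *recipe.get("build_args", [])], check=True)
    if build_only:
        return False
    if recipe.get("model"):
        # Assumed interface: hf-download.sh takes the HF model ID and
        # skips files that are already present unless forced.
        subprocess.run(["./hf-download.sh", recipe["model"]], check=True)
    if download_only:
        return False
    return True
```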
## Recipe Format
```yaml
# Required fields
name: Human-readable name
container: docker-image-name
command: |
  vllm serve model/name \
    --port {port} \
    --host {host}

# Optional fields
description: What this recipe does
model: org/model-name   # HuggingFace model ID for --setup downloads
cluster_only: false     # Set to true if model requires cluster mode
build_args:             # Extra args for build-and-copy.sh
  - --pre-tf            # e.g., for transformers 5.0
  - --exp-mxfp4         # e.g., for MXFP4 Dockerfile
mods:
  - mods/some-patch
defaults:
  port: 8000
  host: 0.0.0.0
  tensor_parallel: 2
  gpu_memory_utilization: 0.85
  max_model_len: 32000
env:
  SOME_VAR: "value"
```
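A minimal sketch of loading and validating this format with PyYAML, based on the required and optional fields listed above; `run-recipe.py`'s actual parser may differ.
```python
# Requires PyYAML. Required fields per the format above; optional
# fields fall back to empty values.
import yaml

REQUIRED = ("name", "container", "command")

def load_recipe(path: str) -> dict:
    with open(path) as f:
        recipe = yaml.safe_load(f)
    missing = [field for field in REQUIRED if field not in recipe]
    if missing:
        raise ValueError(f"{path}: missing required fields: {missing}")
    recipe.setdefault("cluster_only", False)
    recipe.setdefault("build_args", [])
    recipe.setdefault("mods", [])
    recipe.setdefault("defaults", {})
    recipe.setdefault("env", {})
    return recipe
```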
### Build Arguments
The `build_args` field passes flags to `build-and-copy.sh`:
| Flag | Description |
|------|-------------|
| `--pre-tf` | Use transformers 5.0 (required for GLM-4.7 models) |
| `--exp-mxfp4` | Use MXFP4 Dockerfile (for MXFP4 quantized models) |
| `--use-wheels` | Use pre-built wheels instead of building from source |
### Parameter Substitution
Use `{param_name}` in the command to substitute values from defaults or CLI overrides:
```yaml
defaults:
  port: 8000
  tensor_parallel: 2
command: |
  vllm serve my/model \
    --port {port} \
    -tp {tensor_parallel}
```
Override at runtime:
```bash
./run-recipe.sh my-recipe --port 9000 --tp 4
```
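Under the hood, substitution can be as simple as merging CLI overrides over `defaults` and calling `str.format`, as in this sketch (not necessarily `run-recipe.py`'s exact implementation):
```python
def render_command(template: str, defaults: dict, overrides: dict) -> str:
    # CLI overrides take precedence over the recipe's defaults.
    params = {**defaults, **overrides}
    return template.format(**params)

# With the recipe above, --port 9000 --tp 4 yields:
#   vllm serve my/model --port 9000 -tp 4
print(render_command("vllm serve my/model --port {port} -tp {tensor_parallel}",
                     {"port": 8000, "tensor_parallel": 2},
                     {"port": 9000, "tensor_parallel": 4}))
```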
## CLI Reference
```
Usage: ./run-recipe.sh [OPTIONS] [RECIPE]

Cluster discovery:
  --discover                   Auto-detect cluster nodes and save to .env
  --show-env                   Show current .env configuration

Recipe overrides:
  --port PORT                  Override port
  --host HOST                  Override host
  --tensor-parallel, --tp N    Override tensor parallelism
  --gpu-memory-utilization N   Override GPU memory utilization (--gpu-mem)
  --max-model-len N            Override max model length

Setup options:
  --setup                      Full setup: build + download + run
  --build-only                 Only build/copy container, don't run
  --download-only              Only download/copy model, don't run
  --force-build                Rebuild even if container exists
  --force-download             Re-download even if model exists

Launch options:
  --solo                       Run in solo mode (single node, no Ray)
  -n, --nodes IPS              Comma-separated node IPs (first = head)
  -d, --daemon                 Run in daemon mode
  -t, --container IMAGE        Override container from recipe
  --nccl-debug LEVEL           NCCL debug level (VERSION, WARN, INFO, TRACE)

Other:
  --dry-run                    Show what would be executed
  --list, -l                   List available recipes
```
## Creating a Recipe
1. Create a new `.yaml` file in `recipes/`
2. Specify required fields: `name`, `container`, `command`
3. Add `build_args` if your model needs special build options
4. Add `mods` if your model needs patches
5. Set `cluster_only: true` if model is too large for single node
6. Set sensible `defaults`
7. Add `env` variables if needed
Example:
```yaml
name: My Model
description: My custom model setup
container: vllm-node-tf5
build_args:
  - --pre-tf
mods:
  - mods/my-fix
defaults:
  port: 8000
  host: 0.0.0.0
  tensor_parallel: 1
  gpu_memory_utilization: 0.85
command: |
  vllm serve org/my-model \
    --port {port} \
    --host {host} \
    -tp {tensor_parallel} \
    --gpu-memory-utilization {gpu_memory_utilization}
```
## Architecture
```
┌─────────────────────────────────────────────────────────┐
│              run-recipe.sh / run-recipe.py              │
│  - Parses YAML recipe                                   │
│  - Auto-discovers cluster nodes (--discover)            │
│  - Loads nodes from .env                                │
│  - Handles --setup (build + download + run)             │
│  - Generates launch script from template                │
│  - Applies CLI overrides                                │
└──────────┬────────────────────────┬─────────────────────┘
           │ calls (for build)      │ calls (for download)
           ▼                        ▼
┌──────────────────────┐  ┌───────────────────────────────┐
│  build-and-copy.sh   │  │  hf-download.sh               │
│  - Docker build      │  │  - HuggingFace model download │
│  - Copy to workers   │  │  - Rsync to workers           │
└──────────────────────┘  └───────────────────────────────┘
           │ then calls (for run)
           ▼
┌─────────────────────────────────────────────────────────┐
│                    launch-cluster.sh                    │
│  - Cluster orchestration                                │
│  - Container lifecycle                                  │
│  - Mod application                                      │
│  - Launch script execution                              │
└─────────────────────────────────────────────────────────┘
```
This separation follows the Unix philosophy: `run-recipe.sh` provides convenience, while the underlying scripts remain focused on their specific tasks.

recipes/glm-4.7-flash-awq.yaml
@@ -0,0 +1,64 @@
# Recipe: GLM-4.7-Flash-AWQ-4bit
# cyankiwi's AWQ quantized GLM-4.7-Flash model
# Requires a patch for inference speed optimization
#
# NOTE: The vLLM implementation is suboptimal even with the patch.
# Model performance is still significantly slower than it should be
# for a model with this number of active parameters. Running in a
# cluster improves prompt processing performance, but not token
# generation. Expect ~40 t/s generation in both single-node and cluster.
recipe_version: "1"
name: GLM-4.7-Flash-AWQ
description: vLLM serving cyankiwi/GLM-4.7-Flash-AWQ-4bit with speed optimization patch
# HuggingFace model to download
model: cyankiwi/GLM-4.7-Flash-AWQ-4bit
# This model can run on single node (solo) or cluster
cluster_only: false
# Container image to use
container: vllm-node-tf5
# Build arguments for build-and-copy.sh
# tf5 = transformers 5.0 (required for GLM-4.7)
build_args:
  - --pre-tf
# Mods to apply before running (paths relative to repo root)
# This mod prevents severe inference speed degradation
mods:
  - mods/fix-glm-4.7-flash-AWQ
# Default settings (can be overridden via CLI)
defaults:
  port: 8888
  host: 0.0.0.0
  tensor_parallel: 1
  gpu_memory_utilization: 0.7
  max_model_len: 202752
  max_num_batched_tokens: 4096
  max_num_seqs: 64
  served_model_name: glm-4.7-flash
# Environment variables to set in the container
env:
  # Add any required env vars here
# The vLLM serve command template
# Use {var_name} for substitution from defaults/overrides
# In cluster mode, --distributed-executor-backend ray and -tp 2 are added
command: |
  vllm serve cyankiwi/GLM-4.7-Flash-AWQ-4bit \
    --tool-call-parser glm47 \
    --reasoning-parser glm45 \
    --enable-auto-tool-choice \
    --served-model-name {served_model_name} \
    --max-model-len {max_model_len} \
    --max-num-batched-tokens {max_num_batched_tokens} \
    --max-num-seqs {max_num_seqs} \
    --gpu-memory-utilization {gpu_memory_utilization} \
    -tp {tensor_parallel} \
    --host {host} \
    --port {port}

recipes/minimax-m2-awq.yaml
@@ -0,0 +1,40 @@
# Recipe: MiniMax-M2-AWQ
# MiniMax M2 model with AWQ quantization
recipe_version: "1"
name: MiniMax-M2-AWQ
description: vLLM serving MiniMax-M2-AWQ with Ray distributed backend
# HuggingFace model to download (optional, for --download-model)
model: QuantTrio/MiniMax-M2-AWQ
# Container image to use
container: vllm-node
# No mods required
mods: []
# Default settings (can be overridden via CLI)
defaults:
  port: 8000
  host: 0.0.0.0
  tensor_parallel: 2
  gpu_memory_utilization: 0.7
  max_model_len: 128000
# Environment variables
env: {}
# The vLLM serve command template
command: |
  vllm serve QuantTrio/MiniMax-M2-AWQ \
    --port {port} \
    --host {host} \
    --gpu-memory-utilization {gpu_memory_utilization} \
    -tp {tensor_parallel} \
    --distributed-executor-backend ray \
    --max-model-len {max_model_len} \
    --load-format fastsafetensors \
    --enable-auto-tool-choice \
    --tool-call-parser minimax_m2 \
    --reasoning-parser minimax_m2_append_think

recipes/openai-gpt-oss-120b.yaml
@@ -0,0 +1,52 @@
# Recipe: OpenAI GPT-OSS 120B
# OpenAI's open source 120B MoE model with MXFP4 quantization support
recipe_version: "1"
name: OpenAI GPT-OSS 120B
description: vLLM serving openai/gpt-oss-120b with MXFP4 quantization and FlashInfer
# HuggingFace model to download (optional, for --download-model)
model: openai/gpt-oss-120b
# Container image to use
container: vllm-node-mxfp4
# Build arguments for build-and-copy.sh
build_args:
  - --exp-mxfp4
# No mods required for this model
mods: []
# Default settings (can be overridden via CLI)
defaults:
  port: 8888
  host: 0.0.0.0
  tensor_parallel: 2
  gpu_memory_utilization: 0.70
  max_num_batched_tokens: 8192
# Environment variables to set in the container
env:
  VLLM_USE_FLASHINFER_MOE_MXFP4_MXFP8: "1"
# The vLLM serve command template
# Uses MXFP4 quantization for memory efficiency
command: |
  vllm serve openai/gpt-oss-120b \
    --tool-call-parser openai \
    --reasoning-parser openai_gptoss \
    --enable-auto-tool-choice \
    --tensor-parallel-size {tensor_parallel} \
    --distributed-executor-backend ray \
    --gpu-memory-utilization {gpu_memory_utilization} \
    --enable-prefix-caching \
    --load-format fastsafetensors \
    --quantization mxfp4 \
    --mxfp4-backend CUTLASS \
    --mxfp4-layers moe,qkv,o,lm_head \
    --attention-backend FLASHINFER \
    --kv-cache-dtype fp8 \
    --max-num-batched-tokens {max_num_batched_tokens} \
    --host {host} \
    --port {port}