Updated Nemotron to support dual sparks
README.md (+10)
@@ -149,6 +149,16 @@ Don't do it every time you rebuild, because it will slow down compilation times.
 For periodic maintenance, I recommend using a filter: `docker builder prune --filter until=72h`
 
+### 2026-03-12
+
+#### Nemotron-3-Super-120B NVFP4 Recipe
+
+Added a new recipe `nemotron-3-super-nvfp4` for running `nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4` with Marlin kernels. Supports both solo and cluster modes. Includes a custom reasoning parser (`super_v3_reasoning_parser.py`) fetched from the model repository. Supports both dual and single Spark configurations.
+
+```bash
+./run-recipe.sh nemotron-3-super-nvfp4
+```
+
 ### 2026-03-11
 
 #### Qwen3-Coder-Next INT4-AutoRound Recipe
 
@@ -7,8 +7,7 @@ description: vLLM serving Nemotron-3-Super-120B using Marlin kernels
 model: nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4
 container: vllm-node
 cluster_only: false
-# This model can only run on single node (solo)
-solo_only: true
+solo_only: false
 
 mods:
   - mods/nemotron-super
@@ -17,7 +16,7 @@ container: vllm-node
 defaults:
   port: 8000
   host: 0.0.0.0
-  tensor_parallel: 1
+  tensor_parallel: 2
   gpu_memory_utilization: 0.7
   max_model_len: 262144
   max_num_seqs: 10
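The recipe's `command:` references the `defaults:` values through `{name}` placeholders such as `{tensor_parallel}`. How `run-recipe.sh` performs that substitution is not shown in this diff, so the following is only a minimal sketch of the idea, assuming a simple string replacement:

```shell
# Assumption: run-recipe.sh fills {name} placeholders in command:
# with values from the recipe's defaults: block (the script itself
# is not part of this diff).
tensor_parallel=2   # value from defaults: after this change
template='--tensor-parallel-size {tensor_parallel}'
rendered="${template/\{tensor_parallel\}/$tensor_parallel}"
echo "$rendered"   # → --tensor-parallel-size 2
```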
@@ -41,4 +40,6 @@ command: |
   --load-format fastsafetensors \
   --tool-call-parser qwen3_coder \
   --reasoning-parser-plugin super_v3_reasoning_parser.py \
-  --reasoning-parser super_v3
+  --reasoning-parser super_v3 \
+  --tensor-parallel-size {tensor_parallel} \
+  --distributed-executor-backend ray
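Once the recipe is serving, the model should answer OpenAI-style chat requests on vLLM's standard `/v1/chat/completions` route; a hedged usage sketch, with the host and port taken from the recipe's `defaults:` (verify against your deployment):

```shell
# Build an OpenAI-style chat request body (model name from the recipe).
payload='{"model": "nvidia/NVIDIA-Nemotron-3-Super-120B-A12B-NVFP4",
  "messages": [{"role": "user", "content": "Hello"}]}'

# Validate the JSON locally before sending.
echo "$payload" | python3 -m json.tool > /dev/null && echo "payload ok"

# With the server up (port 8000 per the recipe defaults):
# curl -s http://localhost:8000/v1/chat/completions \
#   -H 'Content-Type: application/json' \
#   -d "$payload"
```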