Changed the KV cache dtype to fp8 in the qwen3-coder-next recipe and reduced the default context size to 131072 to ensure it all fits on a single Spark.

This commit is contained in:
Eugene Rakhmatulin
2026-02-17 13:07:54 -08:00
parent 0249f1fdde
commit 5b2313dddb
2 changed files with 5 additions and 2 deletions
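A rough sense of why fp8 KV cache plus a halved context window shrinks memory so much: halving the bytes per element and halving the context each halve the KV cache, for a combined 4x reduction. The model dimensions below are illustrative assumptions, not Qwen3-Coder-Next's actual configuration.

```python
# KV-cache sizing sketch. Layer/head counts are ASSUMPTIONS for
# illustration only, not Qwen3-Coder-Next's real architecture.
num_layers = 48      # hypothetical
num_kv_heads = 4     # hypothetical (GQA)
head_dim = 128       # hypothetical

def kv_cache_gib(context_len: int, bytes_per_elem: int) -> float:
    # Two tensors (K and V) per layer; one head_dim vector per KV head per token.
    elems = 2 * num_layers * num_kv_heads * head_dim * context_len
    return elems * bytes_per_elem / 1024**3

bf16 = kv_cache_gib(262144, 2)  # old: 16-bit KV at 256k context
fp8 = kv_cache_gib(131072, 1)   # new: fp8 KV at 128k context
print(f"16-bit @ 256k: {bf16:.1f} GiB, fp8 @ 128k: {fp8:.1f} GiB")
```

With these assumed dimensions the per-sequence KV cache drops from 24 GiB to 6 GiB; the exact numbers differ for the real model, but the 4x ratio holds for any architecture.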


@@ -164,7 +164,7 @@ Don't do it every time you rebuild, because it will slow down compilation times.
 For periodic maintenance, I recommend using a filter: `docker builder prune --filter until=72h`
-### 2026-02-14
+### 2026-02-17
 #### Non-Privileged Mode Support
@@ -181,6 +181,8 @@ Example usage:
 ./launch-cluster.sh --non-privileged --mem-limit-gb 120 --shm-size-gb 64 exec vllm serve ...
 ```
+May result in slightly reduced performance (within 2%) in exchange for better reliability and stability.
 ### 2026-02-12
 Added a mod for Qwen3-Coder-Next-FP8 that fixes:


@@ -24,7 +24,7 @@ defaults:
 host: 0.0.0.0
 tensor_parallel: 2
 gpu_memory_utilization: 0.7
-max_model_len: 262144
+max_model_len: 131072
 # Environment variables
 env: {}
@@ -37,6 +37,7 @@ command: |
 --gpu-memory-utilization {gpu_memory_utilization} \
 --host {host} \
 --port {port} \
+--kv-cache-dtype fp8 \
 --load-format fastsafetensors \
 --attention-backend flashinfer \
 --enable-prefix-caching \
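The `{placeholder}` fields in the recipe's `command:` block are presumably filled in from the `defaults:` section before the server is launched. A minimal sketch of that substitution, assuming a simple `str.format`-style renderer (the template is abridged from the diff, and `port: 8000` is a hypothetical default not shown above):

```python
# Sketch of rendering the recipe's command template from its defaults.
# The rendering mechanism and the port value are ASSUMPTIONS.
defaults = {
    "host": "0.0.0.0",
    "port": 8000,  # hypothetical default, not shown in the diff
    "gpu_memory_utilization": 0.7,
    "max_model_len": 131072,
}

command_template = (
    "vllm serve "
    "--gpu-memory-utilization {gpu_memory_utilization} "
    "--host {host} --port {port} "
    "--max-model-len {max_model_len} "
    "--kv-cache-dtype fp8"
)

print(command_template.format(**defaults))
```

The rendered command now carries `--kv-cache-dtype fp8` unconditionally, so anyone overriding `max_model_len` back up should keep the fp8 KV savings in mind when sizing `gpu_memory_utilization`.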