Changed KV type to fp8 in qwen3-coder-next recipe and reduced default context size to 131072 to ensure it all fits in a single Spark.
@@ -164,7 +164,7 @@ Don't do it every time you rebuild, because it will slow down compilation times.
 
 For periodic maintenance, I recommend using a filter: `docker builder prune --filter until=72h`
 
-### 2026-02-14
+### 2026-02-17
 
 #### Non-Privileged Mode Support
 
@@ -181,6 +181,8 @@ Example usage:
 ./launch-cluster.sh --non-privileged --mem-limit-gb 120 --shm-size-gb 64 exec vllm serve ...
 ```
 
+May result in slightly reduced performance (within 2%) in exchange for better reliability and stability.
+
 ### 2026-02-12
 
 Added a mod for Qwen3-Coder-Next-FP8 that fixes:
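A minimal sketch of what the `--non-privileged` path above could translate to in terms of plain `docker run` resource flags. The wrapper function and flag mapping are assumptions (only `--mem-limit-gb`/`--shm-size-gb` and the idea of avoiding privileged mode come from the changelog); `--memory` and `--shm-size` are standard Docker flags:

```shell
#!/bin/sh
# Hypothetical sketch: how --non-privileged --mem-limit-gb/--shm-size-gb might
# map onto docker run flags, instead of starting the container with --privileged.
build_docker_args() {
  mem_gb=$1
  shm_gb=$2
  # Explicit caps replace the blanket access that --privileged would grant;
  # a large /dev/shm is still needed for vLLM/NCCL inter-process communication.
  echo "--memory ${mem_gb}g --shm-size ${shm_gb}g --gpus all"
}

build_docker_args 120 64   # → --memory 120g --shm-size 64g --gpus all
```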
@@ -24,7 +24,7 @@ defaults:
 host: 0.0.0.0
 tensor_parallel: 2
 gpu_memory_utilization: 0.7
-max_model_len: 262144
+max_model_len: 131072
 
 # Environment variables
 env: {}
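To see why the fp8 KV cache and the shorter context help things fit on a single Spark, a back-of-envelope sizing sketch. The layer/head/dimension values below are hypothetical placeholders, not the real Qwen3-Coder-Next config; only the 131072 sequence length and the 1-byte fp8 element size tie back to this commit:

```shell
#!/bin/sh
# Rough KV-cache sizing. K and V each hold layers * kv_heads * head_dim
# elements per token; fp8 stores 1 byte per element (vs 2 bytes for fp16).
layers=48 kv_heads=8 head_dim=128   # HYPOTHETICAL model dimensions
seq_len=131072                      # new max_model_len from this commit
fp8_bytes=1                         # --kv-cache-dtype fp8

kv_gib=$(( 2 * layers * kv_heads * head_dim * fp8_bytes * seq_len / 1024 / 1024 / 1024 ))
echo "${kv_gib} GiB of KV cache per full-length sequence"
```

With fp16 KV (`fp8_bytes=2`) the same context would need twice the memory, which is the trade this commit makes.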
@@ -37,6 +37,7 @@ command: |
 --gpu-memory-utilization {gpu_memory_utilization} \
 --host {host} \
 --port {port} \
+--kv-cache-dtype fp8 \
 --load-format fastsafetensors \
 --attention-backend flashinfer \
 --enable-prefix-caching \
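With the recipe defaults substituted into the template, the serve command would render to something like the sketch below. The model id and the port value are assumptions (the recipe chunk above does not show them); every flag after `serve` mirrors the recipe:

```shell
#!/bin/sh
# Render the recipe's templated vllm serve command with the defaults shown
# above. Model id and port are placeholders, not taken from the recipe.
host=0.0.0.0 port=8000 gpu_memory_utilization=0.7
cmd="vllm serve Qwen/Qwen3-Coder-Next-FP8 \
  --tensor-parallel-size 2 \
  --max-model-len 131072 \
  --gpu-memory-utilization ${gpu_memory_utilization} \
  --host ${host} --port ${port} \
  --kv-cache-dtype fp8 \
  --load-format fastsafetensors \
  --attention-backend flashinfer \
  --enable-prefix-caching"
echo "$cmd"
```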