Kimi-K2.5 via KTransformers+SGLang on a hybrid GPU/CPU memory-offload configuration: 4x RTX Pro 6000 Blackwell + 640GB RAM.

The original baseline ran on 8x GPUs, using a synthetic coding-agent-style workload targeting 2k-45k input tokens, 80-3k max output tokens, and up to 10 concurrent requests. Reconfigured for the new hybrid setup.

Best result I managed to get:
- 23.03 output tok/s @ 10 concurrent requests
- Mean TTFT: ~60s
- Median TTFT: ~64s

Baseline results:
- 74.39 output tok/s @ 10 concurrent requests
- Mean TTFT: ~9s
- Median TTFT: ~3.7s
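For context on how numbers like these get measured, a minimal sketch of this kind of load generator, assuming the OpenAI-compatible streaming endpoint SGLang exposes on port 8000. The prompt sizing, token counting, and timing details are my own placeholders, not the author's actual harness:

# Minimal concurrent benchmark sketch (assumed harness, not the author's).
# Measures time-to-first-token (TTFT) and aggregate output tok/s against
# the OpenAI-compatible streaming endpoint SGLang serves.
import time
import statistics
import concurrent.futures
import requests

URL = "http://localhost:8000/v1/chat/completions"  # assumed host/port

def one_request(prompt: str, max_tokens: int):
    t0 = time.perf_counter()
    ttft, n_chunks = None, 0
    with requests.post(URL, stream=True, timeout=3600, json={
        "model": "kimi_k2",  # matches --served-model-name
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "stream": True,
    }) as resp:
        for line in resp.iter_lines():
            if not line or not line.startswith(b"data: "):
                continue
            if line[len(b"data: "):] == b"[DONE]":
                break
            if ttft is None:
                ttft = time.perf_counter() - t0
            n_chunks += 1  # roughly one output token per SSE chunk
    return ttft, n_chunks, time.perf_counter() - t0

# 10 concurrent requests, as in the runs above; prompt sizes are placeholders
# standing in for the 2k-45k input / 80-3k output token distribution.
jobs = [("x " * 2000, 1000)] * 10
with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
    results = list(pool.map(lambda a: one_request(*a), jobs))

ttfts = [r[0] for r in results]
total_tokens = sum(r[1] for r in results)
wall = max(r[2] for r in results)  # all requests start ~simultaneously
print(f"mean TTFT {statistics.mean(ttfts):.1f}s, "
      f"median TTFT {statistics.median(ttfts):.1f}s, "
      f"{total_tokens / wall:.2f} output tok/s")

Counting one token per SSE chunk is only approximate; a real harness would read the usage stats the server returns.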
Yannick Nick · Feb 26, 2026
Initial tests for Kimi-K2.5 via KTransformers+SGLang, on a hybrid 4x RTX Pro 6000 Blackwell + 640GB/1.5TB CPU memory offload setup. Compute provided by Lium pods:
- 19.97 output tok/s @ 10 concurrent requests
- Mean TTFT: ~120s
- Median TTFT: ~102s

Need to play with the KT flags to further optimize this setup, which depends heavily on the system's overall CPU core count and available RAM. The GPU <-> PCIe <-> RAM interconnect is the most obvious bottleneck (a rough back-of-envelope sketch follows below).

Key flags:
- Experts per MoE layer kept on GPU: --kt-num-gpu-experts=128
- CPU cores dedicated to MoE inference: --kt-cpuinfer=104
- CPU expert work overlapped with GPU work: --kt-max-deferred-experts-per-token=2
- Max tokens per prefill chunk: --chunked-prefill-size=32658
- CUDA graph capture disabled: --disable-cuda-graph
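To make the memory-path bottleneck concrete, a back-of-envelope sketch. Every number here is an assumption: the MoE dimensions follow the published Kimi K2 config (K2.5 may differ), and the DRAM bandwidth figure is a placeholder:

# Back-of-envelope for where decode time goes (all numbers are assumptions).
# Expert dims follow the published Kimi K2 config (DeepSeek-V3-style MoE);
# K2.5 may differ, so treat this as an illustration, not a measurement.
HIDDEN = 7168            # hidden size (assumed, K2 config)
MOE_INTER = 2048         # per-expert FFN intermediate size (assumed)
N_EXPERTS = 384          # routed experts per MoE layer (assumed)
TOPK = 8                 # routed experts active per token (assumed)
MOE_LAYERS = 60          # MoE layers (assumed; the first K2 layer is dense)
GPU_EXPERTS = 128        # --kt-num-gpu-experts
BYTES_PER_PARAM = 0.5    # RAWINT4 ~ 4 bits/weight, ignoring scales

expert_bytes = 3 * HIDDEN * MOE_INTER * BYTES_PER_PARAM  # gate+up+down proj
gpu_frac = GPU_EXPERTS / N_EXPERTS
# Expected CPU-side expert activations per token under uniform routing;
# ignores batch-level reuse of expert weights across concurrent tokens.
cpu_experts_per_tok = TOPK * (1 - gpu_frac) * MOE_LAYERS
cpu_bytes_per_tok = cpu_experts_per_tok * expert_bytes

print(f"per-expert weights: {expert_bytes / 2**20:.1f} MiB")
print(f"GPU-resident experts/layer: {GPU_EXPERTS}/{N_EXPERTS} ({gpu_frac:.0%})")
print(f"expected CPU expert activations per token: {cpu_experts_per_tok:.0f}")
print(f"CPU weight traffic per token: {cpu_bytes_per_tok / 2**30:.2f} GiB")

# At ~200 GB/s of usable DRAM bandwidth (an assumed figure), that traffic
# alone caps aggregate decode at roughly:
DRAM_BW = 200 * 2**30
print(f"bandwidth-bound ceiling: {DRAM_BW / cpu_bytes_per_tok:.0f} tok/s")

Under these assumptions the ceiling comes out around ~30 tok/s aggregate, the same ballpark as the measured ~20-23 tok/s, which is at least consistent with the memory path being the limiter.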
Full command:

export CUDA_VISIBLE_DEVICES=0,1,2,3
export OMP_NUM_THREADS=1
export MKL_NUM_THREADS=1
export OPENBLAS_NUM_THREADS=1
export NUMEXPR_NUM_THREADS=1
export VECLIB_MAXIMUM_THREADS=1
export PYTHONUNBUFFERED=1

exec python -m sglang.launch_server \
  --model-path /workspace/models/huggingface/models--moonshotai--Kimi-K2.5/snapshots/54383e83fa343a1331754112fb9e3410c55efa2f \
  --kt-weight-path /workspace/models/huggingface/models--moonshotai--Kimi-K2.5/snapshots/54383e83fa343a1331754112fb9e3410c55efa2f \
  --kt-threadpool-count 1 \
  --kt-method RAWINT4 \
  --trust-remote-code \
  --served-model-name kimi_k2 \
  --tool-call-parser kimi_k2 \
  --reasoning-parser kimi_k2 \
  --disable-radix-cache \
  --disable-chunked-prefix-cache \
  --tensor-parallel-size 4 \
  --enable-p2p-check \
  --disable-shared-experts-fusion \
  --disable-cuda-graph \
  --host 0.0.0.0 \
  --port 8000 \
  --kt-cpuinfer 32 \
  --kt-num-gpu-experts 128 \
  --kt-max-deferred-experts-per-token 2 \
  --kt-gpu-prefill-token-threshold 1024 \
  --kt-expert-placement-strategy uniform \
  --mem-fraction-static 0.92 \
  --enable-mixed-chunk \
  --chunked-prefill-size 32658 \
  --max-total-tokens 200000 \
  --attention-backend flashinfer
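Once the server is up, a quick sanity check against the OpenAI-compatible endpoint (host, port, and served model name per the flags above; the prompt is just an example):

# Quick smoke test against the launched server (assumes the defaults above).
import requests

resp = requests.post(
    "http://localhost:8000/v1/chat/completions",
    json={
        "model": "kimi_k2",  # must match --served-model-name
        "messages": [{"role": "user", "content": "Write a haiku about MoE offload."}],
        "max_tokens": 64,
    },
    timeout=600,
)
resp.raise_for_status()
print(resp.json()["choices"][0]["message"]["content"])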