InferX — Serverless GPU Inference Platform for Production Workloads

| Tenant | Namespace | Pod Name | State | Node Name | Req. GPU Count | Req. GPU vRAM (MB) | Type | Standby GPU (MB) | Standby Pageable (MB) | Standby Pinned (MB) | Allocated GPU vRAM (MB) | Allocated GPU Slots (GPU : Slot Count) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| public | ActionAnalytics | public/ActionAnalytics/CR-70B/54/249 | Standby | computeinstance-e00r2jrqynf83a8b4f | 4 | 71000 | Restore | Mem : 271616 | File : 5464 | File : 4 | 0 | N/A |
| public | Qwen | public/Qwen/IntelliAsk-Qwen3-32B-450-Merged/76/252 | Standby | computeinstance-e00r2jrqynf83a8b4f | 2 | 58000 | Restore | Mem : 115824 | File : 3648 | File : 4 | 0 | N/A |
| public | Trial | public/Trial/Huihui-Qwen3-Next-80B-A3B-Thinking-abliterated/255/276 | Snapshotting | computeinstance-e00r2jrqynf83a8b4f | 4 | 45000 | Snapshot | Mem : 0 | File : 0 | File : 0 | 45056 | 1 : 176, 2 : 176, 3 : 176, 4 : 176 |
| public | Trial | public/Trial/L3.3-70B-Loki-V2.0/259/274 | Standby | computeinstance-e00r2jrqynf83a8b4f | 2 | 71000 | Restore | Mem : 141700 | File : 3600 | File : 2 | 0 | N/A |
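The table distinguishes requested GPU vRAM from what is actually allocated: standby pods keep their state in host memory (the Standby GPU column) and hold no GPU vRAM, while the snapshotting pod holds its allocation on-device. A minimal sketch of reading those fields programmatically, using a hypothetical `PodStatus` record (not an InferX API) populated with the rows above:

```python
from dataclasses import dataclass, field

@dataclass
class PodStatus:
    """One row of the pod status table (hypothetical record layout)."""
    pod_name: str
    state: str                      # e.g. "Standby", "Snapshotting"
    req_gpu_count: int
    req_gpu_vram_mb: int
    allocated_gpu_vram_mb: int
    standby_gpu_mem_mb: int         # "Standby GPU (MB)" column
    gpu_slots: dict = field(default_factory=dict)  # GPU index -> slot count

# Rows transcribed from the table above.
pods = [
    PodStatus("public/ActionAnalytics/CR-70B/54/249",
              "Standby", 4, 71000, 0, 271616),
    PodStatus("public/Qwen/IntelliAsk-Qwen3-32B-450-Merged/76/252",
              "Standby", 2, 58000, 0, 115824),
    PodStatus("public/Trial/Huihui-Qwen3-Next-80B-A3B-Thinking-abliterated/255/276",
              "Snapshotting", 4, 45000, 45056, 0,
              gpu_slots={1: 176, 2: 176, 3: 176, 4: 176}),
    PodStatus("public/Trial/L3.3-70B-Loki-V2.0/259/274",
              "Standby", 2, 71000, 0, 141700),
]

# Standby pods park model state in host memory; only the
# snapshotting pod currently holds GPU vRAM.
standby_mem = sum(p.standby_gpu_mem_mb for p in pods if p.state == "Standby")
allocated = sum(p.allocated_gpu_vram_mb for p in pods)
print(standby_mem)  # 529140
print(allocated)    # 45056
```

This makes the trade-off in the table easy to quantify: roughly 529 GB of host memory is holding standby state so that only ~45 GB of GPU vRAM stays occupied.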