Built for AI workloads
Swiss cloud with L40S GPUs
Run large language models, train deep learning systems and accelerate inference.
With access to NVIDIA L40S GPUs – directly from our datacenters in Switzerland.
Performance that scales
Optimized for multi-GPU scalability – with up to four GPUs per instance. GPU servers are based on our “Dedicated CPU Cores” offering, so you get the full performance of the selected number of CPU cores and can scale memory or CPU at any time. Check out the Pricing page for all available configurations.
Dedicated L40S GPUs
- 48 GB VRAM per GPU
- Ada Lovelace architecture
- Full support for CUDA, TensorRT, PyTorch and TensorFlow
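Once an instance is running, a quick sanity check confirms the GPUs are visible to your framework. A minimal sketch using PyTorch (assuming it is installed; the `list_gpus` helper is illustrative, not part of any product API):

```python
import importlib.util

def list_gpus():
    """Return (name, vram_gib) tuples for visible CUDA GPUs,
    or an empty list if PyTorch or a GPU is unavailable."""
    # Guard the import so the check also runs on machines without PyTorch.
    if importlib.util.find_spec("torch") is None:
        return []
    import torch
    if not torch.cuda.is_available():
        return []
    gpus = []
    for i in range(torch.cuda.device_count()):
        props = torch.cuda.get_device_properties(i)
        gpus.append((props.name, round(props.total_memory / 2**30)))
    return gpus

for name, vram in list_gpus():
    print(f"{name}: {vram} GiB")
```

On an instance with all four GPUs attached, this should report four L40S devices with 48 GiB of VRAM each.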
