Nebius AI Cloud “Aether 3.5”: Frictionless compute for real-world AI
This release introduces new serverless capabilities, the NVIDIA RTX PRO™ 6000 Blackwell Server Edition GPU for applied AI use cases, improved cluster configuration tools, streamlined data operations, and platform-level enhancements that reduce routine complexity while preserving full control.
Introducing DevPods, Jobs and Endpoints: Easy compute access with serverless AI
The serverless services at Nebius are a natural extension of how an AI infrastructure cloud evolves, building on a mature underlying platform. As the platform develops, it becomes possible to expose compute in more flexible, elastic forms that better match how AI workloads are actually consumed.
Introducing NVIDIA RTX PRO 6000 Blackwell Server Edition on Nebius
NVIDIA RTX PRO 6000 Blackwell opens new opportunities for cost-efficient inference and increased performance for visual computing and scientific simulations.
Nebius and PyTorch partner to accelerate frontier MoE training on NVIDIA Blackwell
In collaboration with PyTorch, Nebius helped demonstrate up to 41% faster pre-training of DeepSeek-V3 models on NVIDIA Blackwell GPUs.
Incident post-mortem analysis: us-central1 service disruption on March 10, 2026
A detailed analysis of the incident on March 10, 2026 that led to service outages in the us-central1 region.
Delivering a validated AI Factory stack for agent workloads on Nebius AI Cloud with DataRobot
At NVIDIA GTC 2026, Nebius and DataRobot, with NVIDIA, introduced a validated AI Factory stack for production-grade agent workloads. In this post, we outline how the DataRobot Agent Workforce Platform runs on Nebius AI Cloud to support sustained inference, governance and cost control for AI agents deployed in live business workflows.
Incident post-mortem analysis: eu-north-1 service disruption on February 26, 2026
A detailed analysis of the incident on February 26, 2026 that led to service outages in the eu-north-1 region.
From fragmented data to production-grade agents: Nebius, Nexla and Tripadvisor at NVIDIA GTC
Nexla and Nebius are partnering to deliver a production-ready data and agent stack that connects governed enterprise data with infrastructure built for sustained inference. In this post, we outline how this architecture enables multi-agent systems to move from fragmented data pipelines to reliable production workflows, and show it in action through a live “Inspiration to Trip” demo presented with Tripadvisor at NVIDIA GTC.
Nebius and Eigen AI partner to accelerate frontier open-source AI inference
Nebius and Eigen AI are partnering to bring optimized frontier open-source models to Nebius Token Factory. As part of the collaboration, optimized implementations of models such as DeepSeek, GLM, GPT-OSS, Kimi, Llama, MiniMax and Qwen will be published on the platform, giving developers direct access to high-performance inference through production-ready endpoints and APIs.
Elevating the craft: Introducing the Inference Frontier Program
Today we’re introducing the Inference Frontier Program, a new builder-to-builder initiative dedicated to production inference systems. The program surfaces real architectures, optimizations and engineering tradeoffs from teams running large-scale inference in production.
What is AI Cloud? Key features, use cases & how to choose
Modern ML and LLM workloads require environments equipped with specialized hardware, high-performance networking and integrated MLOps tools. In this article, we’ll explore how AI-focused clouds differ from general-purpose platforms — and what criteria define the right provider for building scalable AI systems.
NVIDIA Nemotron 3 Super now available on Nebius Token Factory
NVIDIA Nemotron 3 Super is now available on Nebius Token Factory, bringing a 120B hybrid MoE model optimized for multi-agent systems and complex reasoning workflows to production deployments. With long-context inference and OpenAI-compatible APIs, teams can run Nemotron 3 Super through dedicated GPU endpoints and autoscaling infrastructure without managing their own serving stack.
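Because the endpoints are OpenAI-compatible, calling the model follows the familiar chat-completions request shape. The sketch below builds such a request payload; the base URL and model identifier are illustrative assumptions, not confirmed values from the platform.

```python
import json

# Illustrative placeholders -- substitute the real Token Factory endpoint
# and model identifier from your account.
BASE_URL = "https://example-token-factory.invalid/v1"
MODEL = "nvidia/nemotron-3-super"


def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build an OpenAI-compatible chat-completions payload."""
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
    }


# The payload serializes to the JSON body an OpenAI-compatible
# /chat/completions endpoint expects.
payload = build_chat_request("Outline a three-step deployment plan.")
body = json.dumps(payload)
```

Any OpenAI-compatible client SDK can send this same payload, so existing integrations typically only need the base URL and model name changed.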
OpenClaw security: architecture and hardening guide
Self-hosted AI agents offer control and flexibility, but they also introduce real security risks. Incidents involving malicious ClawHub skills, exposed default ports and prompt-injection attacks show that running OpenClaw is not just an installation task, but an infrastructure decision. This guide explains OpenClaw’s architecture and maps real threats to concrete hardening controls, so teams can deploy it safely in production.