
Blog

Democratizing AI Compute Series

Go behind the scenes of the AI industry with Chris Lattner

Latest

News · Product

Modular 26.2: State-of-the-Art Image Generation and Upgraded AI Coding with Mojo

Today’s 26.2 release expands the Modular Platform’s modality support to include image generation and image editing workflows, extending our existing support for text and audio generation. The 26.2 release supports Black Forest Labs' FLUX.2 model variants at more than a 4x speedup over the state of the art.

March 19, 2026 · Modular Team

News · Community

Modular at NVIDIA GTC 2026: MAX on Blackwell, Mojo Kernel Porting, and DeepSeek V3 on B200

Each spring, San Jose fills up with people who have strong opinions about GPUs, and we're happily among them. Find us this week at NVIDIA GTC, Booth #3004, where we’ll be running demos all week on Blackwell.

March 16, 2026 · Modular Team

News · Engineering

Structured Mojo Kernels Part 2 - The Three Pillars

This post explains the components of Structured Mojo Kernels: TileIO, TilePipeline, and TileOp. Each component forms a node in a kernel execution pipeline, and the links between them create a logical separation of concerns that makes kernels easier to extend and update. That organization matters because GPU kernels don't stay static. By abstracting hardware-optimized implementations into patterns, the same kernel structure can adapt across NVIDIA and AMD hardware generations with minimal rewrite.

March 11, 2026 · Fabio Riccardi, Modular Kernel Team
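The separation of concerns the post describes can be roughed out in plain Python. The classes below are hypothetical stand-ins named after the three pillars, not Mojo's actual APIs: an I/O node yields tiles, an op node computes on each tile, and the pipeline links the nodes so any one of them can be swapped without touching the others.

```python
# Illustrative sketch only: TileIO, TileOp, and TilePipeline here are
# hypothetical Python analogues of the pillars, not the Mojo kernel APIs.

class TileIO:
    """Loads fixed-size tiles from a flat buffer."""
    def __init__(self, data, tile_size):
        self.data = data
        self.tile_size = tile_size

    def tiles(self):
        for i in range(0, len(self.data), self.tile_size):
            yield self.data[i:i + self.tile_size]


class TileOp:
    """Per-tile compute; swapping this node retargets the 'kernel'."""
    def __init__(self, fn):
        self.fn = fn

    def apply(self, tile):
        return [self.fn(x) for x in tile]


class TilePipeline:
    """Links the I/O and compute nodes into one execution pipeline."""
    def __init__(self, io_node, op_node):
        self.io = io_node
        self.op = op_node

    def run(self):
        out = []
        for tile in self.io.tiles():
            out.extend(self.op.apply(tile))
        return out


# Double each element, processing 4 elements per tile:
pipe = TilePipeline(TileIO(list(range(8)), tile_size=4), TileOp(lambda x: 2 * x))
print(pipe.run())  # [0, 2, 4, 6, 8, 10, 12, 14]
```

Because only the node boundaries are fixed, a hardware-specific `TileOp` (or a different `TileIO` layout) can be substituted per GPU generation while the pipeline structure stays the same, which is the portability argument the post makes.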

News · Community

Modverse #53: Community Builds, Research Milestones, and a Growing Ecosystem

This edition captures everything happening across the Modular ecosystem, from developers building with MAX and Mojo🔥 to the broader impact Modular is having across AI infrastructure. Here's a look at what's been happening lately.

March 6, 2026 · Inaara Walji

News · Engineering

Structured Mojo Kernels Part 1 - Peak Performance, Half the Code

GPU programming has always demanded precision, but the cost of that precision keeps rising. A production matmul kernel written in C++ spans 3,000–5,000 lines of tightly coupled code where a misplaced barrier silently corrupts results. That complexity gatekeeps hardware that should be available to far more developers, and it's a direct product of how GPUs have evolved: with each architecture generation, more of the orchestration burden has shifted onto the programmer.

March 5, 2026 · Fabio Riccardi, Modular Kernel Team

News · Engineering

The Claude C Compiler: What It Reveals About the Future of Software

Compilers occupy a special place in computer science: they're a canonical course, and building one is a rite of passage. It forces you to confront how software actually works by examining languages, abstractions, hardware, and the boundary between human intent and machine execution.

February 18, 2026 · Chris Lattner

News · Company

BentoML Joins Modular

Today, BentoML is joining Modular.

February 10, 2026 · Chris Lattner, Chaoyu Yang, Tim Davis

News · Engineering

The Five Eras of KVCache

vLLM, SGLang, TensorRT-LLM, and MAX Serve are all built on top of increasingly sophisticated KV cache management. This blog explores the evolution and role of the KV cache in these inference engines.

February 5, 2026 · Brian Zhang
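The core mechanism that post traces can be shown with a minimal sketch (plain Python, not any engine's real implementation): during autoregressive decoding, each step appends one key/value pair to the cache, so attention over the full history never recomputes projections for past tokens.

```python
# Minimal KV-cache sketch: a hypothetical single-layer, single-head decoder.
# Illustrates why the cache grows by one entry per generated token.

class KVCache:
    def __init__(self):
        self.keys = []    # one key vector per token seen so far
        self.values = []  # one value vector per token seen so far

    def append(self, k, v):
        self.keys.append(k)
        self.values.append(v)

    def __len__(self):
        return len(self.keys)


def decode_step(cache, k_new, v_new):
    """Append this token's K/V, then attend over the whole cached history.

    Without the cache, every step would re-project K/V for all previous
    tokens -- O(n^2) redundant work over a generation of length n.
    """
    cache.append(k_new, v_new)
    # Real attention would read cache.keys / cache.values here; we just
    # report how much history this step can attend over.
    return len(cache)


cache = KVCache()
history = [decode_step(cache, [0.1 * t], [0.2 * t]) for t in range(4)]
print(history)  # [1, 2, 3, 4]: each step sees one more cached token
```

The successive "eras" the post covers are, roughly, increasingly clever answers to how those growing per-request caches are allocated, shared, and evicted across many concurrent requests.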

News · Product

Modular 26.1: A Big Step Towards More Programmable and Portable AI Infrastructure

Today we’re releasing Modular 26.1, a major step toward making high-performance AI computing easier to build, debug, and deploy across heterogeneous hardware. This release is focused squarely on developer velocity and programmability—helping advanced AI teams reduce time to market for their most important innovations.

January 29, 2026 · Modular Team

News · Community

How to Beat Unsloth's CUDA Kernel Using Mojo—With Zero GPU Experience

Traditional GPU programming has a steep learning curve. The performance gains are massive, but the path to get there (CUDA, PTX, memory hierarchies, occupancy tuning) stops most developers before they start. Mojo aims to flatten that curve: Python-like syntax, systems-level performance, no interop gymnastics, and the same performance gains.

January 14, 2026 · David Robertson


Build the future of AI with Modular

View Editions
  • Sign up today

    Sign up to our Cloud Platform today to get started easily.

    Sign Up
  • Browse open models

    Browse our model catalog, or deploy your own custom model.

    Browse models