ai

Docker · Verified Publisher
San Francisco, CA, USA

Displaying 1 to 30 of 63 repositories

Type  | Description | Updated | Pulls | Stars
model | 397B MoE model with 17B activation for reasoning, coding, agents, and multimodal understanding | 4h | 10K+ | 4
model | 397B-parameter MoE multimodal LLM with 17B active params, 262K context, 201 languages | 4h | 9.9K | 1
model | Qwen3-Coder is Qwen’s new series of coding agent models. | 1m | 100K+ | 25
model | 744B MoE language model with 40B active params for reasoning, coding, and agentic tasks (FP8) | 1m | 9.0K | 3
model | Advanced coding agent model with 80B params (3B active MoE) for code generation and debugging | 2m | 10K+ | 1
model | Efficient 80B MoE coding model with 3B activated params, 256K context, and agentic capabilities | 2m | 10K+ | 1
model | Image generation model, uses a base latent diffusion model plus a refiner. | 2m | 10K+ | 5
model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 2m | 10K+ | 4
model | GLM-4.7-Flash is a top 30B-A3B MoE, balancing strong performance with efficient deployment. | 2m | 10K+ | 1
model | Devstral Small 2 is an FP8 instruct LLM for agentic SWE tasks, codebase tooling, and SWE-bench. | 3m | 10K+ | 4
model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 3m | 5.3K | 1
model | FunctionGemma is a 270M open model for fine-tuned, offline function-calling agents on small devices. | 3m | 8.0K | 2
model | Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256k context. | 4m | 50K+ | 1
model | Kimi K2 Thinking: open-source agent with deep reasoning, stable tool use, fast INT4, 256k context. | 4m | 10K+ | 1
model | DeepSeek-V3.2 boosts efficiency and reasoning with DSA, scalable RL, agentic data; IMO/IOI wins. | 4m | 10K+ | 10
model | Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 4m | 10K+ | 4
model | Ministral 3: compact vision-enabled model with near-24B performance, optimized for local edge use | 4m | 50K+ | 2
model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 4m | 10K+ | 3
model | Multilingual reranking model for text retrieval, scoring document relevance across 119 languages. | 4m | 10K+ | -
model | Snowflake’s Arctic-Embed v2.0 boosts multilingual retrieval and efficiency | 5m | 4.7K | -
model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 5m | 10K+ | 1
model | Qwen3 Embedding: multilingual models for advanced text/ranking tasks like retrieval & clustering. | 5m | 10K+ | -
model | OpenAI’s open-weight models designed for powerful reasoning, agentic tasks | 5m | 100K+ | 43
model | The most advanced Qwen model yet, with major gains in text, vision, video, and reasoning. | 5m | 100K+ | 9
model | Safety reasoning models for policy-based text classification and foundational safety tasks. | 5m | 10K+ | 2
model | Qwen3 is the latest Qwen LLM, built for top-tier coding, math, reasoning, and language tasks. | 5m | 500K+ | 147
model | Granite-4.0-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. | 5m | 9.3K | -
model | Granite-4.0-h-nano: lightweight instruct model trained via SFT, RL, and merging on diverse data. | 5m | 4.4K | 1
model | Google’s latest Gemma, small yet strong for chat and generation | 5m | 10K+ | 1
model | OpenAI’s open-weight models designed for powerful reasoning, agentic tasks | 5m | 10K+ | 1
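
Repositories in this `ai/` namespace are packaged for Docker Model Runner, which serves pulled models through an OpenAI-compatible API. A minimal sketch of querying one of these models from Python follows, assuming Model Runner is enabled with host TCP access on its default port 12434 and that a Qwen3 repository has already been pulled; the endpoint path and the `ai/qwen3` tag are assumptions based on Model Runner defaults, not details shown on this page.

    # Minimal sketch: chat with a model served locally by Docker Model Runner.
    # Assumptions (not from this page): TCP host access is enabled on the
    # default port 12434, and the `ai/qwen3` model has already been pulled.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:12434/engines/v1",  # assumed Model Runner endpoint
        api_key="not-required",  # local endpoint; no real key is checked
    )

    response = client.chat.completions.create(
        model="ai/qwen3",  # assumed repository tag from the ai/ namespace
        messages=[{"role": "user", "content": "In one sentence, what is a MoE model?"}],
    )
    print(response.choices[0].message.content)

Because the endpoint speaks the OpenAI wire format, swapping in any other model from this listing should only require changing the `model` string.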