Stars
`afm` CLI: macOS server and single-prompt modes that expose Apple's Foundation and MLX models and other APIs running on your Mac through a single aggregated OpenAI-compatible API endpoint. …
Fast, local-first voice app for Mac. Dictation and transcription powered by Parakeet TDT on the Neural Engine.
Running a big model on a small laptop
🤗 The largest hub of ready-to-use datasets for AI models with fast, easy-to-use and efficient data manipulation tools
LLM inference server with continuous batching & SSD caching for Apple Silicon — managed from the macOS menu bar
The official TypeScript SDK for Model Context Protocol servers and clients
Developer-friendly OSS embedded retrieval library for multimodal AI. Search More; Manage Less.
A step-by-step guide to build your own AI agent.
A lightweight CLI for running single-purpose AI agents. Define focused agents in TOML, trigger them from anywhere: pipes, git hooks, cron, or the terminal.
llama.cpp fork with additional SOTA quants and improved performance
CLI proxy for coding agents that cuts noisy terminal output while preserving command behavior
⏩ Source-controlled AI checks, enforceable in CI. Powered by the open-source Continue CLI
An MCP server plus a CLI tool that indexes local code into a graph database to provide context to AI assistants.
The official source for MDN Web Docs content. Home to over 14,000 pages of documentation about HTML, CSS, JS, HTTP, Web APIs, and more.
A high-throughput and memory-efficient inference and serving engine for LLMs
Community maintained hardware plugin for vLLM on Apple Silicon
MLX-Embeddings is a package for running vision and language embedding models locally on your Mac using MLX.
OpenAI and Anthropic compatible server for Apple Silicon. Run LLMs and vision-language models (Llama, Qwen-VL, LLaVA) with continuous batching, MCP tool calling, and multimodal support. Native MLX …
High-performance code intelligence MCP server. Indexes codebases into a persistent knowledge graph — average repo in milliseconds. 66 languages, sub-ms queries, 99% fewer tokens. Single static bina…
Reliable model swapping for any local OpenAI/Anthropic-compatible server (llama.cpp, vLLM, etc.)
Pure TypeScript, cross-platform module for extracting text, images, and tabular data from PDFs. Run 🤗 directly in your browser or in Node.js
20+ high-performance LLMs with recipes to pretrain, finetune and deploy at scale.
Hundreds of models & providers. One command to find what runs on your hardware.
Color schemes for default macOS Terminal.app
See what your AI sees. Framework-agnostic LLM context window visualizer.
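Several of the servers above advertise an OpenAI-compatible API endpoint, which means any OpenAI-style client can talk to them. A minimal sketch of building such a request with only the standard library; the base URL, port, and model name are placeholders for whatever your local server exposes:

```python
import json
import urllib.request

# Hypothetical local endpoint; adjust host, port, and model for your server.
BASE_URL = "http://localhost:8080/v1"
MODEL = "local-model"

def build_chat_request(prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style POST to /chat/completions without sending it."""
    payload = {
        "model": MODEL,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Build (but don't send) a request; send it later with urllib.request.urlopen(req).
req = build_chat_request("Hello from a local OpenAI-compatible server")
```

No network traffic happens until `urllib.request.urlopen(req)` is called, so the builder can be tested offline against any of the local servers listed here.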