Multi-platform AI agent with Slack and Discord integration, featuring intelligent routing between chat and research modes.
## Features

- Multi-platform support: Slack and Discord
- Intelligent routing: automatically routes requests to either chat or research mode
- RAG memory: uses LanceDB for conversation history
- Queue-based processing: serialized message handling to prevent race conditions
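The queue-based processing above can be sketched as a single promise chain: each incoming message is appended to the chain, so no two handlers ever run concurrently. This is an illustrative sketch (the `MessageQueue` name is hypothetical), not the project's actual implementation:

```javascript
// Sketch of serialized message handling: every task is chained onto one
// promise, so tasks run strictly in arrival order and cannot race.
class MessageQueue {
  constructor() {
    this.tail = Promise.resolve();
  }

  // Returns a promise for the task's result; the task itself only starts
  // once every previously enqueued task has settled.
  enqueue(task) {
    const run = this.tail.then(() => task());
    // Swallow errors on the chain so one failed task doesn't stall the queue.
    this.tail = run.catch(() => {});
    return run;
  }
}

// Example: two messages processed strictly in arrival order.
const queue = new MessageQueue();
const order = [];
queue.enqueue(async () => { order.push('first'); });
queue.enqueue(async () => { order.push('second'); });
```

Because the chain is per-process, this only serializes work inside one server instance; a design choice that fits the single `server.js` process described below.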
## Prerequisites

- Node.js (v16 or higher)
- Ollama running locally with these named models:
  - `research-intent`: the classifier that decides whether a message is CHAT or RESEARCH
  - `butler-chat`: the primary chat personality (used by `chat.js`)
  - `geek-research`: the deep-dive research model (used by `researcher.js`)
  - `memory-miner`: the model that extracts insights for LanceDB (used by `memory.js`)
  - `nomic-embed-text`: used for semantic vector embeddings
## Setup

1. Install dependencies:

   ```bash
   npm install
   ```

2. Configure environment variables: create a `.env` file with:

   ```bash
   # Slack
   SLACK_BOT_TOKEN=xoxb-your-token
   SLACK_SIGNING_SECRET=your-signing-secret
   SLACK_APP_TOKEN=xapp-your-app-token

   # Discord (optional)
   DISCORD_TOKEN=your-discord-token
   ```
3. Set up the local models:

   ```bash
   ollama create research-intent -f ./ollama/Modelfile.router
   ollama create butler-chat -f ./ollama/Modelfile.chat
   ollama create geek-research -f ./ollama/Modelfile.researcher
   ollama create memory-miner -f ./ollama/Modelfile.memory
   ```
4. Initialize the databases: the LanceDB databases are created automatically in `data/` on first run.
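Since the server cannot start meaningfully without the Slack credentials, a fail-fast check at startup is worth having. A minimal sketch (the `checkEnv` helper is hypothetical; it assumes `process.env` was already populated from `.env`, e.g. by `dotenv`):

```javascript
// Hypothetical startup check: report any required variables that are
// missing or empty before the bot tries to connect.
const REQUIRED = ['SLACK_BOT_TOKEN', 'SLACK_SIGNING_SECRET', 'SLACK_APP_TOKEN'];

function checkEnv(env) {
  // Return the names of required variables that are absent or blank.
  return REQUIRED.filter((name) => !env[name]);
}

const missing = checkEnv(process.env);
if (missing.length > 0) {
  console.error(`Missing required environment variables: ${missing.join(', ')}`);
  // A real server would bail out here, e.g. process.exit(1).
}
```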
## Running

```bash
# Start the server
node server.js
```

Or manage it with pm2:

```bash
pm2 start server.js

# Restart after making changes
pm2 restart server

# Stop the server
pm2 stop server

# View logs
pm2 logs server

# Check status
pm2 status
```

## Project Structure

```
ai-agent/
├── data/                    # LanceDB storage (gitignored)
├── modules/
│   ├── chat.js              # Chat handler (butler-chat)
│   ├── researcher.js        # Research handler (geek-research)
│   ├── memory.js            # LanceDB & insight extraction (memory-miner)
│   ├── discord.js           # Discord integration
│   └── slack.js             # Slack integration
├── ollama/
│   ├── Modelfile.chat       # Chat personality model
│   ├── Modelfile.geek       # Researcher model
│   ├── Modelfile.intent     # Intent classification model
│   └── Modelfile.knowledge  # Long-term memory model
├── server.js                # Main server file
└── .env                     # Environment variables (gitignored)
```
## Special Commands

The agent intercepts specific keywords at the start of a message to perform system tasks:

- `!sleep`: manually triggers the memory "mining" process, consolidating the current conversation buffer into the long-term LanceDB knowledge base.
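The interception step amounts to pulling a leading `!command` off the message before anything else runs. A sketch of that parsing (illustrative only; `parseCommand` is a hypothetical name, not the server's actual code):

```javascript
// Extract a leading "!command" directive from a message, if present.
// Returns the command name (e.g. "sleep"), or null for ordinary messages.
function parseCommand(text) {
  const match = text.trim().match(/^!(\w+)\b/);
  return match ? match[1] : null;
}
```

For example, `parseCommand('!sleep')` yields `"sleep"`, while a plain chat message yields `null` and falls through to intent classification.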
## Modules

- `chat.js`: handles casual conversation using the `butler-chat` model
- `researcher.js`: deep-dives into topics using the `geek-research` model
- `memory.js`: manages LanceDB storage and uses `memory-miner` for insight extraction
- `discord.js` / `slack.js`: platform-specific bot setup and message handling
## Message Flow

1. Messages arrive via Slack or Discord
2. The system checks whether the message starts with a `!` command directive (see above); if so, it attempts to execute that command
3. Otherwise, it classifies the intent (CHAT vs. RESEARCH)
4. The request is queued and processed serially
5. A response is generated by the appropriate handler
6. The conversation is saved to LanceDB for context
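The classification-and-dispatch step can be sketched with the classifier injected, so the shape of the flow is visible without a live Ollama call. In the real agent the classifier is the `research-intent` model; here `handleChat` and `handleResearch` are stand-ins for `chat.js` and `researcher.js`, and the stub classifier is purely illustrative:

```javascript
// Route a message to the chat or research handler based on classified intent.
async function route(message, { classify, handleChat, handleResearch }) {
  const intent = await classify(message); // expected: "CHAT" or "RESEARCH"
  return intent === 'RESEARCH'
    ? handleResearch(message)
    : handleChat(message); // anything unexpected falls back to chat
}

// Stub wiring: messages mentioning "deep dive" are treated as research.
const stub = {
  classify: async (m) => (m.includes('deep dive') ? 'RESEARCH' : 'CHAT'),
  handleChat: async (m) => `chat: ${m}`,
  handleResearch: async (m) => `research: ${m}`,
};
```

Defaulting unknown classifier output to chat keeps the bot responsive even when the intent model returns something malformed.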
## Development

Make changes, test locally, then:

```bash
git add .
git commit -m "Description of changes"
git push
```

## License

MIT