Hermes Agent + Ollama + Telegram – Local Easy Setup Guide

Description:

Fahd Mirza walks through a complete local installation of Hermes, the new agent released by NovaSky Research (the lab behind several prominent open-source models), configured to run on Ollama-hosted models and accessible via a personal Telegram bot — no cloud API required after setup.

The walkthrough runs on an Ubuntu system with an NVIDIA RTX 6000 (48 GB VRAM), serving GLM 4.7 Flash via Ollama on port 11434. Hermes installs via a single bash script that clones the repo, pulls Python 3.11, installs dependencies (including a full browser engine), and syncs 94 bundled skills covering everything from GitHub workflows to Stable Diffusion. The interactive setup wizard handles model-provider selection (self-hosted or OpenAI-compatible endpoint), Telegram bot token and user ID configuration, systemd service registration for background persistence across reboots, and optional web search and Discord integrations.
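Before pointing the setup wizard at a self-hosted provider, it helps to confirm that Ollama is actually serving on port 11434 and that the intended model has been pulled. A minimal sketch using Ollama's real `/api/tags` endpoint; the host and port match the video's setup, but this helper is illustrative, not part of the Hermes installer:

```python
# Sketch: check which models a local Ollama instance has pulled.
# Uses Ollama's /api/tags endpoint (returns {"models": [{"name": ...}, ...]}).
import json
import urllib.error
import urllib.request

def list_ollama_models(host="http://localhost:11434", timeout=3):
    """Return locally pulled model names, or None if Ollama is unreachable."""
    try:
        with urllib.request.urlopen(f"{host}/api/tags", timeout=timeout) as resp:
            data = json.load(resp)
        return [m["name"] for m in data.get("models", [])]
    except (urllib.error.URLError, OSError):
        return None

models = list_ollama_models()
if models is None:
    print("Ollama is not reachable on port 11434 -- start it before the wizard")
else:
    print("Available models:", models)
```

If the model you plan to use (e.g. a GLM variant) is missing from the list, pull it with `ollama pull <model>` before running the wizard.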

Key differentiating features of Hermes include a closed learning loop that creates and refines skills from session experience, project-level memory that persists across restarts, and cross-platform presence (the agent runs on a server while the user interacts through Telegram from anywhere). Mirza notes one rough edge: the CLI model selection doesn’t always carry over from the onboarding wizard and may need to be re-specified at first run. Useful for developers wanting a fully local, self-improving agent without ongoing API costs.
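Since all interaction happens through a personal Telegram bot, the transport underneath is just the Telegram Bot API over HTTPS. A minimal sketch of the request shape, independent of Hermes itself; the token and chat ID below are placeholders, not values from the video:

```python
# Sketch of the Telegram Bot API request shape behind a bot channel.
# Token and chat_id are placeholders obtained from @BotFather / the user.
import json
import urllib.parse
import urllib.request

def telegram_api_url(token: str, method: str) -> str:
    """Build a Bot API endpoint URL, e.g. https://api.telegram.org/bot<token>/sendMessage."""
    return f"https://api.telegram.org/bot{token}/{method}"

def send_message(token: str, chat_id: int, text: str):
    """POST a sendMessage call and return the decoded JSON response."""
    payload = urllib.parse.urlencode({"chat_id": chat_id, "text": text}).encode()
    with urllib.request.urlopen(telegram_api_url(token, "sendMessage"), data=payload) as resp:
        return json.load(resp)

# Example (requires a real bot token and a chat the bot can reach):
# send_message("123456:ABC-DEF", 987654321, "Hermes is online")
```

Restricting the bot to a single configured user ID, as the wizard does, is what keeps the server-side agent private even though the bot endpoint is publicly reachable.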


📺 Source: Fahd Mirza · Published March 22, 2026
🏷️ Format: Tutorial Demo