CoPaw with Ollama + Telegram — Run Your Own AI Assistant Locally for Free


Description:

Alibaba’s Tongyi Lab has open-sourced CoPaw, a fully self-hosted personal AI assistant modeled after OpenClaw that runs entirely within a user’s own environment. Fahd Mirza walks through a complete installation on Ubuntu (on an NVIDIA RTX 6000 GPU), connects CoPaw to a locally running Ollama instance serving Qwen 3.5 9B, and integrates it with Telegram as a live messaging interface.
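Any client pointed at a local Ollama instance, CoPaw included, ultimately talks to Ollama's HTTP API on port 11434. As a minimal stdlib-only sketch of that connection (the model tag `qwen3.5:9b` is an assumption; substitute whatever `ollama list` reports on your machine):

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default generate endpoint

def build_request(prompt: str, model: str = "qwen3.5:9b") -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server.

    The model tag is illustrative; use the tag your own `ollama list` shows.
    """
    body = json.dumps({"model": model, "prompt": prompt, "stream": False}).encode()
    return urllib.request.Request(
        OLLAMA_URL, data=body, headers={"Content-Type": "application/json"}
    )

def ask(prompt: str) -> str:
    """Send the prompt and return the reply (requires Ollama running locally)."""
    with urllib.request.urlopen(build_request(prompt)) as resp:
        return json.loads(resp.read())["response"]
```

With Ollama running, `ask("hello")` returns the model's completion; nothing leaves the machine, which is the point of the self-hosted setup.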

CoPaw connects to a range of messaging platforms out of the box, including Telegram, Discord, DingTalk, and QQ, routing AI responses directly into those apps. Beyond conversational chat, its modular skills system supports scheduled autonomous tasks: morning news digests, email organization, YouTube channel monitoring, and self-querying workflows that push results to your phone on a timer.

Installation is a single pip command (`pip install gopo` in the video), with initialization defaults pointing to Ollama. Mirza also demonstrates the Telegram setup path: creating a bot through BotFather, retrieving the token, and configuring the channel inside CoPaw’s web UI at localhost:8088. He then verifies that messages sent in the browser appear in Telegram in real time.
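The timed workflows above reduce to a simple loop: compute the next run time, sleep until then, run the skill, and hand the result to the messaging channel. A hand-rolled sketch of that pattern (these function names are illustrative, not CoPaw's actual API):

```python
import datetime as dt
import time

def next_run(now: dt.datetime, hour: int, minute: int = 0) -> dt.datetime:
    """Next occurrence of hour:minute -- today if still ahead, else tomorrow."""
    target = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    if target <= now:
        target += dt.timedelta(days=1)
    return target

def run_daily(hour: int, task, deliver, *, clock=dt.datetime.now, sleep=time.sleep):
    """Run `task` every day at `hour`:00 and pass its result to `deliver`.

    `deliver` would be whatever pushes text into a channel (e.g. a Telegram
    send); `clock` and `sleep` are injectable so the loop is testable.
    """
    while True:
        pause = (next_run(clock(), hour) - clock()).total_seconds()
        sleep(max(pause, 0))
        deliver(task())
```

A morning news digest is then `run_daily(8, fetch_digest, send_to_telegram)` with those two callables supplied by the skill and the channel integration.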

For developers who want a privacy-preserving AI assistant without cloud API dependencies, CoPaw supports multiple backends beyond Ollama, including llama.cpp, OpenAI, Apple MLX, and Alibaba’s DashScope. The video is a practical entry point for setting up a self-hosted agent with scheduling capabilities and multi-platform messaging integration.
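Backend-agnostic assistants typically normalize such providers behind one chat interface keyed by a config value. A generic sketch of that pattern, assuming each provider exposes an OpenAI-compatible endpoint (the registry below is illustrative, not CoPaw's config schema; the local URLs are the defaults for Ollama and llama.cpp's `llama-server`):

```python
from typing import Dict

# Config value -> base URL for an OpenAI-compatible chat client (illustrative).
BACKENDS: Dict[str, str] = {
    "ollama": "http://localhost:11434/v1",    # Ollama's OpenAI-compatible endpoint
    "llama.cpp": "http://localhost:8080/v1",  # llama-server default port
    "openai": "https://api.openai.com/v1",
    "dashscope": "https://dashscope.aliyuncs.com/compatible-mode/v1",
}

def resolve_backend(name: str) -> str:
    """Map a backend name from config to the base URL its client should target."""
    try:
        return BACKENDS[name]
    except KeyError:
        raise ValueError(f"unknown backend {name!r}; options: {sorted(BACKENDS)}")
```

Swapping providers then means changing one config string rather than touching the chat code, which is what makes the Ollama-to-cloud migration path painless.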


📺 Source: Fahd Mirza · Published March 05, 2026
🏷️ Format: Tutorial Demo
