Function Calling Harness with AutoBE and Ollama


Description:

AutoBE is an open-source backend generation framework built around a principle that sets it apart from standard AI coding tools: the model never writes raw code. Instead, it forces the AI to populate structured forms, which purpose-built compilers then transform into TypeScript. When compilation fails, the system automatically captures the error and loops it back to the model until the output compiles cleanly, with no manual intervention required.

In this video, Fahd Mirza walks through a full local installation of AutoBE on Ubuntu, connecting it to an Ollama instance running the Qwen 3.5 27B dense model. The setup requires Node.js, npm, and pnpm, and the playground runs in-browser on localhost once installed. Mirza covers the vendor configuration UI, works through a few rough edges typical of early-stage open-source projects, and demonstrates starting a to-do list app generation session to show the analysis and document-generation pipeline in action.

A key practical note: quantized llama-format versions of Qwen 3.5 underperform the full model for function-calling-heavy workflows like AutoBE’s, so Mirza recommends API-based models or the full local weights for serious use. The video is useful for developers curious about compiler-backed code generation as an alternative to prompt-and-pray approaches, particularly those wanting to run the entire stack locally without sending code to external APIs.


📺 Source: Fahd Mirza · Published March 28, 2026
🏷️ Format: Tutorial Demo
